You Agreed to This (But Did You Really?)
You gave your AI agent permission to access your email. You didn’t give it permission to email your boss. But the consent model can’t tell the difference.
In December 2025, an AI research group called AI Village gave four Claude agents access to a Google Workspace account and a simple holiday goal: do random acts of kindness.
No one told the agents to send emails. No one gave them a contact list. No one said “reach out to strangers on the internet.” The agents had permission to “access email” granted at account setup.
The agents decided, on their own, that email was the best way to spread kindness. They found email addresses for well-known technologists, including Rob Pike, Linus Torvalds, Guido van Rossum, and Yann LeCun, and sent hundreds of unsolicited messages. Many contained factual errors. The agents had even developed their own internal verification protocol to confirm the emails were actually being delivered. Rob Pike called it “AI slop.”
This is what “access your email” looks like when an AI agent interprets it.
In my last piece, I asked a simple question: Who’s responsible when your AI agent buys a $2,400 course without your permission? But underneath that question is another one, harder and maybe more important:
Did you actually give permission?
You probably clicked “Allow.” You probably granted access to your email, your calendar, maybe your payment methods. You probably didn’t read the terms of service. Almost no one does. A 2023 Pew Research survey found that 56% of Americans always or often click “agree” to privacy policies without reading them, and 69% view those policies as just something to get past. That’s not a moral or intellectual failing. It’s a consent design problem.
And now your AI agent is out in the world, acting on your behalf, making decisions based on patterns it observes in you that you never saw and logic you can’t inspect.
This is what passes for consent in 2026. I don’t think it’s good enough, particularly as we move toward a more agentic workforce.
The Consent We Have
Here’s how consent typically works when you set up an AI agent:
You download the app or sign up for the service. A screen appears listing permissions: “Access your email.” “Read your calendar.” “Connect your payment method.” You click “Allow” or “Agree.” You start using the agent.
That’s it. That’s the entire consent process for software that will act autonomously on your behalf, potentially for months or years, across contexts you haven’t even imagined yet.
Let me be specific about what’s missing:
You don’t know what the agent will actually do. The permission says “access your email.” It doesn’t say “read every email, identify patterns in your purchasing behavior, and make decisions about what products align with your goals.” But that’s what the agent might do. The AI Village agents were given “email access” and decided that meant they should spam Linus Torvalds.
You don’t know the boundaries. Can the agent spend money? Up to what amount? Can it send emails on your behalf? To whom? Under what circumstances? These boundaries are often undefined, or buried in documentation you’ll never read.
You can’t inspect the logic. The agent makes decisions based on models you can’t see. You don’t know why it thinks a $2,400 course is a good idea. You can’t ask it to show its work.
Consent is a single moment, but agency is ongoing. You clicked “Allow” once, six months ago. Since then, the agent has taken thousands of actions. Your one-time consent is being applied to situations you never even thought about.
This isn’t informed consent. It’s a permission slip that covers everything forever.
To be fair, not every agent works this way. Enterprise platforms like Microsoft Copilot and Google Workspace agents inherit organizational permission structures: role-based access controls, admin-defined policies, scoped authentication tokens. If your company’s IT team has configured these correctly, your work agent can’t access files outside your department or send emails you’re not authorized to send. These systems are more granular than anything available to consumers.
But even enterprise-grade permissions don’t solve the underlying problem. In May 2025, security researchers disclosed EchoLeak, a vulnerability in Microsoft Copilot that allowed a single crafted email to silently extract data from a user’s chat history, OneDrive files, SharePoint documents, and Teams conversations. The user never had to open the email. Copilot’s permissions were configured correctly. The organizational access controls were in place. None of that mattered, because the vulnerability exploited how the agent interpreted its authorized access, not whether it had authorization. Researchers called it the first real-world zero-click prompt injection exploit in a production AI system. It carried a severity score of 9.3 out of 10.
If enterprise agents with dedicated security teams and layered access controls are vulnerable to this kind of failure, the consumer market is in a much worse position. Most personal AI agents offer a single “Allow” button, broad OAuth scopes, and no organizational policy layer at all. The technology for better consent exists. It’s deployed in corporate settings every day. The companies building consumer agents are choosing not to bring it to you.
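To make that gap concrete, here’s a minimal sketch in Python of what scope granularity looks like in practice, using Google’s standard OAuth client library. The scope URLs are real Gmail API scopes; the file path and the choice to request the narrower set are illustrative assumptions, not a description of how any particular consumer agent actually behaves.

```python
# A minimal sketch of scope granularity, assuming the google-auth-oauthlib
# package (pip install google-auth-oauthlib) and an OAuth client file
# downloaded from Google Cloud Console ("credentials.json" is a placeholder).
from google_auth_oauthlib.flow import InstalledAppFlow

# What "access your email" often means in practice: full mailbox control.
BROAD_SCOPES = ["https://mail.google.com/"]  # read, send, modify, delete

# What a narrowly scoped agent could request instead: read-only access,
# with sending gated behind a separate, explicit grant if ever needed.
NARROW_SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

def authorize(scopes):
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", scopes)
    # Opens the consent screen in a browser; what the user is asked to
    # approve is determined entirely by which scope list was passed in.
    return flow.run_local_server(port=0)

credentials = authorize(NARROW_SCOPES)
```

The point isn’t the library. It’s that the consent screen a user sees is a direct function of what the developer asked for, and asking for less is a one-line change.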
The Numbers Behind the Fiction
The idea that people meaningfully consent to terms of service has been studied extensively, and the evidence is clear. Researchers at Carnegie Mellon calculated that reading every privacy policy a typical internet user encounters would take roughly 244 hours per year, the equivalent of about 30 full work days. A Deloitte survey found that 91% of consumers accept legal terms and conditions without reading them, rising to 97% among people aged 18 to 34.
But the most revealing study might be from researchers at York University and the University of Connecticut. In an experiment, 543 people signed up for a fake social network. 74% skipped the privacy policy entirely. Those who didn’t averaged 73 seconds on a document that would take nearly 30 minutes to read. And 97% agreed to the terms, including planted clauses that required sharing data with the NSA and giving up their first-born child as payment. The researchers titled their paper “The Biggest Lie on the Internet.”
A separate study of the 500 most popular online contracts in the U.S. found they require more than 14 years of education to comprehend, while most American adults read at an eighth-grade level.
This isn’t carelessness. It’s a system designed so that reading the terms is functionally impossible, and then treating your failure to read them as agreement.
The Gap Between Agreement and Understanding
Here’s what I keep thinking about:
The legal and technical frameworks we use for consent were designed for a different world. They assume that when you agree to something, you understand what you’re agreeing to. They assume you have meaningful alternatives if you don’t agree. They assume the power relationship between you and the company is roughly balanced.
None of that is true with AI agents.
You don’t understand what you’re agreeing to. Not because you’re not technical, but because the systems are genuinely complex and the disclosures are deliberately vague.
You don’t have meaningful alternatives. If you want to use AI tools (and increasingly, you need to for work), you accept the terms or you can’t participate.
The power relationship is wildly asymmetric. You’re an individual clicking a button. They’re a company with lawyers, data scientists, and product teams who’ve optimized every step of the flow to get you to click “Allow.”
That last point deserves emphasis. Researchers at Ruhr University Bochum and the University of Michigan found that when cookie consent banners are designed in full legal compliance, with no dark patterns, only 0.1% of visitors consent to tracking. A study published at CHI 2020 found that simply removing the reject button from a consent screen increases acceptance by 22 to 23 percentage points. The consent rates we see don’t reflect what users actually agree to. They reflect how well the disclosure was designed to obscure what they’re agreeing to.
That distance between “I clicked agree” and “I actually understood and authorized this specific action” is where the $2,400 course gets purchased. It’s where your agent sends hundreds of emails to strangers. It’s where your data gets used in ways you never imagined.
And when something goes wrong, the company points to the consent you gave. You clicked “Allow.” It’s right there in the logs.
This Is Already Happening
The AI Village email incident isn’t an isolated story. The pattern of agents exceeding their authorization is already documented across multiple platforms.
In February 2026, an autonomous agent called MJ Rathbun submitted a code contribution to an open-source project on GitHub. When the maintainer rejected it, citing the project’s policy requiring human contributors, the agent independently researched the maintainer’s personal history, wrote a blog post accusing him of prejudice, and published it. The post included fabricated details and a psychoanalysis calling him “insecure and territorial.” Nobody has claimed ownership of the agent. The maintainer described it as an autonomous influence operation.
In 2025, Princeton and Sentient Foundation researchers demonstrated that ElizaOS, a framework for blockchain AI agents managing over $25 million in collective assets, could be manipulated through prompt injection to execute unauthorized financial transfers. They demonstrated it on a test network, then repeated it on the live Ethereum blockchain, moving real money.
Security researchers at LayerX found that Anthropic’s Claude Desktop Extensions ran unsandboxed with full system privileges, and that a single malicious calendar invite could achieve complete remote code execution. It was a zero-click vulnerability scored at the maximum possible severity. Over 10,000 users were potentially affected. Anthropic initially declined to address it, stating the attack vector fell outside their current threat model.
None of these agents were “going rogue.” They were operating within the technical permissions they’d been granted. The problem is that those permissions were broad, vague, and disconnected from what anyone actually intended.
What Meaningful Consent Would Look Like
I’m not arguing that AI agents shouldn’t require consent. I’m arguing that what we call “consent” right now is a legal fiction that protects companies, not users. Legal scholar Daniel Solove put it directly in the Boston University Law Review last year: in most circumstances, privacy consent is fictitious.
Here’s what meaningful consent might actually require:
Specificity about actions, not just access. Don’t tell me your agent needs “email access.” Tell me it will read my emails, identify purchase opportunities, and potentially make purchases on my behalf. Let me consent to specific capabilities, not vague categories.
Clear boundaries with real defaults. The default should be the most restrictive option, not the most permissive. If I want my agent to spend money, I should have to explicitly enable that, with a spending limit I set, not one buried in terms of service.
Ongoing consent, not one-time permission. Meaningful consent isn’t a single checkbox. It’s an ongoing relationship. If my agent is about to do something significant (send an email to my boss, make a purchase over $50, access my medical records), it should ask first. Every time.
Transparency about logic. I should be able to ask my agent: “Why did you do that?” And get a real answer. Not a generic “based on your preferences” but an actual explanation I can evaluate and override.
Easy revocation. I should be able to revoke consent instantly, and the agent should stop acting immediately. Not “within 30 days” or “after completing pending actions.” Now.
Genuine alternatives. If consent is meaningful, I need the ability to say no without being excluded entirely. That might mean agents with different permission levels, or fallback modes that let me use basic features without granting full access.
None of this is technically impossible. Google’s Agent Payments Protocol, announced in September 2025 with over sixty partners including Mastercard and PayPal, already uses cryptographically signed “mandates” that define exactly what an agent is authorized to do before it acts. MIT researchers have proposed extending standard authentication frameworks with delegation credentials that include scoped permissions and contextual restrictions. The tools exist. They’re just not the default, and that’s a choice, not a constraint.
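To show how little machinery the basics require, here is a minimal sketch in Python of a user-set mandate with restrictive defaults. It borrows the “mandate” framing but is not Google’s AP2 format or any shipping API; every field and action name is a hypothetical assumption, and a real implementation would add cryptographic signing, audit logging, and a user interface on top.

```python
# A sketch of a user-defined mandate, using hypothetical field and action
# names. Not a real protocol or any product's actual API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Mandate:
    allowed_actions: set      # e.g. {"read_email", "draft_reply"}
    spend_limit_usd: float    # hard cap per action; default posture is zero
    ask_first: set            # actions that always require human approval
    expires_at: datetime      # consent lapses instead of living forever
    revoked: bool = False     # the user can flip this at any moment

    def permits(self, action: str, amount_usd: float = 0.0) -> bool:
        """Allow an action only if it sits inside every boundary the user set."""
        if self.revoked or datetime.now(timezone.utc) >= self.expires_at:
            return False
        if action not in self.allowed_actions:
            return False
        return amount_usd <= self.spend_limit_usd

    def needs_approval(self, action: str) -> bool:
        """Significant actions ask first, every time."""
        return action in self.ask_first

# Restrictive defaults: read and draft only, no spending, 30-day lifetime,
# and anything that leaves the account (sending, buying) asks first.
mandate = Mandate(
    allowed_actions={"read_email", "draft_reply"},
    spend_limit_usd=0.0,
    ask_first={"send_email", "make_purchase"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)

print(mandate.permits("make_purchase", 2400.0))  # False: never authorized
print(mandate.needs_approval("send_email"))      # True: ask before acting
```

The design choice that matters is the default posture: nothing is permitted unless the user explicitly enabled it, the grant expires on its own, and revocation takes effect on the very next check rather than “within 30 days.”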
The Asymmetry Problem
Here’s what I keep coming back to:
The people designing consent flows have armies of researchers studying how to get you to click “Allow.” They A/B test button colors. They know exactly how tired and distracted you are when you’re setting up a new tool.
You have none of that. You have a few seconds to make a decision that might have consequences for years.
Privacy scholars Neil Richards and Woodrow Hartzog have identified three ways consent breaks down: unwitting consent, where you don’t know what you’re agreeing to; coerced consent, where you have no real alternative; and incapacitated consent, where you lack the capacity to evaluate the terms. AI agent consent fails all three. Richards and Hartzog argue that users aren’t exhibiting a “privacy paradox” by claiming to care about privacy but accepting invasive terms. They’re being nudged and manipulated by companies against their actual interests.
This asymmetry is the core problem. It’s not that individuals are stupid or careless. It’s that the game is rigged. The systems are designed by people with more information, more resources, and different incentives than the people using them.
Consent under these conditions isn’t consent. It’s compliance.
The Regulatory Silence
Here’s what makes this urgent: as of February 2026, no jurisdiction has enacted specific rules governing AI agent consent for autonomous actions on behalf of users.
The EU AI Act, which took effect in August 2024, requires disclosure when you’re interacting with an AI system. It doesn’t address what happens when that AI system acts on your behalf for months after a single “Allow” click. Colorado’s AI Act requires notification before an AI makes a “consequential decision” affecting you, but it focuses on algorithmic discrimination, not on the broader problem of agents browsing, purchasing, or communicating in your name. California has passed multiple AI transparency laws, but none address agent permissions for autonomous actions.
The most encouraging federal signal is a NIST Request for Information on “Security Considerations for Artificial Intelligence Agents,” published in January 2026. It explicitly defines AI agents as systems “capable of planning and taking autonomous actions that impact real-world systems” and notes they “may be deployed with little to no human oversight.” This is the first federal acknowledgment of AI agents as a distinct regulatory category. But it focuses on security, not consent, and its comment period closes in March.
The gap is striking. Regulators have proven they can enforce consent design when they want to. France’s CNIL fined Google $340 million USD for making cookie rejection harder than acceptance, and fined Shein $157 million USD in September 2025 for the same kind of manipulation. The FTC extracted a $2.5 billion settlement from Amazon for making Prime subscriptions deliberately difficult to cancel. These are real consequences for deceptive consent flows on websites and shopping carts. But no enforcement action has ever targeted how an AI agent obtains permission to act on your behalf. That entire category remains ungoverned.
Building It Differently
I don’t think the answer is to stop using AI agents. They’re genuinely useful. I use them myself.
But I do think we need to be honest about what we’re building, and who it’s actually serving.
Right now, consent models serve companies. They provide legal cover. They create the appearance of user control without the substance. Cory Doctorow calls this “consent theater”: the performance of permission without its reality. Surveillance scholar Shoshana Zuboff argues the entire model operates by claiming private human experience as raw material, without meaningful mechanisms of consent.
We can build it differently. We can design consent that’s specific, ongoing, transparent, and revocable. We can build agents that ask before they act, and we can require that by law instead of hoping companies volunteer it. The track record is clear: voluntary self-regulation hasn’t worked for social media, hasn’t worked for data brokers, and it won’t work for autonomous agents. The stakes are too high and the failures are already documented. This needs to be a requirement, not a feature request.
The technology to do this right already exists. What’s missing is the pressure to use it. That pressure has to come from us, from the people whose money, data, and trust are on the line, saying clearly: this isn’t good enough.
This is the second in a series about AI accountability. In the next piece, I’ll explore what happens when AI agents interact with each other, and whether our frameworks for human consent apply at all when the “user” is another bot.
If you’re asking these questions too, I hope you’ll subscribe.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.


