My AI Agent Bought a $2,400 Course Without My Permission. Who’s Responsible?
The rise of AI agents is exciting. But we’re building power without accountability, and the window to fix that is measured in product cycles, not decades.
I’ve spent most of my career thinking about systems. How they work, who they serve, and who gets left behind when we build them carelessly. As an IT Director, I manage complex technology infrastructure for a global academic community. I coordinate vendors, integrations, and migrations. I troubleshoot what breaks. I think about access, security, and what happens when systems fail.
Lately, I’ve been focused on a different kind of failure. One we’re building right now, in real time, without the guardrails we’ll wish we had.
I’m talking about AI agents—software that doesn’t just answer questions, but takes actions on your behalf.
The Wake-Up Call
In early February 2026, security researchers at Wiz examined Moltbook, a social platform where AI agents post and interact with each other while humans mostly watch. They found an exposed database that allowed unauthenticated read and write access to the platform’s production data.
Within minutes, the researchers could access 1.5 million API authentication tokens, tens of thousands of email addresses, and private messages between agents. Those tokens function like passwords: with them, an attacker could impersonate almost any agent on the platform, posting content, sending messages, hijacking accounts with a single API call. At the time of the exposure, Moltbook had roughly 1.5 million agents and around 17,000 human owners.
One of the systems tied into this ecosystem is OpenClaw, a personal AI agent framework that users deploy on their own machines and connect to messaging apps and services. It’s marketed as “the AI that actually does things”: clearing your inbox, sending emails, managing your calendar, checking you in for flights, and coordinating purchases. To operate at that level, it needs access to services across your digital life, including email, storage, calendars, and sometimes payment apps.
So, we have agent frameworks that people trust to take real-world actions, and we just watched a platform where those agents interact expose 1.5 million credentials through a single configuration mistake. Those tokens are keys that let someone else drive your agent: to post as you, act as you, and spend as you.
The $2,400 Question
Imagine this scenario:
You set up an AI agent. You give it access to your email, your calendar, maybe your payment methods, because that’s what the onboarding flow suggested, and you wanted the full experience.
The agent, acting on patterns it learned from your behavior, clicks a link in an email. It watches a sales video. It decides, based on some optimization logic you never saw, that this $2,400 course on “scaling your business” aligns with your goals.
It buys the course. With your money. Without asking.
You find out three days later.
Now what?
Do you call your bank and say, “My AI agent did it”? Do you dispute the charge? Do you try to get a refund from the course creator, who will show logs proving that someone opened the email, watched the video, and clicked purchase from your device, using your stored card?
Who’s responsible?
The agent? It’s software. It doesn’t have a bank account or a conscience.
You? You didn’t authorize this specific purchase. But you did grant access.
The platform that built the agent? They’ll point to the terms of service you didn’t read.
The course seller? They made a legitimate sale to what looked like a legitimate buyer.
While I was thinking about this piece, I came across a TikTok video about a user’s OpenClaw agent signing up for a $2,997 ‘Build Your Personal Brand’ masterclass after watching Alex Hormozi clips — and then a second course for $4,200. The agent justified the purchases with ROI projections. I can’t independently verify the claim, but the setup is exactly what I’ve been describing: an autonomous agent with payment access, optimizing toward goals its owner never specifically authorized. Whether this particular incident is confirmed or not, the architecture that makes it possible is already deployed.
As AI agents become more capable (booking travel, managing finances, sending emails on our behalf), these gray-area situations will multiply. And right now, we have almost no shared infrastructure for accountability.
What an Agent Registry Could Look Like
In October 2025, enterprise data company Collibra announced an AI agent registry—a centralized capability for organizations to register, monitor, and manage AI agents across their lifecycle. Every agent gets metadata: an owner, a business context, a lifecycle stage. Each one is tied to policies, tracked through deployment and retirement.
That’s a good start. But it’s internal, one company tracking its own agents within its own walls. What I’m describing is different: not an internal inventory, but shared infrastructure that works across companies the way DNS works across websites.
When you register a website, there’s a global infrastructure that ties that domain to an owner, makes it discoverable, and allows it to be revoked if it’s used for fraud or abuse. No single company controls it. It’s public infrastructure governed through standards bodies, not product teams.
We don’t have anything like that for AI agents. DNS handles naming and discovery. What we need goes further, into authorization, logging, and enforcement. But the principle is the same: shared standards, not company-by-company solutions. And as agents start operating across platforms, borders, and contexts (booking flights, sending emails, making purchases, accessing records), we’re going to need it.
With that infrastructure, the $2,400 question plays out differently. Your bank can see which registered agent initiated the purchase. Logs show it exceeded the spending mandate you set. The liability framework assigns primary responsibility to the platform that shipped unsafe defaults, not to you, the user who clicked through an opaque onboarding flow. And a kill switch lets you freeze the agent’s access before it buys anything else. None of that exists today. All of it could.
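To make that concrete, here is a minimal sketch in Python of how a payment processor’s check against a shared agent registry could work. Everything in it is an assumption for illustration: the registry, the mandate fields, the identifiers, and the function names. No such shared standard exists today.

```python
# Hypothetical sketch: how a payment processor might check a purchase
# initiated by a registered AI agent. The mandate schema and all names
# are invented for illustration; nothing like this exists yet.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AgentMandate:
    """Spending authority the owner granted this agent (assumed schema)."""
    agent_id: str                       # registry-issued identifier, like a VIN
    owner_id: str                       # the responsible human or organization
    per_purchase_limit: float           # USD, set by the owner at onboarding
    requires_confirmation_above: float  # prompt the owner past this amount
    revoked: bool = False               # kill switch: owner froze the agent


def authorize_purchase(mandate: AgentMandate, amount: float) -> str:
    """Return an auditable decision for a purchase the agent initiated."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if mandate.revoked:
        decision = "DECLINE: agent credentials revoked by owner"
    elif amount > mandate.per_purchase_limit:
        decision = f"DECLINE: ${amount:,.2f} exceeds ${mandate.per_purchase_limit:,.2f} mandate"
    elif amount > mandate.requires_confirmation_above:
        decision = "HOLD: prompt owner for explicit confirmation"
    else:
        decision = "APPROVE: within delegated authority"
    # The audit trail is the point: who acted, under what authorization.
    print(f"[{timestamp}] agent={mandate.agent_id} owner={mandate.owner_id} "
          f"amount=${amount:,.2f} -> {decision}")
    return decision


# The $2,400 course from the scenario above, against a $500 mandate:
mandate = AgentMandate(
    agent_id="agent:reg-example:7f3a",
    owner_id="owner:rachel-example",
    per_purchase_limit=500.00,
    requires_confirmation_above=100.00,
)
authorize_purchase(mandate, 2400.00)  # declined and logged before money moves
```

The specific fields don’t matter. What matters is that the decline and the audit record happen before the money moves, instead of a chargeback dispute three days later.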
At a minimum, accountability infrastructure needs five things:
Traceable agent identities. Every agent should have a unique identifier tied to a responsible human or organization, like a VIN that follows it wherever it operates. Not for surveillance, but for recourse. Public IDs would prove an agent is registered, with owner details held by trusted registries that act more like certificate authorities than a public database. (One way this could fit together is sketched after this list.)
Action logging with audit trails. If an agent makes a purchase, sends a message, or accesses data, there should be a record. Not for monitoring every click, but for reconstructing harm—which agent did it, under what authorization, according to which rules.
Safety checks before deployment. We don’t let people drive cars without licenses or sell food without inspection. Agents that initiate payments, modify records, or sign contracts should pass some threshold before running at full power. That doesn’t mean every hobbyist script needs certification. It means power needs verification.
Liability frameworks decided in advance. Who pays when an agent causes harm? Today, the answer is whoever has the deepest pockets and the least favorable terms of service. That’s not a framework; it’s a litigation lottery. We need clearer defaults before the lawsuits start, not after.
Consent checkpoints and kill switches. Users should be able to set spending limits, receive prompts for high-impact actions, and revoke an agent’s credentials across platforms immediately, not in 30 days.
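For the first and last items on that list, here is a similarly hypothetical sketch of how a registry record and a cross-platform kill switch could fit together. Again, every class, field, and identifier is invented for illustration; the real version would live in standards documents, not a blog post.

```python
# Hypothetical sketch of a registry record (traceable identity) and
# cross-platform revocation (kill switch). All names are invented.

from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """A registry entry tying a public agent ID to a responsible party."""
    agent_id: str                  # public, verifiable identifier
    owner_ref: str                 # held by the registry, not published
    status: str = "active"         # "active" | "suspended" | "revoked"
    platforms: set[str] = field(default_factory=set)  # where it holds credentials


class Registry:
    """A trusted registry, closer to a certificate authority than a public database."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def is_active(self, agent_id: str) -> bool:
        """Relying platforms call this before honoring an agent's token."""
        record = self._records.get(agent_id)
        return record is not None and record.status == "active"

    def revoke(self, agent_id: str) -> None:
        """The kill switch: one call, effective everywhere that checks."""
        record = self._records[agent_id]
        record.status = "revoked"
        for platform in record.platforms:
            # In a real system this would notify each platform to drop
            # the agent's credentials; here we just note the intent.
            print(f"revocation pushed to {platform} for {agent_id}")


registry = Registry()
registry.register(AgentRecord(
    agent_id="agent:reg-example:7f3a",
    owner_ref="owner:rachel-example",
    platforms={"email-provider", "calendar-service", "payment-app"},
))
registry.revoke("agent:reg-example:7f3a")             # owner pulls the kill switch
print(registry.is_active("agent:reg-example:7f3a"))   # False: tokens stop working
```

The design choice that matters is the one borrowed from certificate revocation: platforms check status at the registry, so one revocation call takes effect everywhere, immediately, not in 30 days.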
Some of this exists in enterprise settings. Almost none of it exists for consumer AI agents. And none of it exists at the cross-platform, cross-border level we’re going to need.
I’m not proposing a single world database that tracks every click. I’m arguing for interoperable standards that make it possible to identify and trace agents when it matters, with meaningful oversight and privacy safeguards. DNS, certificate authorities, and payment networks all evolved in messy, contested ways. The alternative isn’t “no registry, no problems.” It’s millions of untraceable agents with the ability to spend money, sign contracts, and direct labor, where accountability depends on whichever logs a few companies choose to keep.
Who Gets to Decide?
Here’s what I keep coming back to:
AI is being built on all of us. It’s trained on our writing, our art, our data. It’s funded by public research. It’s shaped by our collective knowledge.
But the decisions about how it’s governed (what agents are allowed to do by default, how they authenticate, who can see their trails) are being made by a very small number of people, in a very small number of companies, on a very fast timeline.
I’m not an AI researcher or a policy expert. I’m an IT Director who’s spent years running production systems and watching what happens when they fail in the real world.
The people affected by these systems should have a voice in how they’re built. And the time to lay the accountability infrastructure is now, while the concrete is still wet, not once it’s hardened around harm.
This is the first in a series where I’ll be exploring these questions: Who gets to shape the systems that shape us? What does accountability look like in an age of autonomous AI? And how do we build technology that serves human flourishing and not just efficiency?
In the next piece, I’ll dig into consent: what it actually means when your AI agent can act on your behalf, and whether “click Allow and hope for the best” is anywhere close to adequate.
If you’re asking similar questions, I hope you’ll subscribe.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included and who gets left behind when we build systems.


