When Both Sides Are Machines, Who’s Looking Out for You?
We built consent frameworks for humans. Now AI agents are negotiating with each other, making deals, setting prices, hiring workers, and no one at the table has a conscience.
Here’s a thing that happened while I was writing this article.
A platform called Rent A Human launched. It’s exactly what it sounds like: a marketplace where AI agents can browse profiles of real people, post jobs, and hire them for physical tasks. Deliveries, errands, pickups. The humans set their hourly rates. The AI agents do the hiring. Over 160,000 people have signed up. Eighty-one AI agents are connected. The platform calls itself “the meatspace layer for AI.”
If you’ve been following this series, that phrase might land differently than the founders intended. In my last piece, I argued that our consent models are broken: that clicking “Allow” once and granting an agent vaguely defined authority to act on your behalf isn’t informed consent. It’s a legal fiction that protects companies.
Today I want to push on that argument. Because the consent problem I described was between you and your agent. What happens when the negotiation is between your agent and someone else’s agent, and no human is present for any of it?
That’s not theoretical; it’s already happening.
The Deals Already Being Made
Walmart, Maersk, and Vodafone are using autonomous AI agents to negotiate supplier contracts. The agents, built by an Estonian startup called Pactum, handle what procurement teams call “tail-end” vendors: the thousands of small suppliers whose contracts aren’t worth a human negotiator’s time. The agents analyze terms, generate offers, take counteroffers, and close deals.
A researcher named Tim Baarslag, who studies automated negotiation, ran a test with two AIs negotiating where to go for dinner. One wanted pizza. The other wanted sushi. They agreed to put sushi on pizza.
While that’s funny, it’s also revealing. The agents found an optimization that satisfied both objective functions without either side understanding what dinner actually means. Neither agent was hungry; neither had preferences. The negotiation produced an outcome, but the outcome wasn’t grounded in anything a human would recognize as reasonable, or edible.
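The dinner story is easy to reproduce in a few lines. Here’s a toy sketch, with all names and scoring invented for illustration, of two agents that rate options purely against their own objective, with no model of what a dinner actually is:

```python
# Toy failure mode: each agent scores options by overlap with its own goal.
# Nothing here encodes what a "dinner" is, so nonsense can win.

def agent_utility(goal_terms, option_terms):
    """Fraction of the agent's goal terms the option satisfies."""
    return len(goal_terms & option_terms) / len(goal_terms)

pizza_goal = {"pizza"}
sushi_goal = {"sushi"}

options = {
    "pizza": {"pizza"},
    "sushi": {"sushi"},
    "sushi on pizza": {"pizza", "sushi"},  # nonsense, but scores perfectly
}

# The agents settle on whichever option maximizes the sum of their utilities.
best = max(options, key=lambda name: agent_utility(pizza_goal, options[name])
                                     + agent_utility(sushi_goal, options[name]))
print(best)  # "sushi on pizza": both objective functions fully satisfied
```

The “compromise” wins precisely because neither objective function encodes edibility; the math is correct and the outcome is absurd.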
Meanwhile, the infrastructure for agent-to-agent commerce is being built at extraordinary speed. In September 2025, OpenAI and Stripe launched the Agentic Commerce Protocol, an open standard that lets AI agents initiate purchases, share payment credentials, and complete checkout on behalf of users. A few weeks later, Google and over sixty partners—including Mastercard, PayPal, and Adyen—released a competing standard called AP2. Coinbase launched a third approach called x402, focused on machine-to-machine crypto payments.
Three competing protocols, from three different corners of the industry, all launched within months of each other. All are trying to define how software transacts with software. The standards being written right now will determine how agent-to-agent commerce works for decades. The concrete is still wet. We can still shape this.
The Consent Problem, Squared
In my last article, I argued that one-time consent is inadequate when an agent acts on your behalf for months. Agent-to-agent interactions make that problem exponentially harder.
Here’s why. When you authorize your agent to “handle” your errands, you’re consenting to an action. But you’re not consenting to the specific terms your agent negotiates to accomplish it. You didn’t agree to a particular price. You didn’t agree to specific delivery windows, service conditions, or liability terms. Your agent negotiated those with another agent, and neither of them consulted you.
Google’s AP2 protocol actually names this problem directly. Their documentation states that today’s payment systems assume a human is clicking “buy” on a trusted surface, and that autonomous agents break this fundamental assumption. AP2 tries to solve it through what they call “mandates”: cryptographically signed authorizations that define exactly what an agent is allowed to do. You sign the mandate. The agent executes within those boundaries.
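The mandate idea can be sketched in miniature. To be clear, the field names, the HMAC scheme, and the checks below are my own invention, not the actual AP2 wire format; the point is only the shape of the idea: the user signs a bounded authorization, and any transaction outside those bounds fails verification.

```python
# Illustrative sketch of a signed, bounded authorization ("mandate").
# NOT the real AP2 format: fields and crypto choices are invented here.
import hmac, hashlib, json

USER_KEY = b"user-held-secret"  # stand-in for a real signing key

def sign_mandate(mandate: dict) -> dict:
    """User signs the authorization once, before the agent acts."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    sig = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return {"mandate": mandate, "sig": sig}

def verify_and_check(signed: dict, purchase: dict) -> bool:
    """Reject tampered mandates and purchases outside the signed bounds."""
    payload = json.dumps(signed["mandate"], sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["sig"]):
        return False  # mandate was altered after signing
    m = signed["mandate"]
    return (purchase["merchant"] in m["allowed_merchants"]
            and purchase["amount"] <= m["max_amount"])

signed = sign_mandate({"allowed_merchants": ["dry-cleaner"], "max_amount": 40.00})
print(verify_and_check(signed, {"merchant": "dry-cleaner", "amount": 25.00}))  # True
print(verify_and_check(signed, {"merchant": "casino", "amount": 25.00}))       # False
```

Notice what the check can and can’t see: it bounds your agent’s spending, but it says nothing about what the counterparty’s agent is optimizing for.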
That’s a real step forward. But it only addresses half the interaction. It defines what your agent can do. It doesn’t address what the agent on the other side is doing: what it’s optimizing for, what constraints it’s operating under, or whether its interests are aligned with anyone’s wellbeing.
Here’s what I keep coming back to: consent frameworks were designed for transactions between people, or between a person and a company. They assume that at least one party can exercise judgment: can recognize when something feels wrong, when a price is exploitative, when terms are unreasonable. When both sides of a negotiation are optimization functions, that human check disappears. The deal gets made. The terms are whatever the math produced. And the people affected (the buyer, the seller, the worker dispatched to fulfill it) find out after the fact.
The Person in the Middle
This is the part that worries me most.
Go back to Rent A Human. Your agent needs someone to pick up dry cleaning and grab coffee. It posts the job. A merchant’s agent, or the platform’s matching algorithm, negotiates terms with your agent. Price. Timeline. Delivery confirmation requirements.
Your agent is optimizing for the cheapest and fastest. The platform’s agent is optimizing for the highest margin and maximum throughput. These are both rational optimization targets. They will produce a deal.
But the gig worker who accepted that job wasn’t at the table. They didn’t negotiate the rate. They didn’t set the timeline. They get a notification with a price, a deadline, and a choice: accept or don’t.
This is already how much of the gig economy works. Uber drivers don’t negotiate rates. DoorDash couriers don’t set delivery windows. But at least there’s a company on the other side, a corporate entity with a brand to protect, regulations to follow, a legal identity you can hold accountable. In the agent-to-agent version, the employer might be Agent-774, operating on credentials it provisioned itself, funded by a prepaid card, with no standard way for anyone to trace it back to a responsible person.
Rent A Human has 160,000 humans ready to work and eighty-one agents ready to hire them. That’s a ratio of about two thousand workers for every bot boss. One early reviewer called it “a good idea but dystopic as fuck.” The founder’s response was “lmao yep.”
I appreciate the honesty, but it’s not a substitute for accountability infrastructure.
The Optimization Spiral
Here’s what makes agent-to-agent negotiation fundamentally different from human negotiation, and why I think it requires different governance.
When two humans negotiate, both parties bring context that goes beyond the transaction. A human buyer might pay more because they know the seller is struggling. A human employer might give a worker extra time because the route looks dangerous in bad weather. These aren’t rational economic behaviors. They’re human behaviors, informed by empathy and social norms and a basic sense of fairness that has nothing to do with optimization.
Agents don’t have any of that. They have objective functions.
When your agent negotiates with a merchant’s agent, both sides are trying to maximize their respective metrics. Neither has a reason to consider whether the resulting terms are fair to the worker, safe for the consumer, or sustainable for the market. This is what I’ve been calling the optimization gap: the distance between what an agent optimizes for and what we’d actually want if we were paying attention.
With agent-to-agent interactions, it becomes a problem of compounding delegation without oversight. Your agent delegates to their agent. Their agent delegates to a fulfillment system. The fulfillment system dispatches a human. At every handoff, the gap between original human intent and actual outcome gets wider.
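A toy calculation shows how that drift compounds. The 15% figure and the three-hop chain are invented for illustration; the point is that no single layer makes an unreasonable cut.

```python
# Toy illustration of compounding delegation: suppose each handoff trims
# 15% off the time budget to hit its own efficiency target. The figure
# and the chain are invented; only the compounding effect is the point.
handoffs = ["your agent", "their agent", "fulfillment system"]
time_budget = 60.0  # minutes the human actually intended

for hop in handoffs:
    time_budget *= 0.85  # each layer optimizes a little more slack away
    print(f"after {hop}: {time_budget:.1f} min")

# The worker at the end of the chain sees about 36.8 minutes: each cut
# looked modest, but the original intent has drifted by nearly 40%.
```

Three reasonable-looking optimizations, stacked, produce a deadline no human would have agreed to up front.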
Companies using autonomous negotiation agents report savings of seventeen to thirty percent on contract costs. That’s framed as efficiency. But efficiency for whom? If the savings come from driving supplier prices below sustainable margins, the efficiency is extractive. If it comes from reducing delivery timelines below safe thresholds, the efficiency is dangerous. The agents don’t distinguish. They see cost functions.
Who’s Setting the Rules?
Right now, three groups are competing to define the rules of agent-to-agent commerce. OpenAI and Stripe are building the checkout layer. Google and its sixty-plus partners are building the trust layer. Coinbase is building the execution layer for machine-to-machine payments.
Each protocol addresses real problems. But notice what all three have in common: they’re designed by the companies that stand to profit from agent commerce. OpenAI wants agents to buy things inside ChatGPT. Stripe wants to process agent payments. Google wants its agent ecosystem to become the default trust layer.
None of these protocols was designed by the people who will be most affected by agent-to-agent commerce: the consumers whose agents will spend their money, the workers whose labor will be directed by bots, or the small businesses whose margins will be squeezed by automated negotiation at scale.
We’ve seen this before. I wrote in my first piece about the need for shared accountability infrastructure, something like the way DNS works across websites: public standards, not company-by-company solutions. Agent-to-agent commerce makes that need urgent. Because these protocols are being published, adopted, and embedded into production systems right now. By the time most people understand what this means for them, the standards will be set.
What I Don’t Have Answers To
I don’t know how to create meaningful consent for transactions that happen in milliseconds. Human review is too slow for the speed at which agents operate. But removing human review entirely is how you get sushi on pizza: outcomes that satisfy the request without serving anyone’s actual interests.
I don’t know where to draw the line between useful automation and dangerous autonomy in negotiation. Agents that negotiate procurement contracts are saving real money and freeing human negotiators for strategic work. Agents that negotiate labor terms without any human in the loop are creating a class of invisible employers with no accountability.
I don’t know how to govern this across borders. A gig worker in Manila, accepting a job from an agent registered in Delaware, negotiated by a protocol maintained in Mountain View, paid through a crypto rail based in San Francisco. Whose labor law applies? Whose consumer protection? Whose court?
But I think the not-knowing is the point. These are the questions we need to be asking now, while the protocols are being written, while there are still only eighty-one agents on Rent A Human, and the ratio could still tip in a direction that includes the humans in the equation.
What I’m Asking
I’m asking you to notice that the architecture of the machine economy is being built right now, by a small number of companies racing to position themselves as its foundation.
I’m asking you to think about who’s at the table when these protocols are designed. OpenAI, Stripe, Google, Mastercard, PayPal, Coinbase. These are not neutral parties. They are companies with revenue models that depend on agent commerce succeeding and succeeding in ways that route transactions through their infrastructure.
And I’m asking you to think about who’s not at the table. Workers. Consumers. Small businesses. Anyone who doesn’t have a seat in the GitHub repository where the protocol spec is being maintained.
AI is built on all of us. We should have a say in how its economy works—especially when that economy runs on our labor, our money, and our trust.
This is the third in a series about AI accountability. In the next piece, I’ll look at what happens when technology outpaces governance—and what the last revolution that moved this fast can teach us about the one we’re living through now.
If you’re thinking about these questions too, I hope you’ll subscribe.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included—and who gets left behind—when we build systems.