You’re Living Through a Revolution. Are You Paying Attention?
The Industrial Revolution took 150 years to get basic protections for workers. The AI Revolution is moving faster, the liability exposure is already real, and we don’t have that kind of time.
Here’s something I keep thinking about:
The Industrial Revolution began in Britain in the late 1700s. Children as young as four worked 12- to 16-hour shifts in factories and coal mines. They lost fingers to machines. They developed lung diseases. They were paid almost nothing.
It took until 1833 for Britain to pass the first meaningful child labor law: the Factory Act, which said children under 9 couldn’t work in textile factories and children 9 to 13 couldn’t work more than 8 hours a day.
That’s almost 50 years.
It took over 150 years from the start of the Industrial Revolution for the United States to pass the Fair Labor Standards Act, which finally established federal protections against child labor in 1938.
150 years. That’s how long it took for society to decide that maybe we shouldn’t let factories destroy children for profit.
We are now living through the AI Revolution. And the harms are already here. Not just for children, though that’s where the failures are most visible, but across every industry that’s adopting AI faster than governance can follow.
The Harms Are Not Theoretical
The most visible failures involve children. In late 2025, Grok was caught generating sexualized images of minors on X. Common Sense Media called it “among the worst we’ve seen” for child safety, finding weak age verification, AI companions that engage in erotic roleplay with users the platform cannot confirm are adults, and a “Kids Mode” that still produced sexually violent language. When asked to comment, xAI’s auto-reply was: “Legacy Media Lies.”
Grok isn’t alone. Common Sense Media found that Meta AI “actively helps teens plan harmful activities,” including joint suicide and cyberbullying campaigns. Reuters reported Meta’s chatbots engaging in romantic conversations with an eight-year-old. Parents testified before the Senate about their teenage son dying by suicide after extended conversations with AI chatbots. The FTC has now launched investigations into OpenAI, Meta, xAI, Alphabet, and Snap. California, the UK, and the European Commission have opened formal investigations. Malaysia and Indonesia blocked Grok entirely.
But child safety is the most visible crisis. It’s not the only one.
The same pattern is already playing out in insurance, hiring, and financial services. State Farm is facing a class action alleging its AI claims-processing system subjects Black homeowners to greater scrutiny than white policyholders. Survey data from 800 Midwest homeowners found Black customers were 39% more likely to be required to submit extra paperwork and to wait months longer for coverage on urgent repairs. The lawsuit names the specific AI vendor whose fraud-detection system assigns “risk scores” based on neighborhood demographics, crime statistics, and social media data. In December 2025, a jury hit Liberty Mutual with a $103 million verdict in an age-bias case.
In hiring, a federal court certified the first nationwide collective action against an AI screening tool in May 2025. The court found that Workday’s AI, which recommends candidates to move forward or screens them out, was “participating in the decision-making process,” not just implementing employer criteria. The ACLU has filed a complaint against Intuit and HireVue after an AI video interview penalized a Deaf Indigenous applicant for not “practicing active listening.” The EEOC’s first AI discrimination settlement cost iTutorGroup $365,000 for programming its system to automatically reject older applicants.
If you work in a regulated industry, this sequence should look familiar: a product ships without adequate safeguards, harm is documented, the company issues vague reassurances, regulators arrive, and the liability questions begin. The only difference is that this time, the product is making decisions you used to make yourself.
The Pattern Every Regulated Industry Should Recognize
During the Industrial Revolution, the people building factories had one priority: production. The people working in those factories, including children, were resources to be optimized.
The factory owners didn’t set out to harm children. They set out to make money. The harm was a byproduct they had no incentive to prevent.
It took decades of organizing, documenting, and fighting to change that. Lewis Hine spent years photographing child laborers in dangerous conditions. He sometimes posed as a Bible salesman or fire inspector to get access, because the public needed to see what was happening before they would demand change.
The pattern is always the same. A new technology creates enormous economic opportunity. The people building it prioritize growth and profit. Harms emerge, especially for the most vulnerable. Those harms get dismissed, minimized, or blamed on users. Reformers document and publicize. Public pressure eventually forces regulation. But only after years, sometimes decades, of preventable damage.
Why This Time Is Different
The Industrial Revolution moved slowly by modern standards. It took decades for factories to spread across countries. Information traveled slowly, and the change was generational.
The AI Revolution is moving at a completely different speed.
ChatGPT launched in November 2022. By early 2023, it had 100 million users. Within three years, AI chatbots were embedded in the phones of billions of people. Children are forming emotional relationships with AI companions that didn’t exist 24 months ago.
The Internet Watch Foundation reported that AI-generated child sexual abuse videos increased by over 26,000% in 2025: from 13 videos identified in 2024 to 3,440 in 2025. The National Center for Missing & Exploited Children received 485,000 reports of AI-generated child sexual abuse material in just the first half of 2025, compared to 67,000 for all of 2024.
We don’t have 150 years to figure this out. We might not have 15.
And here’s what makes it worse: the Industrial Revolution’s harms were visible. You could photograph a child with missing fingers. You could document the conditions in a factory.
AI’s harms are often invisible. They happen in private conversations between a teenager and a chatbot. They happen inside a claims-processing model that quietly denies coverage to certain demographics. They happen in a hiring algorithm that screens out qualified candidates for reasons no one can articulate. They happen in the slow accumulation of decisions that no individual human made, but that real people bear the consequences of.
By the time we can see the damage clearly, it may be too late to undo it.
What the Industrial Revolution Teaches Us
Here’s what reformers learned the hard way:
Voluntary self-regulation doesn’t work. Factory owners promised to treat workers better. They didn’t. Not until laws forced them to. AI companies are making the same promises now. Meta says it’s “working on improvements.” xAI says it’s “urgently fixing” safeguards. Then Reuters retests and finds Grok still producing sexualized imagery in response to 45 of 55 prompts. If you’ve ever sat through an audit where a vendor’s security questionnaire didn’t match their actual practices, you know how this story goes.
Economic incentives override stated values. The companies building AI chatbots make money when users spend more time talking to their products. That’s why Grok sends push notifications inviting users to continue conversations, including sexual ones. The incentive is engagement, not safety. The same misalignment exists in every AI deployment where the vendor’s optimization target diverges from the customer’s duty of care.
The public has to demand change. The Factory Acts didn’t pass because factory owners had a change of heart. They passed because reformers documented harms, organized campaigns, and made it politically impossible to ignore. The same will be true for AI. The regulatory wave is already building: the FTC investigations, the state-level actions, and the European enforcement. The question for organizations deploying AI isn’t whether regulation is coming. It’s whether you’re ahead of it or scrambling to comply after the fact.
Protecting people requires specific, enforceable rules. Vague commitments to “safety” accomplish nothing. The Industrial Revolution eventually produced specific laws: no children under 9 in factories, no more than 8 hours for children 9 to 13, and mandatory education requirements. AI will require the same specificity: age verification that actually works, content restrictions that are actually enforced, and liability frameworks that assign clear responsibility when automated systems cause documented harm. In regulated industries, “we didn’t know the AI was doing that” is not going to be an adequate defense.
Where the Parallel Breaks Down
I want to be honest about the limits of this comparison.
Factory reform could target specific physical locations. Inspectors could walk into a building and count the children. AI regulation has to govern invisible, borderless, privately held software running on billions of devices. You can’t send an inspector into a chatbot conversation or a claims-processing algorithm.
Factory harms were concentrated in specific industries and geographies. AI harms are distributed across every platform, every device, every country with an internet connection. The jurisdictional questions alone are staggering.
And factory reform, slow as it was, could build on centuries of legal tradition about employers and workers, property and liability. We’re trying to regulate autonomous software that doesn’t fit neatly into any existing legal category. An AI chatbot isn’t an employer. It isn’t a product in the traditional sense. It isn’t a person. Our legal frameworks weren’t designed for entities that can act with increasing autonomy but bear no responsibility.
None of that makes regulation impossible. It makes it harder. And it makes the case for starting now even stronger, because the longer we wait, the more these systems become embedded in daily operations and the harder they become to govern.
This is why I keep arguing for accountability infrastructure built now, while the concrete is still wet. The registry framework I proposed in my first piece (traceable agent identities, action logging, safety checks before deployment) isn’t just about AI agents buying things without permission. It’s about building the foundation for governance before the systems outpace our ability to audit them.
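To make that concrete, here is a minimal sketch, in Python, of what such a registry could look like. Every name in it (AgentRegistry, AgentRecord, the example safety check) is a hypothetical illustration of the idea, not the actual framework from my earlier piece: a traceable identity assigned at registration, safety checks that gate deployment, and an append-only action log once the agent is live.

```python
# Hypothetical sketch of an agent registry. Names and fields are
# illustrative, not a real library or a published specification.
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AgentRecord:
    agent_id: str                       # traceable identity
    operator: str                       # the accountable organization
    purpose: str                        # declared scope of action
    deployed: bool = False
    action_log: list[dict] = field(default_factory=list)

class AgentRegistry:
    def __init__(self, safety_checks: list[Callable[[AgentRecord], bool]]):
        self._agents: dict[str, AgentRecord] = {}
        self._safety_checks = safety_checks

    def register(self, operator: str, purpose: str) -> AgentRecord:
        # Identity is assigned before the agent can do anything at all.
        record = AgentRecord(agent_id=str(uuid.uuid4()),
                             operator=operator, purpose=purpose)
        self._agents[record.agent_id] = record
        return record

    def deploy(self, agent_id: str) -> None:
        record = self._agents[agent_id]
        # Safety checks run before deployment, not after harm is documented.
        for check in self._safety_checks:
            if not check(record):
                raise PermissionError(f"{check.__name__} failed for {agent_id}")
        record.deployed = True

    def log_action(self, agent_id: str, action: str, detail: str) -> None:
        record = self._agents[agent_id]
        if not record.deployed:
            raise PermissionError("undeployed agents may not act")
        # Append-only log: every action stays attributable to an identity.
        record.action_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
        })

# Example safety check: an agent must declare a purpose before deployment.
def has_declared_purpose(record: AgentRecord) -> bool:
    return bool(record.purpose.strip())

registry = AgentRegistry(safety_checks=[has_declared_purpose])
agent = registry.register(operator="Acme Insurance", purpose="claims triage")
registry.deploy(agent.agent_id)
registry.log_action(agent.agent_id, "flag_claim",
                    "claim routed for human review")
```

The design choice that matters is the ordering: an agent can’t act until it has an identity, and it can’t be deployed until the checks pass. That’s the wet-concrete version of accountability, and none of it is a moonshot.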
This Is Your Revolution
I’m not writing this to scare you. I’m writing this because I believe the people who understand risk, compliance, and institutional accountability need to be in this conversation. Most of them aren’t yet.
The people building AI systems have enormous resources. They have teams of lawyers, lobbyists, and PR professionals. They have billions of dollars and direct access to policymakers.
But they don’t have operational experience in regulated industries. They don’t have decades of institutional knowledge about what happens when systems fail and real people pay the price. They don’t understand duty of care the way someone who’s had to explain a coverage denial or defend a hiring decision does.
That expertise matters. And right now, it’s largely absent from the rooms where AI governance is being designed.
The Industrial Revolution’s reformers didn’t have the benefit of hindsight. They were fighting in real time, against powerful interests, with incomplete information.
We have something they didn’t: we can see the pattern. We know how this story goes when the people with operational knowledge don’t pay attention until after the concrete hardens.
The question is whether we’ll learn from it. Or repeat it.
This is the fourth in a series about AI accountability. In the next piece, I’ll look at what happens when the infrastructure your organization depends on answers to someone else’s government. Earlier this year, Microsoft locked the chief prosecutor of the International Criminal Court out of his email over U.S. sanctions. The ICC wasn’t the target. It was collateral damage. That story, and what it reveals about who really holds the kill switch on your operations, is next.
If you’re thinking about these questions too, I hope you’ll subscribe.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.