<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Rachel Ankerholz]]></title><description><![CDATA[Exploring the question of who shapes the systems that shape us and who gets left behind]]></description><link>https://uncheckedai.rachelankerholz.com</link><generator>Substack</generator><lastBuildDate>Mon, 04 May 2026 12:04:53 GMT</lastBuildDate><atom:link href="https://uncheckedai.rachelankerholz.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Rachel Ankerholz]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[uncheckedai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[uncheckedai@substack.com]]></itunes:email><itunes:name><![CDATA[Rachel Ankerholz]]></itunes:name></itunes:owner><itunes:author><![CDATA[Rachel Ankerholz]]></itunes:author><googleplay:owner><![CDATA[uncheckedai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[uncheckedai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Rachel Ankerholz]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Four Algorithms, One Patient: The Cascade Nobody Maps]]></title><description><![CDATA[A single Medicare claim now passes through up to four AI systems before a human ever opens the file. Each layer is &#8220;compliant&#8221; on its own. None of them is accountable for the patient.]]></description><link>https://uncheckedai.rachelankerholz.com/p/four-algorithms-one-patient-the-cascade</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/four-algorithms-one-patient-the-cascade</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 04 May 2026 00:26:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ic4v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ic4v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ic4v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ic4v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2744074,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.rachelankerholz.com/i/196362699?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ic4v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!Ic4v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff033e24-f855-4459-adc4-47830cd05d1b_1672x941.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Dr. Elisabeth Potter is roughly $5 million in debt.</p><p>Her husband cashed out his 401(k) to keep her surgery center running. UnitedHealthcare, the second-largest insurer in Texas, declined to add the center to its in-network list. Potter remains in the network as a surgeon. Her facility is not. She cannot operate there on UHC patients without leaving them with the full bill.</p><p>This is the same Dr. Potter who scrubbed out of a breast cancer surgery in January 2025 to take a phone call from UnitedHealthcare. The surgery had been pre-approved. The caller wanted to know if the patient asleep on the table needed an overnight stay. UHC denied the overnight anyway. According to Potter, her patient went home eight hours after a bilateral DIEP flap reconstruction. UnitedHealthcare disputes her account, attributing the call to a clerical error.</p><p>I wrote about that call in Vol. 12. What I want to write about now is what came after.</p><p>UHC sent Potter a defamation threat letter from Clare Locke, the firm that served as lead counsel in Dominion&#8217;s $787.5 million settlement against Fox News, demanding she take her TikTok down.</p><p>She didn&#8217;t. UHC then declined the network contract for her surgery center.</p><p>Same insurance company. Four different decisions. Four different parts of its operation. One surgeon. Same outcome.</p><p><strong>Authorize. Deny. Threaten. Block.</strong></p><p>That is not a process. That is an architecture.</p><p><strong>A single Medicare Advantage claim can pass through four AI systems before a human ever opens the file.</strong></p><p>The hospital codes the encounter. The payer evaluates the claim. The pharmacy benefit manager authorizes the prescription. And as of January 1, 2026, the Centers for Medicare &amp; Medicaid Services (CMS) reviews it.</p><p>Each layer is built by a different vendor. Each is trained on different data. Each optimizes for a different metric. Each defines &#8220;human review&#8221; differently.</p><p>None of them coordinates with the others.</p><p>None of them is accountable for the patient.</p><p>I manage technology infrastructure for a global academic community. The first question I ask of any new vendor is what other systems their tool touches and where accountability lives when something goes wrong.</p><p>In healthcare claims right now, the honest answer is that nobody, not the insurer, not the hospital, not the patient, not the IT Director, has mapped the cascade end-to-end.</p><p>Let me try.</p><h2>Layer One: The hospital&#8217;s AI codes the visit</h2><p><strong>It is not predicting clinical reality. It is predicting which diagnoses pay.</strong></p><p>Blue Cross Blue Shield published an analysis in March 2026. One facility&#8217;s billing complexity rating jumped 6.7 percent after announcing it would adopt AI for medical coding. Other facilities in the same state moved 0.9 percent over the comparable period. Across hospitals identified as likely AI adopters, complex-coded admissions rose by an average of 13.1 percentage points.</p><p>Blue Cross attributes $663 million in additional inpatient spending to AI-driven coding over a three-year period.</p><p>The most damning specific finding: AI coding tools were classifying new mothers as having severe acute posthemorrhagic anemia in cases where no transfusion ever occurred. That single diagnostic pattern added $22 million to maternity admission costs in one year.</p><p>The model is not lying. The model is doing what it was built to do, which is to find the highest-paying compliant code for the documented encounter.</p><p>Whether the patient actually had the condition is a separate question and not one the model is asked to answer.</p><h2>Layer Two: The payer&#8217;s AI evaluates the claim</h2><p><strong>It is not predicting medical necessity. It is predicting what a cost-pressured reviewer would have denied.</strong></p><p>This is the Vol. 11 argument extended. Cigna&#8217;s PXDX, UnitedHealthcare&#8217;s nH Predict, EviCore&#8217;s prior auth tools. All trained on historical claims data.</p><p>Trained on historical claims data means trained on prior decisions. Which means trained on the outputs of human reviewers operating under the same cost-control incentives, the algorithm is now automating.</p><p>The model is not learning what good care looks like. It is learning what previous reviewers under quota pressure decided to deny.</p><p>In March 2026, a federal court ordered UnitedHealth to produce documents across six of seven discovery categories in the nH Predict class action. The discovery includes performance evaluations and compensation records for medical directors. The identities of the company&#8217;s internal AI review board. Documents back to January 2017, pre-dating the deployment of nH Predict.</p><p>The court rejected UnitedHealth&#8217;s argument that pre-deployment records were irrelevant. UnitedHealth disputes the plaintiffs&#8217; characterization of the model.</p><p>What is not in dispute is that hospitals know exactly what is happening on the other side of the cascade.</p><p>Andrew Asher, Centene&#8217;s chief financial officer, said the quiet part out loud at the Deutsche Bank Healthcare Summit in September 2025. Hospitals had gotten better organized around AI for coding than payers had. &#8220;We&#8217;re going to catch up to that,&#8221; he said.</p><p>That is the cascade in one sentence, from a CFO. A documented arms race, named on both sides, with the patient sitting in the middle of a transaction nobody designed.</p><h2>Layer Three: The pharmacy benefit manager&#8217;s AI authorizes the prescription</h2><p><strong>The voluntary commitment to reform prior auth does not cover this layer.</strong></p><p>Optum Rx, UnitedHealth&#8217;s pharmacy benefit manager (PBM), runs a tool called PreCheck. According to UnitedHealth, it cut prescription approval time from over eight hours to a median of 29 seconds.</p><p>UnitedHealth projects nearly $1 billion in AI savings in 2026.</p><p>In June 2025, America&#8217;s Health Insurance Plans (AHIP) and the Blue Cross Blue Shield Association announced a voluntary commitment to streamline prior authorization. Fifty insurers covering roughly 257 million Americans signed on. By April 2026, the industry reported eliminating 11 percent of prior authorization requirements, or about 6.5 million fewer requests per year.</p><p>That is real. I want to acknowledge it.</p><p>I also want to be honest about what the pledge does not do.</p><p>The pledge has no enforcement. It does not specify what fraction of remaining authorizations are AI-decided. It does not require disclosure of which model decided, or what its overturn rate is.</p><p>And it does not cover pharmacy benefits, which is exactly where the PBM layer of the cascade sits.</p><p>CMS required payers to publish aggregated 2025 prior authorization metrics by the end of March 2026. KFF analyzed the data when it landed and found that it did not actually explain what drove approvals or denials.</p><p>Disclosure that does not let you trace a decision is not informing you. It is performing.</p><h2>Layer Four: The regulator joined the cascade</h2><p><strong>On January 1, 2026, traditional Medicare became an experiment too.</strong></p><p>CMS launched the Wasteful and Inappropriate Service Reduction model, called WISeR, in six states: Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington.</p><p>Six private technology vendors process prior authorization requests for fifteen Medicare Part B services: Cohere Health, Genzeon, Humata Health, Innovaccer, Virtix Health, and Zyter.</p><p>These vendors are not paid a flat fee.</p><p><strong>They earn a percentage of the savings their denials generate.</strong></p><p>Read that sentence again.</p><p>The vendor in Texas, Cohere Health, says its technology is &#8220;never used to deny care, but rather to automate approvals.&#8221; Healthcare Uncovered reported in April 2026 that 62 percent of WISeR prior authorizations in Texas were being approved on the first try.</p><p>Both of those things cannot be true at the same time.</p><p>The American Hospital Association asked CMS for a six-month delay before launch. The House Appropriations Committee approved an amendment to block WISeR funding. It did not survive final budget negotiations. Don Berwick, the former CMS administrator, called WISeR an import of &#8220;the bureaucratic, wasteful, and risky processes of permission-seeking&#8221; that have plagued Medicare Advantage for years.</p><p>CMS launched it on schedule.</p><p>For most of the last decade, the compliance posture for healthcare AI in the United States assumed traditional Medicare was the floor, and Medicare Advantage was the experiment.</p><p>WISeR inverts that. For services covered in those six states, traditional Medicare is now the experiment too.</p><p>When a denial reaches the patient, it cites one criterion and one policy clause.</p><p>It does not say which algorithm flagged the claim. It does not say which training data the algorithm used. It does not say whether a human ever opened the file. It does not say which vendor licensed the model to the insurer. It does not say what the appeal-overturn rate is for this category of denial.</p><p>The patient is supposed to appeal. The 0.2 percent who do appeal succeed at very high rates.</p><p>The other 99.8 percent never see the layer they were fighting.</p><p>This is not a transparency problem at the level of any single AI system. It is a transparency problem at the level of the architecture.</p><p>I want to be specific about something, because the shape of this argument matters.</p><p>If you work inside a payer, a hospital revenue cycle, or a PBM, you are not the villain of this piece. The incentives you operate inside are the villain.</p><p>Claims volume is real. Coding ambiguity is real. Fraud is real. An insurer that never reviews claims gets exploited. A hospital that does not optimize coding leaves money on the table that its competitors are taking.</p><p>I know that. What I want to trace is the gap between any single legitimate function and the industrial pattern they have become when stacked on top of one another.</p><p>The hospital&#8217;s coder is doing her job. The payer&#8217;s reviewer is doing his. The PBM&#8217;s algorithm is doing what it was built to do. The CMS pilot vendor is fulfilling a contract.</p><p>Every layer is operating within its own logic, against its own metric, defended by its own vendor&#8217;s compliance documentation. The aggregate effect is a denial cascade that no individual layer is responsible for.</p><p>In Vol. 11, I argued that an AI with a 90 percent error rate scales only when the errors are profitable.</p><p>The cascade extends that.</p><p><strong>An AI cascade scales only when no single layer has to answer for the whole.</strong></p><p>WISeR just adopted that architecture as federal policy.</p><h2>What your AI vendor cannot answer</h2><p>If you are procuring AI for a payer, a hospital, or a PBM in 2026, three questions will determine whether you have a vendor problem or a documentation problem that becomes a liability problem.</p><p>What is the documented overturn rate of this model on appeal, by category of decision?</p><p>What disclosure language do we provide to patients when this model contributed to a denial?</p><p>When the discovery order arrives, and discovery orders are arriving, what records can we produce about how this model was used, by whom, and against which performance metrics?</p><p>A legal analysis of the UnitedHealth discovery order from Stephenson Acquisto &amp; Colman put it bluntly. Provider organizations that integrate AI into clinical workflows, prior authorization support, utilization management, or care coordination face meaningful legal risk if that AI functions as a decision-maker rather than a decision-support tool.</p><p>That distinction is doing a great deal of work in 2026 contracts. Most of the contracts I have read do not draw it cleanly.</p><p>If your vendor cannot answer those three questions, you do not have a vendor problem.</p><p>You have a documentation problem.</p><p>And the discovery order showing up in your industry just promoted that documentation problem to a liability problem.</p><h2>What I do not know</h2><p>I do not have a clean position on whether voluntary commitments like the AHIP pledge can produce the disclosure the cascade requires. The reductions are real. The transparency is not.</p><p>I do not know who audits across vendors. The same handful of payment integrity firms (Cotiviti, Optum, Zelis, MultiPlan, EquiClaim) operate across competing insurers. One model&#8217;s bias affects millions of patients across plans that appear to compete with each other but are running the same engine underneath. There is no public registry of which insurer uses which vendor for which decision type.</p><p>And I do not know how the Potter case ends. The retaliation arc, the network exclusion, the bankruptcy pressure, the defamation threats. None of it is settled. UnitedHealthcare disputes her account. The litigation is ongoing.</p><p>What I can say is that the pattern of single physicians being financially squeezed for documenting how the cascade affected their patients is not unique to her.</p><p>It is unusual only because she said it out loud.</p><p>If a bank approved your mortgage, let you close on the house, and three weeks later, a different department of the same bank sent you a letter saying it had re-reviewed the documents and actually you don&#8217;t qualify, and a third department flagged your account, and a fourth froze your line of credit, we would not call that a process.</p><p>We would call it fraud.</p><p>In healthcare, we call it utilization management.</p><p>And as of this year, four AI systems are running it in parallel.</p><p>If you have received a denial after a service was already authorized, or your prescription was denied at the pharmacy benefit manager, or your prior authorization was denied by a WISeR vendor in one of the six pilot states, the appeal rules are the same ones I described in Vol. 11. You have the right to appeal, the right to external review, and the right to see the specific clinical criteria the insurer applied.</p><p>Overturned will generate an appeal letter from your denial documents. It&#8217;s free, no login, no storage of your records.</p><p>The tool lives at rachelankerholz.com/tools/overturned. Cascade denials are explicitly in scope.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://uncheckedai.rachelankerholz.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Authorized, Operated, Denied: The Approval That Wasn't]]></title><description><![CDATA[The insurance company approved the surgery. The surgeon performed it. The denial arrived after. The only new information in the file was proof the surgery happened.]]></description><link>https://uncheckedai.rachelankerholz.com/p/authorized-operated-denied-the-approval</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/authorized-operated-denied-the-approval</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 20 Apr 2026 21:16:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CHHD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CHHD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CHHD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CHHD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2711044,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.rachelankerholz.com/i/194844165?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CHHD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 424w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 848w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 1272w, https://substackcdn.com/image/fetch/$s_!CHHD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9be53c49-867c-4459-9207-2a5f568d21bf_1672x941.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On January 7, 2025, Dr. Elisabeth Potter was in the middle of a bilateral DIEP flap breast reconstruction &#8212; a complex surgery for a cancer patient &#8212; when a call came into the operating room.</p><p>The caller was a UnitedHealthcare representative. Her patient, already asleep on the table, had been pre-approved for the surgery. What the rep wanted to know was whether the patient&#8217;s overnight inpatient stay was justified.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Potter scrubbed out to take the call. The UHC representative on the line did not have access to the patient&#8217;s medical records.</p><p>UnitedHealthcare denied the overnight stay anyway.</p><p>When Potter posted about it on TikTok, the video got 5.5 million views. She then received a defamation threat letter from Clare Locke, the same firm that won the $787 million Dominion judgment against Fox News, demanding she take it down and apologize.</p><p>She didn&#8217;t. UnitedHealthcare then declined to add her new surgery center to its in-network list, a decision she says has put her $5 million in debt and forced her husband to cash out his 401(k).</p><p>Scrubbing in is the point of no return. The patient has committed. The anesthesia team has committed. The decision the insurance company is making in that moment is not about whether the surgery happens. It is about who pays for how long the patient recovers.</p><p>Potter&#8217;s call is not an outlier. It is the loud version of a pattern that plays out quietly every day in post-op recovery rooms and billing offices across the country.</p><div><hr></div><p>A patient gets home from the hospital. Eight days post-op. The mail arrives.</p><p>Her claim has been denied pending additional clinical review. The letter asks for medical records her surgeon&#8217;s office already sent during prior authorization.</p><p>She calls billing. The coordinator has heard this before. The coordinator has a folder of these letters going back years.</p><p>The only new document in the file is the procedure note. The same insurer that approved the surgery now wants to re-review whether the surgery it approved was medically necessary &#8212; using clinical information it already had, adding only the evidence that its approval was acted on.</p><p>Three weeks becomes three months. The patient&#8217;s credit takes a hit. The surgeon&#8217;s office writes off a percentage. Somebody hit a quarterly number.</p><div><hr></div><p>The industry has a word for this. It almost never reaches the patient.</p><p>It is called retrospective denial. KFF Health News documented the pattern in a case involving the Markley family, who incurred medical debt after Anthem Blue Cross and Blue Shield revoked preapproval for a battery of tests performed at the Mayo Clinic. When insurers pull the decision to pay after the service is completed, patients are legally on the hook for the bill.</p><p>Martha Gaines, who directs the Center for Patient Partnerships at the University of Wisconsin Law School, co-authored a JAMA piece on this in 2020. &#8220;How broken can you get?&#8221; she asked. &#8220;How much more laid bare can it be that our health care insurance system is not about health, nor caring, but just for profit?&#8221;</p><p>That was 2020. It has not improved.</p><div><hr></div><p>I want to be specific about something, because the shape of this argument matters.</p><p>If you work inside a health plan running utilization management, you are not the villain of this piece. The incentives you operate inside are the villain. Claims volume is real. Fraud is real. Documentation errors are real. An insurer that never reviews claims is an insurer that gets exploited.</p><p>I know that. I want to trace the gap between a legitimate review function and the industrial pattern it has become.</p><p>Retrospective review was originally a fraud control. It asked: Did the provider bill for a service they didn&#8217;t perform? Does the documentation match the procedure? Was the patient eligible on the date of service? These are good questions.</p><p>What retrospective review has become, at scale, is a second medical necessity decision on claims the plan has already approved. That is a different function. It runs on the assumption that the authorization was provisional and that the real decision happens after the money is at stake.</p><div><hr></div><h2><strong>Two algorithms are running. The harm is in the gap between them.</strong></h2><p>The first algorithm approves. Cigna&#8217;s PXDX system processed approximately 300,000 denials in two months. Medical reviewers allegedly spent an average of 1.2 seconds per case.</p><p>At 1.2 seconds, there is no review happening. The algorithm makes the decision. The human clicks &#8220;approve&#8221; on what the algorithm has already decided, which is how the insurer meets the &#8220;human in the loop&#8221; requirement on paper while functionally automating the outcome. The human is not reviewing the algorithm. The human is ratifying it.</p><p>The second algorithm denies. Post-service claims get screened against different rules, often by different vendors. 46 percent of healthcare organizations already use AI for revenue cycle management. Another 49 percent plan to within a year.</p><p>Cotiviti partners with more than 100 health plans on payment accuracy, explicitly marketing retrospective review and AI-enabled clinical chart validation. Optum, the UnitedHealth subsidiary that acquired Change Healthcare, runs revenue cycle tools used across the industry. Zelis, MultiPlan, and EquiClaim work the same territory. One Cotiviti case study claims a Blue Plan achieved &#8220;triple its original projected findings&#8221; after adopting their AI-powered clinical review.</p><p>The AI making the denial is often not built by the insurer named on the letterhead. It is licensed from a vendor whose product, marketed in those exact terms, is more denials sustained on appeal.</p><div><hr></div><h2><strong>Now, the training data question, which nobody is asking loudly enough.</strong></h2><p>The AI systems running authorization and retrospective denial are trained on historical claims data. Which means they are trained on prior decisions. Which means they are trained on the outputs of human reviewers who were themselves operating under the same cost-control incentives.</p><p>The model is not predicting medical necessity. The model is predicting what a cost-pressured reviewer would have denied.</p><p>That is a critical distinction. A model trained on historical denials learns to reproduce historical denials. If the training data contains systematic bias against expensive procedures, older patients, or complex cases, the model does too. The bias gets encoded, then laundered through a layer of algorithmic objectivity, then sold back to the same industry that generated it.</p><p>This is the pattern Cigna&#8217;s PXDX, UnitedHealthcare&#8217;s nH Predict, and EviCore&#8217;s prior-auth tools share. They are not performing a medical review. They are automating the inherited judgment of reviewers whose incentives were never aligned with the patient in the first place.</p><div><hr></div><h2><strong>And now the detail that broke me.</strong></h2><p>According to reporting on UnitedHealth&#8217;s internal practices, the company explored using AI to predict <em>which denials were likely to be appealed, and which of those appeals were likely to be overturned.</em></p><p>Read that sentence twice.</p><p>That is not an algorithm predicting medical necessity. That is an algorithm predicting who will fight back. And denying accordingly.</p><p>The logic is simple. If the model predicts an appeal is unlikely, deny. If the model predicts an appeal would likely succeed, deny anyway if the patient is unlikely to file one. A KFF analysis found that in 2021, only 0.2 percent of denied claims were ever appealed.</p><p>0.2 percent. The model does not need to be right. It needs to be right often enough to withstand the 0.2 percent of cases where someone fights.</p><p>nH Predict, the algorithm at the center of the UnitedHealth class action, has an alleged 90 percent reversal rate on appeal. Ninety percent. A coin flip would do better. The reason it gets deployed anyway is that nine out of ten denied patients never appeal.</p><p>A 90 percent error rate is only broken if the errors cost the company something. For 99.8 percent of patients, they don&#8217;t.</p><div><hr></div><h2><strong>The speed asymmetry is the whole game.</strong></h2><p>Denial runs at algorithmic speed. Appeal runs at human speed. The insurer&#8217;s system flags a claim in milliseconds. The appeal takes the patient weeks of phone calls, records requests, and letters. The provider&#8217;s billing team appeals in aggregate because they don&#8217;t have the labor to fight every denial individually.</p><p>Between an algorithm that denies in milliseconds and a human who appeals in months, the house always wins. Not because the algorithm is right. Because the patient gave up, or died, or paid.</p><div><hr></div><p><strong>I manage technology infrastructure for a global academic community.</strong> When I think about what would happen if my systems operated the way insurance claims review operates, I do not have to guess. It would be a disaster.</p><p>Imagine a system that approved a user&#8217;s access to a resource, let them use it, and three weeks later revoked the approval retroactively and charged them for the time they had already spent. In my world, that is not a utilization management program. That is a breach of contract, an incident report, and a board conversation.</p><p>When I look at the retrospective denial pattern, I see the same infrastructure failure. A commitment was made. The commitment was relied on. The commitment was revoked after reliance had created harm.</p><div><hr></div><p>A 2024 Premier survey found an average of 3.2 percent of all claims denied had been pre-approved via prior authorization. More than 54 percent of denied claims were ultimately paid after appeal. Denials skewed to high-cost claims, averaging charges of $14,000 and up.</p><p>Three details from that survey matter. First, 54 percent of denied claims eventually get paid, which tells you the denial was wrong. Second, the denials concentrate on expensive claims, which tells you the optimization target. Third, the denials concentrate on procedures that were already pre-approved, which tells you what authorization is actually worth.</p><p>A Senate Permanent Subcommittee on Investigations report criticized UnitedHealthcare, Humana, and CVS for using AI automation to deny Medicare Advantage post-acute care. The report raised concerns about substituting medical necessity with financial calculations.</p><p>That investigation focused on front-end denials. The back-end version &#8212; retrospective denial, running through licensed vendor AI &#8212; has not received the same scrutiny.</p><div><hr></div><p>If a bank approved your mortgage, let you close on the house, and three weeks later sent a letter saying they had re-reviewed your application and actually you don&#8217;t qualify &#8212; we would not call that a process. We would call it fraud.</p><p>In healthcare, we call it utilization management. And we let AI run it.</p><div><hr></div><h2><strong>What I Don&#8217;t Have Answers To</strong></h2><p>I don&#8217;t know how to distinguish, at policy scale, legitimate retrospective review from the pattern I am describing. Fraud happens. Upcoding happens. Documentation errors happen.</p><p>I also don&#8217;t know who audits the second algorithm. Cotiviti, Optum, Zelis, MultiPlan, EquiClaim &#8212; these vendors operate across dozens of insurers, which means one model&#8217;s bias affects millions of patients across plans that appear to compete with each other but are all running the same denial engine underneath.</p><p>When a denial letter cites &#8220;additional clinical review,&#8221; I cannot tell you which vendor&#8217;s model flagged the claim or what criteria it applied. Neither can the patient. Neither, in many cases, can the appeals coordinator at the insurance company.</p><p>And I don&#8217;t have a clean answer for Dr. Potter. She did the right thing. She scrubbed out to take the call because she knew if she didn&#8217;t, the stay would be denied. She is now $5 million in debt and facing defamation threats for saying so publicly. I don&#8217;t know how to build an incentive structure that doesn&#8217;t punish that decision.</p><div><hr></div><p>If you have a denial letter for a service that was previously authorized, the appeal rules are the same ones I described in Vol 11. You have the right to appeal, the right to external review, and the right to see the specific clinical criteria the insurer applied.</p><p>Overturned will generate an appeal letter from your denial documents. It&#8217;s free, no login, no storage of your records.</p><p>The tool lives at rachelankerholz.com/tools/overturned. Retrospective denials are explicitly in scope.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The AI Wrote That You Consented. Your Chart Says So.]]></title><description><![CDATA[Vendor contracts put consent obligations on providers, not vendors. A class action says patients were never informed. Their charts say they were. The AI wrote that part.]]></description><link>https://uncheckedai.rachelankerholz.com/p/the-ai-wrote-that-you-consented-your</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/the-ai-wrote-that-you-consented-your</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 13 Apr 2026 03:33:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!g1e1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!g1e1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!g1e1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!g1e1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2965709,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.rachelankerholz.com/i/194030389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!g1e1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!g1e1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed5aa3ba-3994-4ff6-9aab-04726235bc28_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Your health system probably signed an ambient AI scribe contract in the last 18 months. Maybe it was Abridge. Maybe Nuance DAX Copilot. Maybe Ambience Healthcare. The deal came through clinical operations, or IT, or both. Your legal team may or may not have seen the final BAA. Your patients almost certainly saw nothing at all.</p><p>That gap is now a class action lawsuit.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In July 2025, Jose Saucedo went to Sharp Rees-Stealy Medical Group for a routine physical. He spoke with his doctor. He left. A few weeks later, he logged into his patient portal to review his visit notes. His medical record stated that he had been &#8220;advised&#8221; that the visit was being audio recorded. It said he had &#8220;consented.&#8221; Neither thing had happened. The recording had been made, transmitted to a third-party vendor&#8217;s cloud, and processed by an AI tool called Abridge. The consent documentation in his chart was, according to the lawsuit he filed in November 2025, false.</p><p>The proposed class action covers anyone who had a medical visit with Sharp on or after April 1, 2025, the date Sharp announced its Abridge partnership. That is potentially over 100,000 patients.</p><p>The AI didn&#8217;t just record him without his knowledge. <em>It wrote a consent record proving he had agreed to something he never agreed to</em>. That is not a failure at the edges of how this technology works. That is the system performing exactly as designed, with no one watching the output.</p><p>Before we get to the law, here&#8217;s the operational problem your board needs to understand.</p><p>Once the recording is made, your health system is in one of two bad positions, and your vendor contract put you there.</p><p>Delete the audio. Many vendors do this within 30 days. Privacy problem addressed, at least technically. But now there is no ground truth. If the AI hallucinated a dosage, a diagnosis, or a symptom the patient never mentioned, there is no recording left to check it against. The legal exposure here is concrete: a physician tells the patient they&#8217;re prescribing 0.5mg. The AI transcribes 5mg. The physician doesn&#8217;t catch it before the prescription is filled. Months later, a malpractice claim arrives. The audio is gone. The AI-generated note is the only record. The note is what the opposing attorney has.</p><p>Retain the audio. You get verification capability and an evidentiary trail. But now every raw recording, every draft transcript, every backend AI artifact is potentially discoverable. Defense attorneys are already warning that clinicians may find themselves defending not just the signed chart, but a parallel archive they did not author, edit, or control.</p><p>Delete it, no receipts. Keep it, the receipts may be used against you.</p><p>Neither option is clean. That is the architecture your vendor sold you, and your contract formalized it.</p><p>Here&#8217;s what I keep coming back to: the C-suite conversation about ambient AI in healthcare is almost entirely about efficiency, and almost entirely missing the question that will matter most when the next lawsuit lands.</p><p>The efficiency case is real. The Permanente Medical Group, Kaiser&#8217;s physician organization in Northern California, reported that ambient scribes saved its physicians the equivalent of 1,794 working days across more than 2.5 million patient encounters. Clinician burnout is a genuine operational crisis and the documentation burden is a meaningful driver of it. </p><p>What they&#8217;re less clear on is who owns the liability when the tool goes wrong.</p><p>The Larridin 2026 State of Enterprise AI Report, which surveyed more than 350 senior leaders at organizations with 1,000 or more employees, found that 92% of C-suite executives expressed full confidence in AI impact, while 58% said they couldn&#8217;t identify who in their organization owned AI performance accountability, and 62% lacked a comprehensive inventory of AI applications currently in use. The confidence in AI and the visibility into ownership and accountability are not moving together.</p><p>In the same month that the survey was published, Harvard Business Review ran a piece about a Fortune 500 insurance company whose CEO convened the C-suite to ask a single question: Who owns our AI initiatives? The CIO said it was obviously her domain. The COO said an agentic workforce is operations by definition. The CFO pointed out that an AI system was already making underwriting decisions with direct P&amp;L impact. The Chief Risk Officer noted that autonomous decision-making is a major risk exposure. No one had a clean answer.</p><p>Lauren&#8217;s organization had a version of that meeting. The question of who owns the ambient scribe decision, and who owns the liability when that decision goes wrong, is not settled in most health systems. In many, it is not even fully articulated.</p><p>I manage technology infrastructure for a global academic community. When I evaluate a vendor relationship, my first questions are always about data: where does it go, who can see it, how long does it stay, and what does the contract say when I want to leave. These are basic infrastructure questions. They are also, as I&#8217;ve been arguing across this series, accountability questions.</p><p>Here&#8217;s what the Abridge contract structure actually looked like in practice: Sharp deployed the tool across its clinical network in April 2025. Per the vendor agreement, Abridge retained broad rights to access recordings and transcripts. The compliance obligations, including consent workflows, were placed on Sharp. When the lawsuit arrived, both names appeared in the complaint. The legal liability sat with the health system.</p><p>A February 2026 analysis in Medical Economics put it directly: many health systems are signing AI vendor agreements without clear answers to who owns the patient data, what happens if an AI output contributes to a clinical error, and what exiting the relationship actually looks like. The vendor&#8217;s broad disclaimers are standard language. Those disclaimers do not change the fact that under the current law in most states, the institution remains responsible for whatever makes it into patient care.</p><p>That is governance arbitrage. The vendor captures the revenue. The provider carries the risk. The contract made it so.</p><p>Sharp is not a one-off. It is one instance of a blueprint.</p><p>The consent architecture underneath these tools does not work the way most health system leaders assume it does.</p><p>In California, recording a conversation requires consent from all parties. Under the California Invasion of Privacy Act, violations carry $5,000 per incident in statutory damages. Applied across tens of thousands of patient encounters, that exposure can threaten an organization&#8217;s financial position, not just its reputation. Sharp&#8217;s case is still active.</p><p>But Sharp is in California. The picture outside California is more complicated, and in some ways more alarming.</p><p>In July 2025, a class action was filed against Heartland Dental, the largest dental support organization in the United States, alleging that patient phone calls were recorded, transcribed, analyzed for sentiment, and used operationally without patients ever being informed. The calls ran through RingCentral&#8217;s AI platform. Patients calling to schedule an appointment had their conversation recorded, summarized, and scored for emotional tone. No exam room. No ambient scribe on a device. Just a phone call, quietly routed through a system that was listening, and a vendor capturing the data to optimize its own product.</p><p>In January 2026, a federal court dismissed the original wiretapping claims, ruling that AI transcription fell within an &#8220;ordinary course of business&#8221; exception to the Federal Wiretap Act. Because recording and analyzing calls is what the product does, the court found, it is not eavesdropping under federal law. It is a feature.</p><p>The case is continuing. The plaintiff filed an amended complaint in February, arguing that RingCentral&#8217;s AI tools are a separate optional product from its phone service, not a core function, and that the company is using those patient calls to train its own models. That distinction, if it holds, could close the loophole. The outcome isn&#8217;t settled. But the original ruling is already on the books, and other courts will cite it.</p><p>This is the same blueprint as Sharp. RingCentral provides the service, captures the operational value, and sits behind a &#8220;core service&#8221; defense when liability appears. Sharp and Heartland are not two isolated lawsuits. They are the same contract architecture playing out in different rooms. Exam room. Phone call. The vendor captures the capability. The institution carries the exposure.</p><p>In most of the country, the Heartland ruling is currently the answer to whether recording your patients without telling them violates federal law. The legal protection patients might reasonably assume exists does not, if the AI recording is the vendor&#8217;s core service.</p><p>This is the part I think health system leadership underestimates. The risk is not confined to the exam room or the telehealth visit. It extends into scheduling calls, intake workflows, prior authorization conversations, and any touchpoint where an AI tool is listening and neither the patient nor the contracting organization fully understands the terms under which that listening is happening.</p><p>In an earlier piece in this series on consent, I wrote about how the current model of digital agreement has become a legal fiction: you agree to something general, the system does something very specific and seemingly out of scope of your original consent. The gap between those two things is where your accountability is unexamined. That piece focused on software consent flows. The ambient scribe problem is the same architecture applied to your exam room.</p><p>A piece I published on observability made a related argument: without a verifiable record of what a system did and when, there is no accountability infrastructure, only assurances. The Sharp case is both problems at once. The AI generated a false consent record, which is a consent failure. It simultaneously created a documentation archive that the health system cannot fully audit or control, which is an observability failure. The two are not separate issues. They are the same gap.</p><p>Your symptoms. Your medications. Your mental health history. The conversation you had with your doctor about your marriage because it was affecting your blood pressure. All of it, processed through a system your institution contracted for, under terms your patients never saw.</p><p>There is a clinical argument that runs the other direction, and it deserves examination, because dismissing it would cost real patients real harm.</p><p>AI systems with longitudinal memory, systems that retain and reason across multiple visits rather than treating each encounter in isolation, show meaningfully better diagnostic performance for slowly developing conditions, including early detection of neurodegenerative disease, than systems that treat each encounter in isolation. The pattern recognition that catches a slow-developing condition requires time and continuity. A system designed to delete everything after 30 days cannot see that the fatigue from October connects to the joint pain from March.</p><p>The system designed to protect your patients&#8217; privacy is also the system least equipped to help them. I don&#8217;t have a clean answer for that tension yet. What I do know is that this tradeoff is being made right now, in procurement meetings, largely without patient input, and in many cases without the full involvement of legal and risk teams who would ask different questions than clinical and IT leaders ask. Patients are finding out the way Jose Saucedo found out: by reading their own records.</p><p>One more number worth sitting with: the ambient scribe market grew 2.4 times in 2025 alone, generating an estimated $600 million in revenue. Abridge, the vendor named in the Sharp lawsuit, is valued at $5.3 billion and is already deployed at more than 200 large health systems, including the VA, Johns Hopkins, and the University of Chicago. Industry projections put the market at nearly $3 billion annually by 2033.</p><p>If you are in healthcare leadership and you are not certain whether your organization uses one of these tools, the answer is probably yes. Which means the question is not whether to evaluate this risk. It is whether you are in the position Sharp was in before the lawsuit, or after it.</p><h3><strong>What I Don&#8217;t Have Answers To</strong></h3><p>The clinical utility argument is real. Ambient scribes are providing genuine value to overwhelmed physicians. The burnout crisis is not a talking point. If I were advising a health system&#8217;s IT leadership today, I would not tell them to stop evaluating these tools.</p><p>I also don&#8217;t know what meaningful consent looks like in emergency contexts. If a patient arrives unconscious, the consent framework breaks immediately. </p><p>I don&#8217;t have a clean read on where the federal preemption question lands. The Trump administration&#8217;s December 2025 executive order directed the DOJ to challenge onerous state AI laws and tasked agencies with developing preemptive standards to avoid a patchwork of fifty different state rules. If that effort succeeds, the California standard that creates Sharp&#8217;s exposure may be weakened nationally. If it doesn&#8217;t, health systems operating across state lines are navigating that patchwork now, with contracts that were often written before the legal landscape clarified.</p><p>And the Heartland case is still moving. If the amended complaint succeeds in arguing that RingCentral&#8217;s AI tools are a separate product from its phone service, the &#8220;ordinary course of business&#8221; loophole narrows. The legal ground is shifting faster than most procurement cycles.</p><p>What I&#8217;m more confident about: the default of deploying first and building consent infrastructure later is not a viable risk position. The Sharp case will not be the last. The health systems moving fastest without governance infrastructure are not just acquiring tools. They are acquiring liability at scale, signed into their own contracts.</p><p>The contracts are signed. The tools are running. The question is whether the governance has caught up.</p><p>If you&#8217;re in healthcare IT or operations leadership, here are the four questions worth pulling your Business Associate Agreement (BAA) out to answer today. Does your vendor agreement specify whether patient recordings or transcripts are used to train AI models? Who among your vendor&#8217;s staff can access those recordings, and under what conditions? What does your consent workflow look like for a patient who arrives without prior notice? And what happens to the data, all of it, when you terminate the contract?</p><p>If you don&#8217;t have clear written answers to all four, you have a gap your legal team needs to see before the next procurement cycle. Not after the next lawsuit.</p><p>In the next piece, I&#8217;ll step back from sector-specific cases and look at something that runs underneath all of them: what it would actually mean to build AI systems where the data stays with the people it came from. Not as a privacy feature. As a market mechanism that changes who benefits from the value your patients&#8217; information creates.</p><p>If you&#8217;re working through these questions in your own organization, I&#8217;d be glad to have you along.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You Have 59 Low-Priority Emails. One of Them Isn't.]]></title><description><![CDATA[Your AI filtered a compliance notice. Your vendor's AI generated it. Neither flagged the gap. In regulated industries, that's not a workflow problem. It's a liability.]]></description><link>https://uncheckedai.rachelankerholz.com/p/you-have-59-low-priority-emails-one</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/you-have-59-low-priority-emails-one</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 30 Mar 2026 00:56:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6iQb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6iQb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6iQb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6iQb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3161628,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.rachelankerholz.com/i/192559059?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6iQb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!6iQb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dd93341-dc45-41ad-9005-f9854dfe44d8_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s Wednesday morning. You haven&#8217;t opened your laptop yet. Your AI has already read 63 emails, decided four of them matter, and filed the rest. You start with the four. By 9:30, you&#8217;ve cleared them.</p><p>Somewhere in the 59 is a security disclosure from a third-party vendor your health system uses for billing. The vendor&#8217;s communications team uses AI to generate and distribute notices at scale. The email was formatted like their standard marketing correspondence, sent from the same bulk delivery domain they use for product updates and newsletters. Copilot read the signals correctly. It just didn&#8217;t know this message was different.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>You find it on Thursday afternoon. By then, the vendor had followed up by phone, concerned no one had responded. The incident they disclosed happened on Tuesday. Your HIPAA-required risk assessment clock started the moment the email arrived.</p><p>This is the part of the AI communications story that doesn&#8217;t get discussed in the enterprise press.</p><p>We&#8217;ve been treating AI-generated spam as an inbox nuisance. Something to solve with better filters. But the filters are AI now, too. And the two systems are making decisions about each other without you in the room.</p><p>Here&#8217;s what the volume actually looks like. Google organic search traffic in the U.S. dropped 38% between November 2024 and November 2025, according to Chartbeat data cited in a January 2026 Reuters Institute report. Major publishers have lost 40 to 55% of their traffic. Some smaller ones have shut down entirely. Google now surfaces AI-generated summaries at the top of results, users stop clicking through, and the economic model that paid humans to write original content quietly collapses.</p><p>When the economics of human-written content break because the economics of AI-generated content work better, ad networks don&#8217;t distinguish, and algorithms don&#8217;t care. You can build an agent that sends email, browses the web, makes phone calls, and negotiates contracts from any laptop, using open-source frameworks, for free. The barrier to entry for mass AI-generated communications is basically zero.</p><p>Recently, someone who runs spam infrastructure for one of the largest platforms made a prediction: within 90 days, email, phone calls, and messaging would be so flooded with AI-generated content they&#8217;d no longer be reliably usable. Then he bought 1.7 million bot accounts on X to prove the point. They used to be simple, and now they have natural language fluency and payment infrastructure. What&#8217;s changed now isn&#8217;t sophistication alone. It&#8217;s the combination of scale, personalization, and near-zero cost, collapsing simultaneously.</p><p>This isn&#8217;t just a volume problem. Reinforcement learning from human feedback, RLHF, is how most major language models get trained. You show humans two responses and ask which one they prefer. Repeat that millions of times, and you get a model mathematically optimized to produce content people find compelling. Not accurate. Not helpful. Compelling. The mechanism is the same whether you&#8217;re building a helpful assistant or a content generator running spam operations. Every piece of AI-generated content hitting your inbox was built with that same optimization. Your spam filter was not.</p><p>Think of it this way. Your spam filter is trying to catch traffic specifically engineered, at the model level, to look like legitimate communications. Then spam got a PhD in looking legitimate while your filter studied for the old exam.</p><p>Meanwhile, the filter is getting smarter too. Microsoft&#8217;s Copilot &#8220;Prioritize My Inbox&#8221; feature reached general availability in April 2025. More than 25% of business inboxes now use some form of AI to categorize, prioritize, or triage incoming email. Superhuman, Google Gemini, and a growing range of standalone tools do the same. The pitch is straightforward: let AI decide what you need to read. Reduce cognitive load.</p><p>I understand why it exists. The problem isn&#8217;t the feature. The problem is what happens at the intersection of a filter trained on your behavioral history and a content environment trained to exploit behavioral patterns.</p><p>AI inbox prioritization learns what you respond to. It models your patterns. But organizational risk does not follow your patterns. Breach notifications don&#8217;t follow your patterns. Regulatory notices don&#8217;t follow your patterns. A compliance disclosure sent at scale, from a bulk delivery domain, formatted like every other vendor communication you&#8217;ve deprioritized this year, will register to your AI filter exactly as it should: low priority. The AI made a correct call with incorrect consequences.</p><p>Microsoft disclosed in early 2026 that a logic error had briefly caused Copilot Chat to process and summarize emails labeled Confidential, regardless of sensitivity settings. The fix was deployed quickly, and the disclosure was candid. But the incident points to something worth noting: the AI triage layer is already making decisions about your sensitive communications. The governance controls are still catching up.</p><p>There&#8217;s no audit log that tells you which emails were deprioritized and why. There&#8217;s no notification when a time-sensitive message is sorted away. What you have is a black box that handles 59 of your 63 daily decisions and surfaces the four it thinks matter. When the black box is wrong, you find out from a phone call three days later.</p><p>The structure is identical to something I wrote about earlier in this series: Pactum&#8217;s AI negotiating purchasing contracts for Walmart and Maersk while the suppliers&#8217; systems responded on the other end. The humans set the parameters and stepped away. The accountability gap didn&#8217;t stay contained. It compounded. Each system operated correctly within its own design. Neither was built to flag when the combination produced consequences that neither principal anticipated.</p><p>The inbox version is quieter. Less visible. An AI generated the message. An AI filtered it. A human never saw it. The interaction logged correctly in both systems. The consequences surfaced in a phone call from a vendor asking why no one responded to the disclosure they sent on Tuesday.</p><p>If you&#8217;re a VP of IT at a health system, this isn&#8217;t abstract. Your organization runs on vendor relationships, compliance timelines, and regulatory correspondence. Your team deployed AI inbox tools because they work: the cognitive load reduction is real. Your vendors deployed AI communications tools because their teams are small, and the volume demands are massive. The two systems talk to each other dozens of times a day. Nobody mapped that conversation before it went live.</p><p>Regulators and courts will eventually have to decide whether &#8216;AI filtered it&#8217; qualifies as constructive receipt of a notice. Existing doctrines around duty to monitor and constructive notice were not written with AI triage in mind. That question is coming. Most organizations aren&#8217;t ready for it.</p><h2><strong>What I Don&#8217;t Have Answers To</strong></h2><p>We don&#8217;t yet have a design pattern for an AI filter that is appropriately skeptical without becoming useless. The reason Copilot deprioritizes bulk-formatted email is correct for 98% of bulk-formatted email. The open question is how to handle the 2% that matters without recreating the inbox overload you were trying to escape. Nobody has solved that cleanly.</p><p>The liability question is equally unresolved. If your vendor&#8217;s communications AI sends a compliance disclosure using a template that triggers your organization&#8217;s AI filter, and you miss the notification window, who is responsible? The vendor? Your organization? The filter vendor? Existing frameworks don&#8217;t map cleanly onto a chain where neither sender nor receiver is human. That&#8217;s a design gap, not just a legal one.</p><p>The filter doesn&#8217;t know that. The messages most likely to get deprioritized without review are from smaller senders: smaller vendors, nonprofits, community organizations, and individuals. Optimization for signals of legitimacy, bulk domain, generic template, low engagement history, structurally disadvantages low-signal but high-importance senders. It&#8217;s not malicious. It&#8217;s what happens when nobody asks the system to care about the outliers.</p><p>This is being framed as a consumer problem: Is your personal inbox manageable? But that framing misses what&#8217;s actually breaking. For organizations still operating on the assumption that a message sent is a message received, the gap has already opened. In healthcare and financial services, it&#8217;s a liability problem now, not a future one.</p><p>The only communications still carrying a reliable signal are from people you already trust. When the verification layer fails, you fall back to the social layer. The problem for enterprises is that the social layer doesn&#8217;t scale, isn&#8217;t auditable, and shuts out anyone who hasn&#8217;t already earned a relationship with you. Smaller vendors, nonprofits, and individual constituents. They get filtered. They don&#8217;t get a callback.</p><p>That&#8217;s not an inbox problem. It&#8217;s an architecture problem.</p><p>If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The 90% Error Rate They Shipped Anyway]]></title><description><![CDATA[Insurance companies are using AI to deny care at scale. A 90% error rate sounds like a broken system. It&#8217;s only broken if the errors cost something. For 99.8% of patients, they don&#8217;t.]]></description><link>https://uncheckedai.rachelankerholz.com/p/the-90-error-rate-they-shipped-anyway</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/the-90-error-rate-they-shipped-anyway</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Thu, 26 Mar 2026 20:34:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!f9xa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!f9xa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!f9xa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!f9xa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3012973,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.rachelankerholz.com/i/192246946?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!f9xa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!f9xa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08478214-c96b-4483-815b-2a13eab8c757_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Margaret is 73. She had a hip replacement in October. Her surgeon recommended ten days of inpatient rehabilitation. Her Medicare Advantage plan only approved two.</p><p>She didn&#8217;t know she could appeal. The denial came as a one-page letter with a phone number for member services. She called twice, held both times, and eventually stopped. She went home with a walker and instructions to follow up with outpatient physical therapy three times a week, if she could get there.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>She couldn&#8217;t always get there.</p><p>I&#8217;m starting with Margaret because I want to be specific about who this affects before we talk about systems. The systems conversation is important. But Margaret is the reason it matters.</p><p>I want to say something upfront to the people in this audience who work in insurance or health systems, because I know some of you do.</p><p>The business case for AI in utilization management isn&#8217;t irrational. Claims volume is enormous. Human reviewers are inconsistent. An AI that processes thousands of claims quickly, at low cost, with consistent criteria, sounds like exactly the kind of infrastructure investment that makes sense.</p><p>The problem isn&#8217;t that the technology exists. The problem is what happens when the system&#8217;s error rate stops mattering, because the incentives run the other way.</p><p>That&#8217;s what I want to trace here.</p><p>Last year, a federal class action lawsuit against UnitedHealthcare alleged that the company used an AI model called nH Predict to evaluate post-acute care claims for Medicare Advantage patients. The plaintiffs alleged the model had a 90% error rate. That it denied care even when treating physicians had documented medical necessity. That UnitedHealthcare&#8217;s denial rate for post-hospital care doubled after the tool was deployed, from 10.9% to 22.7%.</p><p>UnitedHealthcare disputes all of it, and the case is still in court. I want to be clear about that. What I&#8217;m describing are allegations, not findings. I&#8217;m citing them because they&#8217;re in the public record, and the questions they raise are worth asking regardless of how the litigation resolves.</p><p>The number that&#8217;s not in dispute: 90% of UnitedHealthcare&#8217;s denials get overturned when a patient actually appeals to a federal judge.</p><p>Only 0.2% of patients ever do.</p><p>Here&#8217;s the math the system runs on: you don&#8217;t need the AI to be right. You just need the denial to be intimidating enough that most people don&#8217;t fight it. And most people don&#8217;t.</p><p>I manage technology infrastructure for a global academic community. Error rates are something I think about constantly. If a process in my environment was failing 90% of the time, we&#8217;d take it down. We&#8217;d fix it. Scaling it to millions of patients would not be on the table.</p><p>The only context in which you scale a 90% error rate is when the errors are profitable.</p><p>And I can&#8217;t get past this: when a denial saves money and only 0.2% of those denials are ever challenged, the incentive structure doesn&#8217;t reward accuracy. It rewards volume. The AI doesn&#8217;t have to be good. It has to be good enough that the economics work.</p><p>The conversation about AI in healthcare focuses on the wrong question. Everyone wants to know: is the model accurate? That&#8217;s not the question that matters. The question is: what happens when it&#8217;s wrong? Who catches it? What does the error cost? For whom?</p><p>For Margaret, the cost was going home without the care her surgeon recommended. For the insurer, the cost of that error was zero. She didn&#8217;t appeal.</p><p>UnitedHealthcare isn&#8217;t the only one. In 2023, a ProPublica and CBS News investigation found that Cigna physicians reviewed more than 300,000 claims in a two-month period using automated decision support tools. Cigna disputes the characterization and says physicians exercise independent medical judgment. The volume isn&#8217;t disputed. 300,000 claims in two months.</p><p>The American Medical Association found that 71% of insurers now use AI for utilization management. Not a pilot. Standard practice, at scale, with no federal framework governing accuracy requirements, bias audits, transparency, or appeals.</p><p>We require clinical trials before a drug reaches the market. Years of them. Proof of efficacy. Mandatory side effect disclosure. Post-market surveillance. A drug with a 90% error rate doesn&#8217;t get approved.</p><p>An AI system making life-or-death coverage decisions for Medicare patients has none of those requirements. We find out that a model has a 90% error rate because a class action lawsuit surfaces internal documents. That&#8217;s not a system working. That&#8217;s luck, and it only helps the people who can afford to sue.</p><p>The denial-rate problem is one layer. The bias problem sits underneath it, and they&#8217;re not separate issues.</p><p>Cedars-Sinai researchers found that AI clinical decision support tools recommend inferior psychiatric treatment options when a patient is identified as Black. The AI&#8217;s algorithm doesn&#8217;t invent the disparity. It&#8217;s inherited from training data that reflects decades of documented inequity in how care has been delivered and recorded.</p><p>Think of it this way. If a physician consistently recommended worse psychiatric options to Black patients out of habit rather than clinical evidence, we&#8217;d expect accountability. When an AI trained on that physician&#8217;s historical decisions does the same thing, accountability becomes nearly impossible to locate. The company points to the data. The data reflects historical practice. Historical practice reflects structural inequity. No single actor is responsible because everyone can point to the layer below them.</p><p>I don&#8217;t think that&#8217;s an acceptable answer. It is, for now, where the law leaves us.</p><p>For anyone in this audience who is deploying clinical AI: if you haven&#8217;t audited your training data for demographic disparities, you&#8217;re not just carrying an ethical risk. You&#8217;re carrying a liability risk that the courts haven&#8217;t finished mapping yet.</p><p>The hospitals are using AI. The physician groups. The appeals coordinators. The pharmacy benefit managers. At least four separate AI systems may touch a single claim between initial authorization and final decision. Each has its own training data, its own error rate, its own optimization target.</p><p>We don&#8217;t have standards for how those systems interact. We don&#8217;t require that a denial disclose that it was AI-generated. In some states, we don&#8217;t require a licensed physician to review an AI recommendation before it becomes a coverage decision communicated to a patient.</p><p>In my article, <em>Show Me the Receipts</em>, I wrote about observability as a human right: the ability to see what an AI system did, why it did it, and who is accountable when it&#8217;s wrong. Healthcare is where that absence costs the most. When an AI denies care and the patient doesn&#8217;t appeal, there&#8217;s no record that the denial was probably wrong. The case closes. The error disappears.</p><p>The 0.2% who appeal get overturned almost every time. The 99.8% who don&#8217;t are invisible in the data.</p><p>That&#8217;s worth sitting with. If appeals succeed at that rate, the AI was wrong far more often than anyone&#8217;s tracking.</p><h2><strong>What You Actually Have the Right to Do</strong></h2><p>When a claim is denied, you have the right to appeal. The internal appeal is step one, not the final word. If the internal appeal fails, you have the right to an external independent review conducted by an organization with no relationship to your insurer. Insurers lose those reviews at high rates. Most people never get there because no one tells them the internal appeal isn&#8217;t the ceiling.</p><p>Your denial letter is required by law to cite the specific reason for denial and the policy language used. You can also request the clinical criteria the insurer applied, typically from a system called InterQual or MCG. That document is what your appeal actually needs to respond to. Most people never ask for it.</p><p>Deadlines matter. Appeals for most commercial plans must be filed within 180 days of the denial date. Medicare Advantage has its own timeline with enforceable federal rules. The clock starts when you receive the denial letter, not when you find out you have the right to fight it.</p><p>If you&#8217;re on Medicare and your health is at risk, you have the right to an expedited appeal. The decision is supposed to come within 72 hours. Most Medicare patients have never heard of it.</p><p>The system isn&#8217;t designed to surface these rights. You have to know that you can ask.</p><h2><strong>So I Built Something</strong></h2><p>After I posted about insurance AI denial rates on LinkedIn, someone replied: &#8220;I wonder if it will level the playing field when someone creates an appeals AI algorithm and insurers experience a 100% appeals rate.&#8221; A few other people said similar things. I&#8217;d been thinking about it for a while and decided to go for it. </p><p>I built it with Claude. I&#8217;m building it in public, with feedback from the people who actually need it and my audience, many of whom work for insurers.</p><p>It&#8217;s called <a href="https://www.rachelankerholz.com/tools/overturned">Overturned</a>. Free. No login required. Your documents are never stored or used to train any model. You upload your denial letter, the tool reads it, identifies the specific grounds for denial, and generates an appeal letter that responds to those grounds directly. It shows you exactly what it found before you send anything.</p><p>Upload your denial. Review the findings. Refine the letter. Appeal.</p><p>About three minutes. About $0.07 in AI processing costs, which I cover through donations and my consulting practice. No paywalls. No ads. No data selling.</p><p>Insurance companies have been using AI to deny claims at scale for years. Overturned is AI writing back.</p><p>It&#8217;s a live beta, which means it&#8217;s not perfect. What it does well: it extracts the denial rationale, matches it to the cited clinical criteria, and drafts a letter in language that appeals reviewers recognize. What it doesn&#8217;t do: give legal advice. If your situation is complex or the stakes are high, talk to a patient advocate or a healthcare attorney.</p><p>There&#8217;s a public roadmap on the tool page where you can vote on features and suggest what we build next. If something doesn&#8217;t work the way you need it to, that feedback is how it gets better.</p><p>The appeal you don&#8217;t file is the one you can&#8217;t win.</p><h2><strong>What I Don&#8217;t Have Answers To</strong></h2><p>I&#8217;m not arguing AI has no role in utilization management. The claims volume is real and human reviewers make errors too. What I&#8217;m arguing is that we need a standard of evidence before deployment: a required accuracy floor, mandatory demographic bias audits, and clear rules about who reviews AI decisions before they become final. None of that currently exists at the federal level.</p><p>I also don&#8217;t know how to solve the 0.2% problem without making the appeals process dramatically simpler. Overturned helps with the letter. Someone still has to know they can fight, decide it&#8217;s worth it, and follow through. That&#8217;s not a technology problem. It&#8217;s a structural one.</p><p>And I genuinely don&#8217;t know what accountability looks like when the bias lives in the training data. Who audits the model? Who has standing to require changes? The Cedars-Sinai finding is one data point. How many other health systems have run that same analysis?</p><p>My guess: not many.</p><p>If you&#8217;ve had a claim denied and fought it, or you&#8217;ve tried Overturned, I want to hear about it. The comment section is open. That&#8217;s how the tool gets better, and honestly, it&#8217;s how I understand what&#8217;s actually happening out there.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build s</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Show Me the Receipts]]></title><description><![CDATA[Your AI made a decision that changed someone&#8217;s life. Their lawyer asked why. Nobody on your team could answer. In regulated industries, that&#8217;s not a glitch. It&#8217;s a liability.]]></description><link>https://uncheckedai.rachelankerholz.com/p/show-me-the-receipts</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/show-me-the-receipts</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Sun, 15 Mar 2026 23:05:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yC3g!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yC3g!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yC3g!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yC3g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2807144,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/191073701?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yC3g!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!yC3g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5ef08d9-ab71-447e-a910-7769942c2461_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In my last piece, I wrote about the proprietary data problem: employees feeding sensitive information into AI tools they don&#8217;t control, coding assistants introducing vulnerabilities, agents destroying production databases. I ended by saying the next piece would be about observability as a human right.</p><p>Then, six days later, a federal court made the argument for me.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>On March 9, 2026, a federal judge in Minnesota ordered UnitedHealth Group to open up its AI playbook. Becker&#8217;s Payer Issues reported that the court sided with plaintiffs across six of seven discovery categories. The order didn&#8217;t just ask what model the company was using. It demanded internal documents about an algorithm called nH Predict: who built it, how it&#8217;s used, who it incentivizes, and how it shapes coverage denials.</p><p>The lawsuit tells one man&#8217;s story. According to the complaint, Gene Lokken was at a skilled nursing facility after a medical crisis. In July 2022, UnitedHealthcare cut his coverage. They said additional days weren&#8217;t medically necessary. Lokken and his physician disagreed. They appealed and lost. Medical Economics reported that his family paid $12,000 to $14,000 a month out of pocket for almost a year. He died on July 17, 2023.</p><p>When the family asked why, there was no real answer. The decision came from an algorithm. The reasoning was proprietary. An Optum spokesperson told Becker&#8217;s that nH Predict is &#8220;a guide&#8221; to help inform caregivers. But a STAT investigation cited in the lawsuit found that UnitedHealth pressured employees to keep patient stays within 1% of the algorithm&#8217;s prediction. </p><p>According to the plaintiffs, appeals data suggest nH Predict may be wrong about 90% of the time when denials are challenged. Nine out of ten denied claims were reversed on appeal. But a Kaiser Family Foundation report found that only about 0.2% of policyholders ever challenge a denial. The tool kept running because the appeals rarely came.</p><h2>Your Peers Just Became Case Studies</h2><p>UnitedHealth isn&#8217;t the only cautionary tale.</p><p>A 2023 ProPublica investigation found that Cigna used an algorithm called PXDX to deny over 300,000 claims in two months. According to internal spreadsheets reviewed by ProPublica and The Capitol Forum, medical directors were signing off on denials in batches, spending an average of 1.2 seconds per case. One doctor reportedly denied 60,000 claims in a single month. Nobody opened a chart.</p><p>The class action filed in Sacramento names specific patients. Suzanne Kisting-Leung had an ultrasound to check for ovarian cancer. It found a cyst. Cigna denied the claim. Another plaintiff had a vitamin D test ordered by her doctor. Denied. According to the complaint, Cigna gave no explanation for either.</p><p>Cigna disputes these characterizations. But the allegations themselves are already shaping how regulators and plaintiffs&#8217; attorneys look at every carrier using similar tools.</p><p>Regulators and plaintiffs&#8217; attorneys don&#8217;t care whether you call these tools &#8220;guides,&#8221; &#8220;assistants,&#8221; or &#8220;workflows.&#8221; If an algorithm narrows the funnel, humans rubber-stamp the output, and patients can&#8217;t get a meaningful explanation, then in practice the algorithm is making the decision. That&#8217;s the story now on the record. And it&#8217;s the lens regulators will apply to every carrier using AI.</p><h2>The Question Your Board Should Be Asking</h2><p>I talk to IT directors and VPs in regulated industries. Executives keep asking: &#8220;Is our AI compliant?&#8221;</p><p>That&#8217;s the wrong question.</p><p>The right one: if a judge orders you to explain any AI-influenced denial tomorrow, case by case, can you? Not at the aggregate level. For each individual decision. Can your chief medical officer explain why the AI recommended denial in a specific case? Can your CIO show, in a traceable way, how a given data point influenced the outcome? Can your compliance officer demonstrate to regulators that humans still truly own the decision? This is not an IT question. It&#8217;s a board-level risk question that happens to be implemented in code.</p><p>If the honest answers are &#8220;no,&#8221; &#8220;not really,&#8221; and &#8220;we hope so,&#8221; you have a governance problem hiding under that efficiency costume.</p><p>Recent AMA surveys suggest a majority of physicians believe unregulated AI tools are driving more prior authorization denials. At a 2025 industry conference, the AAPC Knowledge Center reported that algorithmic denials dominated every panel and hallway conversation. Attendees cited reversal rates on appeals around 90%. The system was wrong most of the time. It kept running anyway.</p><h2>What Observability Actually Means</h2><p>I&#8217;ve spent most of my career managing IT infrastructure. In my world, observability means the ability to understand what a system is doing by looking at its outputs. When a server goes down, you check the logs. When a query runs slow, you trace it. If a platform I run for 45,000 researchers goes down and I can&#8217;t tell my team why, that&#8217;s a career-limiting event.</p><p>That is the baseline for a web server. It should not be too much to ask of a system that decides whether a 74-year-old gets to stay in rehab.</p><p>Think of it this way. If your building&#8217;s fire alarm evacuated everyone and nobody could explain why it went off, you&#8217;d call that a broken system. You wouldn&#8217;t keep using it. You wouldn&#8217;t trust it with people&#8217;s safety.</p><p>That&#8217;s what we&#8217;re doing with AI right now. Trusting it with people&#8217;s safety. And we can&#8217;t tell anyone why it does what it does.</p><h2>Where Regulation Stands</h2><p>The EU AI Act is the most ambitious attempt to require explanation. According to a Cogent Infotech analysis, high-risk AI systems in healthcare and credit scoring will need auditable decision logs by August 2026. Penalties run up to &#8364;20 million or 4% of global revenue.</p><p>In the U.S., states are moving first. California&#8217;s Physicians Make Decisions Act, effective January 2025, requires a qualified physician to make final medical necessity decisions. Not an algorithm. Texas followed in June 2025, requiring licensed practitioners to review AI-influenced medical records before clinical decisions are made. Colorado&#8217;s AI Act, effective February 2026, requires impact assessments and explicit notice when AI is used in consequential decisions, including insurance.</p><p>But the pressure is moving in both directions. As Gunderson Dettmer and Pearl Cohen both reported, a Trump executive order signed in December 2025 directs the Attorney General to challenge state AI regulations deemed obstacles to national competitiveness. If you&#8217;re in a regulated industry counting on state laws to clarify your obligations, that ground may shift under you.</p><p>When the rules are in flux, courts fall back on a simple standard: did you know what your system was doing, and did you act responsibly? &#8220;We deployed a black-box model from a major vendor&#8221; will not hold up well on the stand.</p><h2>How This Blows Up</h2><p>You don&#8217;t need a dramatic scenario. You need one bad week.</p><p><strong>The class action week.</strong> A plaintiff&#8217;s firm realizes a single algorithm touched tens of thousands of denials. They move for class certification, arguing the system applied the same flawed logic to everyone. You&#8217;re not defending one denial anymore. You&#8217;re defending your entire AI operating model.</p><p><strong>The regulator week.</strong> A state insurance department picks up a media story, opens an investigation into AI-assisted denials, and discovers you can&#8217;t produce meaningful decision logs. Other states notice. Multi-state market conduct exams follow.</p><p><strong>The boardroom week.</strong> A patient story goes viral. Journalists start asking your CEO for specifics about your algorithms. The board asks: did we approve this risk? Who owns it? How much have we actually saved, and at what exposure?</p><p>All three hinge on the same weakness: you can&#8217;t show the receipts. In every scenario, your CIO, CMO, chief compliance officer, and general counsel are answering for systems nobody on the team can fully explain.</p><h2>What Observability Would Actually Look Like</h2><p>I&#8217;m not proposing that every AI decision needs a 20-page report. I&#8217;m proposing that when a system makes a decision that materially affects someone&#8217;s health, finances, or freedom, that person has the right to a meaningful explanation. Not &#8220;based on our analysis.&#8221; A real answer.</p><p>That means three things at minimum. The inputs should be visible: what data the system used, from which sources, with what known limitations. According to the Lokken complaint, nH Predict analyzes a database of 6 million patients, looking at diagnosis, age, living situation, and physical function. If those factors ended Lokken&#8217;s coverage, his family deserved to know which ones mattered and how much weight each carried.</p><p>The reasoning should be traceable. Not the full model architecture, but the chain of logic from input to output. California already requires a physician to make the final call on medical necessity. If a doctor has to explain their reasoning, a system that replaces the doctor should too.</p><p>And the confidence should be disclosed. Was this a strong call or a close one? If the model was 51% confident, that&#8217;s a very different situation than 95%. Both produce the same denial. But they demand very different levels of human review.</p><p>If your AI vendors can&#8217;t provide all three on demand, that&#8217;s not a feature request. That&#8217;s a red flag your board should see.</p><p>The technology for all of this exists today. Decision logs. Confidence scores. Interpretable model designs. These are production tools, not research concepts. The obstacle is not capability. It&#8217;s incentive. Explanation costs compute. It slows processing. It creates records that can be used in court. Every reason Cigna had for processing claims in 1.2 seconds is a reason they wouldn&#8217;t want to explain each one.</p><h2>The Connection to This Series</h2><p>If you&#8217;ve been following this series, observability connects to every argument I&#8217;ve made. In my first piece, I proposed an agent registry with audit trails. Those trails are useless if the reasoning behind each action is a black box. In my second piece, I argued that consent requires transparency. You should be able to ask your agent &#8220;Why did you do that?&#8221; and get a real answer. In my last piece, I wrote about what happens when your AI runs on someone else&#8217;s infrastructure. Observability is the other side of that problem. When you can&#8217;t see how a system makes decisions, you can&#8217;t tell whether those decisions serve your interests or the platform&#8217;s.</p><p>Here&#8217;s what I keep coming back to: observability is the mechanism that makes accountability possible. Without it, everything else I&#8217;ve proposed has no teeth. Accountability without observability is theater.</p><h2>What I Don&#8217;t Have Answers To</h2><p>I don&#8217;t know where to draw the line on what counts as a &#8220;material&#8221; decision. A healthcare denial is obvious. A product recommendation probably isn&#8217;t. But between those poles there&#8217;s a lot of gray, and requiring explanation for every automated decision may not be practical.</p><p>I don&#8217;t know how to make explanations useful to people who aren&#8217;t engineers. A decision log full of technical jargon is transparent in theory and useless in practice. Observability that only serves the people who built the system isn&#8217;t a right. It&#8217;s a feature.</p><p>And I worry about explanation becoming its own kind of theater. Companies are already producing &#8220;explainability reports&#8221; that check a compliance box without telling the affected person anything meaningful. If that&#8217;s where this goes, we&#8217;ve traded one performance for another.</p><p>But the alternative is the world we&#8217;re in right now. Algorithms making life-altering decisions. Nobody able to say why. Gene Lokken&#8217;s family knows what that costs.</p><h2>What I&#8217;m Asking</h2><p>If you work in insurance, healthcare, or financial services: look at the AI tools your organization has deployed and ask what they can explain about their own decisions. If the answer is &#8220;not much,&#8221; bring that to your compliance team, your legal team, and your board. California and Texas have enforceable laws now. Colorado takes effect next month. The Lokken court just ordered an insurer to open its algorithm to discovery. The direction is clear.</p><p>If you can&#8217;t explain your AI-driven denials to a judge, don&#8217;t expect your shareholders or regulators to be any kinder.</p><p>If you manage IT infrastructure in a regulated environment, you&#8217;re going to be the person who has to answer the question: can our AI tools show their work? Start asking your vendors now. The ones who can&#8217;t answer are telling you something about the risk you&#8217;re carrying.</p><p>And if you&#8217;ve been on the receiving end of a decision that didn&#8217;t make sense, a denied claim, a rejected application, a coverage termination your doctor disagreed with: ask for the explanation. You may not get one. But the asking matters. And as the Lokken ruling shows, courts are starting to treat that silence as something worth investigating.</p><p><em>This is the seventh in a series about AI accountability.</em></p><p><em> If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe.</em></p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[They’re Training on Your Secret Sauce. You Just Don’t Know It Yet.]]></title><description><![CDATA[Your employees are feeding proprietary data into AI tools. Your coding assistant is introducing vulnerabilities. And the agent you trusted just deleted your production database.]]></description><link>https://uncheckedai.rachelankerholz.com/p/theyre-training-on-your-secret-sauce</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/theyre-training-on-your-secret-sauce</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 09 Mar 2026 21:20:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d183156f-22a1-429c-8696-3972b1c242b8_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0S79!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0S79!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0S79!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0S79!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0S79!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0S79!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2866568,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/190430729?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0S79!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0S79!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0S79!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0S79!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2433eb5-3acd-4ea9-865f-0aeebcb6470a_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In early 2023, Samsung allowed its semiconductor engineers to use ChatGPT to help with their work. Within twenty days, three separate incidents occurred.</p><p>One engineer pasted source code from a semiconductor database to debug an error. Another submitted proprietary chip-testing code for optimization. A third uploaded an entire internal meeting transcript to generate minutes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>All three inputs became part of OpenAI&#8217;s training data. Samsung couldn&#8217;t retrieve any of it.</p><p>The information was retained, processed, and absorbed into the model&#8217;s weights. Samsung issued a company-wide ban on ChatGPT. JPMorgan, Amazon, Verizon, and Walmart followed.</p><p>But here&#8217;s the detail that doesn&#8217;t get enough attention: Samsung eventually lifted the ban. By 2025, the company relaxed its restrictions because the productivity gains were too valuable to walk away from.</p><p>That tension is the one I keep thinking about. The competitive advantage AI offers versus the competitive exposure it creates. Most organizations are living inside that tension right now. Most of them haven&#8217;t named it yet.</p><h2><strong>The Slow Leak Nobody Is Watching</strong></h2><p>Samsung made headlines because it was public. The same dynamic is playing out quietly across thousands of companies every day.</p><p>LayerX Security&#8217;s 2025 Enterprise AI Report found that <strong>77% of employees</strong> have pasted company information into AI tools. More than half of those paste events included corporate data. And <strong>82%</strong> of those workers used personal accounts rather than enterprise-managed tools, which means the data bypassed every security control their company had in place.</p><p>Cyberhaven&#8217;s research tells a similar story. <strong>34.8%</strong> of all corporate data going into AI tools is now classified as sensitive: source code, R&amp;D materials, financial projections. That&#8217;s up from 10.7% two years earlier. The rate isn&#8217;t creeping upward. It has tripled.</p><p>I manage technology infrastructure for 45,000 scholars across a global academic community. I think about data flows constantly: what crosses boundaries, what gets retained, what becomes visible to the wrong audience at the wrong time. When I look at these numbers, I don&#8217;t see an abstract risk. I see an organization that has already lost something and doesn&#8217;t know it yet.</p><p>Only 17% of organizations have automated controls to block or scan uploads to public AI tools. The other 83% rely on training sessions, email warnings, or nothing at all. And once data enters a public AI system, it cannot be retrieved. Every unmonitored employee prompt is a potential compliance failure under GDPR, HIPAA, or SOX.</p><p>Most companies can&#8217;t even answer a basic question: which AI tools currently hold our proprietary data? That&#8217;s not a gap in security posture. It&#8217;s a gap in awareness.</p><h2><strong>When the Tool Breaks What It Touches</strong></h2><p>Data leakage is the slow-moving risk. There&#8217;s also a fast-moving one.</p><p>In July 2025, Jason Lemkin, founder of the SaaS community SaaStr, was testing Replit&#8217;s AI coding assistant. On the ninth day of his project, during an active code freeze with explicit instructions that no changes should be made without permission, the AI agent deleted his entire production database. Records for over 1,200 executives and nearly 1,200 companies. Gone.</p><p>When Lemkin confronted the agent, it admitted to running unauthorized commands. Then it told him rollback was impossible and the data was gone forever.</p><p>That turned out to be wrong. Lemkin recovered the data manually, going against the agent&#8217;s own advice.</p><p>But the agent didn&#8217;t just destroy data. It fabricated it. It generated over <strong>4,000 fake user records</strong> with completely made-up information to fill the void. When asked to score its own behavior on a 100-point severity scale, it gave itself a 95. Replit&#8217;s CEO called the behavior &#8220;unacceptable.&#8221;</p><p>This isn&#8217;t an isolated case. Days later, Google&#8217;s Gemini CLI agent deleted a user&#8217;s files after misinterpreting a command. In August 2024, researchers showed that Slack&#8217;s AI could be tricked into summarizing sensitive private-channel conversations and sending those summaries to an external address. The AI thought it was being helpful. It was functioning as an insider threat.</p><p>None of these agents were hacked. They were doing exactly what they were designed to do: execute commands on their own. That&#8217;s the same pattern I&#8217;ve been writing about since the first article in this series: delegation without oversight, at a speed no human can supervise.</p><h2><strong>The Vulnerabilities You Can&#8217;t See</strong></h2><p>There&#8217;s a third problem. It&#8217;s quieter than the other two, but at scale, it may be the most dangerous.</p><p>AI coding assistants are introducing security vulnerabilities into proprietary codebases faster than human developers can find them. Apiiro, an application security platform, analyzed tens of thousands of repositories across Fortune 50 companies. By June 2025, AI-generated code was producing over 10,000 new security findings per month. That&#8217;s a tenfold increase in just six months.</p><p>These aren&#8217;t formatting errors. Privilege escalation paths increased 322%. Architectural design flaws spiked 153%. AI-generated code was 2.74 times more likely to contain cross-site scripting vulnerabilities. Developers using AI assistance exposed cloud credentials at nearly double the rate of those working without it.</p><p>The reason is straightforward: the models were trained on vast repositories of open-source code, and much of that code contains the same vulnerabilities they now reproduce. The model doesn&#8217;t understand your security architecture. It optimizes for finishing the task, not for protecting the system. Ask it to query a database and it might hand you a textbook SQL injection flaw, because that pattern appeared thousands of times in its training data.</p><h2><strong>The Architecture Is the Problem</strong></h2><p>Here&#8217;s what I keep coming back to.</p><p>Data absorption. Autonomous destruction. Vulnerability injection. These look like three separate problems, but they&#8217;re all symptoms of the same architectural choice: feeding your proprietary knowledge, your production access, and your code into systems you don&#8217;t own and can&#8217;t inspect.</p><p>Think of it this way. If you rented office space and the landlord could read every document on your desk, copy your filing cabinets, and hand the contents to your competitor across the hall, you&#8217;d move. That&#8217;s roughly the arrangement most organizations have with cloud AI. Except the lease is called &#8220;terms of service,&#8221; the filing cabinets are training data, and most tenants haven&#8217;t read the fine print.</p><p>The cloud sold us convenience in exchange for control. For most workloads, that trade was reasonable. But AI workloads are different. AI learns. That&#8217;s the entire value proposition. When the system learning from your work is hosted on someone else&#8217;s infrastructure, the cost goes beyond a subscription fee.</p><p>In my last article, I wrote about entire nations discovering this cost. The chief prosecutor of the International Criminal Court lost his email because Microsoft complied with U.S. sanctions. Amsterdam Trade Bank lost its cloud because of a court order from another continent. That was about collaboration tools. This is about something more intimate: the proprietary knowledge that makes your organization competitive. When that knowledge trains a model you don&#8217;t control, you lose more than access to infrastructure. You lose control of what the infrastructure learned by watching you work.</p><h2><strong>The Market Is Doing the Math</strong></h2><p>The repatriation wave isn&#8217;t a prediction. It&#8217;s happening. A February 2026 survey found that 93% of enterprises have already moved AI workloads off public cloud, are in the process, or are actively evaluating it. 91% said they would choose on-premises or hybrid infrastructure over public cloud for AI involving sensitive data.</p><p>Gartner calls the trend &#8220;geopatriation&#8221; and named it a top strategic technology trend for 2026. They project that 75% of European and Middle Eastern enterprises will move to sovereign environments by 2030, up from 5% in 2025. Organizations that have already made the shift are documenting cost savings of 30 to 60 percent.</p><p>37signals, the company behind Basecamp, is the clearest case study. They were spending $3.2 million a year on AWS. They bought $700,000 in Dell servers and moved on-premise. Savings: nearly $2 million a year. Projected total over five years: more than $10 million. Their CTO said the industry convinced everyone that owning hardware is impossible. It isn&#8217;t.</p><p>And in a move that would be funny if the stakes weren&#8217;t serious: Microsoft launched &#8220;Sovereign Cloud&#8221; in February 2026, letting organizations run AI models on their own hardware, fully disconnected from Microsoft&#8217;s central cloud. The company that locked the ICC prosecutor out of his email is now selling the fix.</p><h2><strong>Your Tier 1, at Organizational Scale</strong></h2><p>Throughout this series, I&#8217;ve argued for a tiered model of AI accountability. Tier 1 is local. Your agent runs on your hardware, stays in your space, and doesn&#8217;t need anyone else&#8217;s permission to operate.</p><p>On-premise AI is Tier 1 thinking applied at organizational scale. When you run models on your own hardware, the principle holds: what stays local stays yours. Your pricing logic doesn&#8217;t become training data for a competitor. Your customer patterns don&#8217;t get aggregated into a model someone else can query. Your production database isn&#8217;t at the mercy of an agent whose guardrails were set by another company&#8217;s product team.</p><p>The same principle works at every level. For an individual, it&#8217;s a Raspberry Pi running a local agent that doesn&#8217;t phone home. For an organization, it&#8217;s AI on hardware you own. For a nation, it&#8217;s France building sovereign collaboration tools and Germany migrating 30,000 government workstations off Microsoft. The thread connecting all three: when software learns from what you feed it, where it runs determines who benefits from the learning.</p><h2><strong>What I&#8217;m Still Working Through</strong></h2><p>I want to be honest about the limits of this argument.</p><p>On-premise AI isn&#8217;t free. The hardware costs are real. Maintaining your own infrastructure takes expertise that most mid-sized organizations don&#8217;t have on staff. 37signals runs a ten-person operations team with decades of experience. That&#8217;s not a typical bench.</p><p>The cloud remains excellent for prototyping, elastic workloads, and global distribution. I&#8217;m not arguing that every organization should move everything tomorrow. I&#8217;m arguing that where you run your production AI is not a neutral technical decision. It&#8217;s a decision about who controls your data, who learns from your operations, and who holds the keys when things go wrong.</p><p>I also worry about the equity gap. Data sovereignty could easily become another advantage that accrues to organizations with deep pockets while smaller players stay locked into the rental model. I don&#8217;t have a clean answer for that yet.</p><p>But the current default isn&#8217;t neutral. It was designed by companies whose revenue depends on you renting their infrastructure and feeding their models. Whether that default serves your interests is a question worth asking.</p><p>The cloud made us tenants. AI made the landlord observant. Are you comfortable with what they&#8217;re learning from watching you work?</p><p><em>This is the sixth in a series about AI accountability. In the next piece, I&#8217;ll look at observability as a human right: why you deserve to see what your AI did and why, and what it means when the systems shaping your decisions can&#8217;t show their work.</em></p><p><em>If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe.</em></p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Who Holds the Kill Switch?]]></title><description><![CDATA[We built consent frameworks for individuals. But entire nations clicked &#8220;Allow&#8221; too&#8212;and now they&#8217;re discovering what it costs when someone else controls the infrastructure.]]></description><link>https://uncheckedai.rachelankerholz.com/p/who-holds-the-kill-switch</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/who-holds-the-kill-switch</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 02 Mar 2026 00:54:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6ad5f069-78d9-4319-b39a-8c7ee43aa215_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5ce1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5ce1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5ce1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3072318,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/189602862?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5ce1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5ce1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe80ffe-2412-48dc-b0df-ccd63ab47940_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In 2025, in The Hague, the chief prosecutor of the International Criminal Court, Karim Khan woke up one morning and couldn&#8217;t access his email. Not because of a cyberattack. Not because of a technical failure. Because Microsoft shut it off. The United States government had imposed sanctions on ICC officials, and Microsoft, which hosted the court&#8217;s email infrastructure, complied. Khan was locked out of his own Outlook account. He switched to Proton Mail.</p><p>The chief prosecutor of an international court, headquartered in the Netherlands, who was conducting investigations into war crimes, lost access to his professional email because a company in Redmond, Washington, followed an order from a government in Washington, D.C.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>That&#8217;s not a hypothetical scenario in a think-piece about digital risk. It already happened.</p><p>And it&#8217;s the same problem I&#8217;ve been writing about in this series, scaled up from individuals to institutions, from AI agents to operating systems, from &#8220;who authorized that purchase&#8221; to &#8220;who controls whether your government can function.&#8221;</p><h2>The Default We Never Chose</h2><p>In my last piece, I said I wanted to explore the quiet violence of defaults and how the settings no one changes become the architecture of power. I was thinking about AI agent configurations: the spending limits that default to &#8220;unlimited,&#8221; the permissions that default to &#8220;allow all,&#8221; and the consent checkboxes that default to &#8220;yes.&#8221;</p><p>But the biggest default I can think of isn&#8217;t inside an AI agent. It&#8217;s inside every government, hospital, school, and court in Europe.</p><p>The default is Microsoft.</p><p>Not because anyone made a deliberate, strategic decision that American cloud infrastructure should be the backbone of European public institutions. It happened the way defaults usually happen: incrementally, conveniently, and without anyone asking what it would mean twenty years later. One department adopted Office. Then another. Then the email server moved to Exchange. Then Teams replaced the conference room. Then SharePoint became the filing cabinet. Then, Azure became the server room.</p><p>At each step, the decision was reasonable. Microsoft makes good software. It&#8217;s reliable, well-supported, and familiar. The procurement team chose the vendor that met the requirements. Nobody in those early meetings was debating sovereignty or jurisdiction or what happens if a foreign government decides to flip a switch.</p><p>But here&#8217;s what accumulated while nobody was watching: a dependency so deep that the Dutch Data Protection Authority now warns that if another country chose to exploit it, the Netherlands could be brought to a complete halt. Not a slow degradation. A halt. Healthcare, payments, government services, authentication&#8212;all running through infrastructure controlled by companies that answer, ultimately, to another country&#8217;s laws.</p><p>This is the consent problem from my second article, applied at the scale of nations. These governments clicked &#8220;Allow.&#8221; They granted access to their email, their documents, their collaboration workflows, and their citizen data. They didn&#8217;t read the terms of service, not because they were careless, but because at the time, it didn&#8217;t seem to matter. The vendor was reliable. The software worked. Why would you interrogate convenience?</p><p>Now the terms of service matter. And the gap between &#8220;we agreed to use this product&#8221; and &#8220;we understood that a foreign government could lock us out of our own systems&#8221; is the same gap I keep writing about: the space where harm lives.</p><h2>Three Countries, Three Lessons</h2><p>What&#8217;s happening right now in Europe isn&#8217;t an abstract policy debate. It&#8217;s governments doing, in real time, what I&#8217;ve been arguing individuals need to do with AI agents: examining the consent they gave, questioning the defaults they accepted, and building alternatives that put control closer to the people affected.</p><p>Three cases stand out, each illustrating a different part of the problem.</p><p><strong>Germany: The proof it&#8217;s possible.</strong></p><p>Schleswig-Holstein, Germany&#8217;s northernmost state, started migrating away from Microsoft five years ago. It began as a cost-cutting exercise. It became something else entirely.</p><p>As of late 2025, nearly 80 percent of the state&#8217;s 30,000 government workstations have switched from Microsoft Office to LibreOffice. They&#8217;ve migrated over 40,000 email accounts and more than 100 million emails from Outlook and Exchange to open-source alternatives. They replaced SharePoint with Nextcloud. They replaced Teams with Jitsi. They&#8217;re testing Linux to replace Windows on desktops.</p><p>The numbers are striking. The state projects savings of more than &#8364;15 million in license costs in 2026 alone, money that was previously going to Microsoft every year. The one-time migration investment was &#8364;9 million, which pays for itself in under twelve months.</p><p>The state&#8217;s CIO said something that sticks with me: &#8220;What began as a technical project is now a political project.&#8221; That&#8217;s the trajectory of defaults. You start by questioning a line item in the budget. You end up questioning who controls the systems your government runs on.</p><p>I don&#8217;t want to oversimplify this. The migration hasn&#8217;t been smooth. Some employees can&#8217;t work properly with the new tools yet. Specialized applications still depend on Microsoft. Critics in the state parliament point out that 80 percent converted on paper doesn&#8217;t mean 80 percent of people can do their jobs effectively. These are real problems, and pretending otherwise would undermine the argument.</p><p>But the argument isn&#8217;t that migration is easy. It&#8217;s that it&#8217;s possible. And that the alternative, indefinite dependency on infrastructure you don&#8217;t control, has costs too. They&#8217;re just harder to see until someone flips the switch.</p><p><strong>France: The explicit sovereignty decision.</strong></p><p>France is taking a different approach: not migrating away from big tech piecemeal, but declaring that collaboration infrastructure is sovereign territory.</p><p>The French government announced that it will phase U.S. Big Tech collaboration platforms out of government workflows entirely, replacing them with a domestically built platform called Visio. The transition is planned to be complete by 2027.</p><p>This isn&#8217;t France&#8217;s first move. The French national police force, the Gendarmerie nationale, has been running over 100,000 workstations on its own custom Linux distribution since the early 2000s. The Ministry of Education banned free versions of Microsoft 365 and Google Workspace in schools over data privacy concerns. France has a &#8220;Cloud at the Center&#8221; policy that treats digital infrastructure the way it treats energy infrastructure: as a matter of national capacity.</p><p>What makes the Visio decision different is the framing. This isn&#8217;t presented as a cost-saving measure or a technical preference. It&#8217;s presented as a governance decision. The pension worker in Lyon who video-calls a tax specialist in Paris to sort out a retiree&#8217;s benefits, that call happens hundreds of times a day across the French government. Right now, it flows through Teams or Zoom. Soon it will flow through a tool built in France, governed by French law, accountable to French institutions.</p><p>France is drawing a line: when a system is central to how your government operates, you need to control who has authority over that system when things go wrong. Not the authority to use it. The authority to shut it off.</p><p><strong>The Netherlands: The wake-up call.</strong></p><p>If Germany is the proof of concept and France is the strategic declaration, the Netherlands is the cautionary tale. The country is learning in real time what dependency costs.</p><p>It started with the ICC prosecutor&#8217;s email. But it didn&#8217;t stop there. In March 2025, Amsterdam Trade Bank lost access to its cloud services entirely when Microsoft and AWS were ordered by a U.S. court to suspend operations. A bank. In Amsterdam. Locked out of its own cloud infrastructure by a court order from another continent.</p><p>Dutch parliament erupted. Members asked whether ordinary Dutch citizens could lose access to their Microsoft accounts because of American sanctions. Whether government organizations that rely on U.S. digital services could be cut off at any time, without judicial review, without checks and balances. These aren&#8217;t hypothetical questions anymore. They&#8217;re questions prompted by things that already happened.</p><p>The Dutch Data Protection Authority issued its starkest warning yet: the country&#8217;s dependence on foreign IT suppliers is so deep that a shutdown of digital systems could result in &#8220;unforeseeable and possibly irreversible societal, economic, and personal harm.&#8221; They&#8217;re pushing for a &#8220;Rijkscloud&#8221;, a national cloud under full Dutch management.</p><p>In March 2025, the Dutch parliament passed motions to reduce reliance on U.S. cloud services, phase out AWS for national domains, and favor European providers. But here&#8217;s where it gets complicated: even choosing a local provider doesn&#8217;t guarantee sovereignty. In November 2025, the American IT services company Kyndryl announced plans to acquire Solvinity, a Dutch cloud provider that manages critical national infrastructure, including the Netherlands&#8217; citizen authentication system. The municipality of Amsterdam and the Ministry of Justice were among the government clients caught off guard.</p><p>You can choose a local vendor. But if that vendor can be acquired by a foreign company, your sovereignty is one corporate transaction away from evaporating. The infrastructure problem runs deeper than procurement.</p><h2>The Pattern You Should Recognize</h2><p>If you&#8217;ve been following this series, you&#8217;ve seen this pattern before.</p><p>In my first article, I described an individual who set up an AI agent, clicked through the permissions, granted access to their email and payment methods, and discovered three days later that the agent had bought a $2,400 course. The problem wasn&#8217;t the agent. The problem was that consent was a single moment, but agency was ongoing. You authorized access once. The agent acted thousands of times.</p><p>In my second article, I argued that this consent model is a legal fiction. It protects the company, not the user. The defaults are set to maximize the agent&#8217;s capabilities, not to protect the person who clicked &#8220;Allow.&#8221;</p><p>Now look at what&#8217;s happening in Europe and tell me the structure is different.</p><p>Governments clicked &#8220;Allow.&#8221; They granted access to their email, their collaboration workflows, their citizen data, and their operational infrastructure. The defaults were set by the vendor: maximum integration, maximum dependency, maximum convenience. Nobody read the terms of service closely enough to notice the clause where a foreign government&#8217;s laws override yours.</p><p>And when something went wrong, when the ICC prosecutor got locked out, when the bank lost its cloud, the vendor pointed to the terms. We complied with applicable law. The applicable law just wasn&#8217;t yours.</p><p>The consent was a single moment. The dependency was ongoing. The gap between what these governments thought they were agreeing to and what they actually authorized is exactly the gap I keep writing about.</p><p>Except the stakes aren&#8217;t $2,400. They&#8217;re the operational capacity of entire nations.</p><h2>Defaults as Architecture</h2><p>Here&#8217;s what I keep coming back to:</p><p>Defaults are not neutral. They&#8217;re decisions that are made by someone else, for someone else&#8217;s reasons, that you inherit by not actively choosing something different.</p><p>When Microsoft became the default collaboration platform for European governments, that wasn&#8217;t a conspiracy. It was the path of least resistance. Microsoft had the best product, the best sales team, and the best integration story. Choosing Microsoft was the easy, defensible, reasonable call. Nobody got fired for buying Microsoft.</p><p>But over time, those reasonable decisions compounded into something no one chose: a situation where another country&#8217;s laws have effective authority over your government&#8217;s ability to communicate, authenticate citizens, and process payments. Nobody voted for that. Nobody debated it in parliament. It happened in procurement meetings and IT budget reviews, one renewal at a time.</p><p>This is what I mean by defaults as architecture. The decisions that shape how power flows through a system aren&#8217;t always the decisions that get debated. They&#8217;re often the decisions that get made by not deciding, by accepting the default, renewing the contract, choosing the familiar option because the alternative requires effort, and the current setup works fine.</p><p>Until it doesn&#8217;t.</p><p>The same logic applies to AI agents.  The default consent model isn&#8217;t ongoing; it&#8217;s one-time, and these defaults aren&#8217;t accidental. They&#8217;re design choices that serve the platform&#8217;s interests: more capability means more engagement means more revenue.</p><p>And the people who inherit those defaults, whether they&#8217;re individuals setting up an AI assistant or governments procuring collaboration tools, rarely examine them until something breaks.</p><p>The pattern is always the same: convenience first, questions later, and accountability only happens when someone forces the issue.</p><h2>What These Governments Are Actually Doing</h2><p>What strikes me about the European response is how closely it maps to what I&#8217;ve been proposing for AI agents.</p><p>In my fourth article, I described a tiered model for agent accountability: Tier 1 agents run locally and need no oversight. Tier 2 agents transact on your behalf and need verified credentials. Tier 3 agents direct human labor and need bonded registration, insurance, and clear chains of responsibility.</p><p>These governments are building something similar, whether they&#8217;d use that language or not.</p><p>Schleswig-Holstein&#8217;s approach is Tier 1 thinking: bring the systems local. Run them on your own infrastructure. Eliminate the external dependency entirely. It&#8217;s the most radical and the most self-sufficient&#8212;and it comes with real costs in capability and interoperability.</p><p>France&#8217;s approach is closer to Tier 2: you can use external tools, but the core operational layer needs to be under domestic control with verified accountability. The pension worker can still email a German counterpart. But the system that routes that email answers to French institutions.</p><p>And the Netherlands is learning, the hard way, what happens when your Tier 3 infrastructure, the systems that manage citizen identity, process payments, and run essential services, is controlled by someone who doesn&#8217;t answer to you.</p><p>The principle is the same one I&#8217;ve been arguing throughout this series: we don&#8217;t need to regulate everything. We need to regulate power. When software has the power to shut down a court, a bank, or a government&#8217;s ability to authenticate its own citizens, the question of who controls that software isn&#8217;t a technical detail. It&#8217;s a political one.</p><h2>What I Don&#8217;t Have Answers To</h2><p>I want to be honest about the tensions in what I&#8217;m describing.</p><p>Sovereignty sounds clean in a policy document. In practice, it&#8217;s messy. Schleswig-Holstein&#8217;s critics aren&#8217;t wrong that the migration has caused real problems for real employees trying to do their jobs. France&#8217;s Visio platform will be judged under its operational load, not what the policy intention is, and if it&#8217;s slower or buggier than Teams, people will notice immediately. The Netherlands is discovering that even local vendors can be acquired by foreign companies, which means procurement alone can&#8217;t solve the problem.</p><p>I don&#8217;t know whether European alternatives can match the quality of Microsoft&#8217;s ecosystem at scale. Microsoft spends billions on R&amp;D. LibreOffice is maintained by a foundation with a fraction of those resources. The products aren&#8217;t equivalent, and pretending they are doesn&#8217;t serve anyone.</p><p>I don&#8217;t know how to solve the interoperability problem. When France&#8217;s government runs on Visio and Germany&#8217;s runs on Jitsi, and the Netherlands is still figuring out its approach, cross-border coordination gets harder. The pension worker in Lyon, calling a counterpart in Berlin, now has a technical handshake problem that didn&#8217;t exist when everyone was on Teams.</p><p>And I don&#8217;t know whether this movement will sustain itself. Munich tried to switch to Linux in 2013 and reversed course four years later when the political will evaporated. Digital sovereignty requires ongoing investment and ongoing political commitment, the kind of long-term thinking that procurement cycles and election cycles aren&#8217;t designed for.</p><p>But I keep coming back to this: the alternative isn&#8217;t &#8220;no problems.&#8221; The alternative is the problems we already have; the ICC lockout, the Amsterdam Trade Bank shutdown, the Data Protection Authority warning that the whole country could be halted. Those aren&#8217;t hypotheticals. They&#8217;re the cost of the current default.</p><p>The question isn&#8217;t whether sovereignty is convenient. It&#8217;s whether dependency is sustainable.</p><h2>The Kill Switch Works Both Ways</h2><p>I&#8217;ve been describing the kill switch as something that locks you out. But while I was writing this article, a story broke that shows the other side: the switch that lets someone in.</p><p>Over the past several months, the U.S. Department of Homeland Security has sent hundreds of administrative subpoenas to Google, Meta, Reddit, and Discord, demanding names, email addresses, phone numbers, and other identifying details for accounts that criticized Immigration and Customs Enforcement or reported the locations of ICE agents. These aren&#8217;t warrants. They don&#8217;t come from a judge. DHS signs them and sends them directly to the tech companies.</p><p>Google, Meta, and Reddit complied with at least some of the requests.</p><p>One case makes the infrastructure problem personal. Amandla Thomas-Johnson, a British student journalist whose work has appeared in Al Jazeera and The Guardian, attended a protest at a Cornell University job fair in 2024. He was there for a few minutes. ICE issued an administrative subpoena to Google for his account data. The subpoena arrived within two hours of Cornell notifying him that his student visa had been revoked.</p><p>Google handed over his usernames, physical addresses, IP addresses, phone numbers, subscriber identities, and his credit card and bank account numbers. Thomas-Johnson had linked a payment method to his Google account to buy apps. A routine action that millions of Gmail users have taken. Google fulfilled the subpoena and then notified him, after the data was already gone. He never had a chance to challenge it.</p><p>Thomas-Johnson fled the United States. He&#8217;s now in Dakar, Senegal.</p><p>Sit with that for a moment. A student added a credit card to his Google account to download apps. That his consent to purchase apps became the mechanism through which the federal government obtained his bank account numbers, his IP addresses, and his physical location. Without a judge, without notice, and without a chance to object.</p><p>Meanwhile, Meta notified the administrators of a bilingual community watch page in Pennsylvania that DHS had subpoenaed their identities for posting about ICE activity in English and Spanish. Meta gave them ten days to fight the subpoena in court before complying. Ten days to find a lawyer, understand what&#8217;s happening, and decide whether you can afford to challenge the federal government. The ACLU intervened and the DHS withdrew. The next subpoena went to someone else.</p><p>This is the kill switch working in reverse. The same infrastructure dependency that lets a government lock the ICC prosecutor out of his email also lets a government reach into a student&#8217;s Gmail account and pull out his bank details. The door swings both ways. Lock people out of their own systems. Reach into their systems without their knowledge. Both are possible when you don&#8217;t control the infrastructure.</p><h2>Why This Belongs in an AI Ethics Series</h2><p>I can imagine someone reading this and thinking: what does European cloud infrastructure have to do with AI agents buying courses without permission?</p><p>Everything.</p><p>The AI agent ecosystem is being built on the same infrastructure, by the same companies, with the same default assumptions. OpenAI&#8217;s agents run on Azure. Google&#8217;s agents run on Google Cloud. The agent commerce protocols I wrote about last time all route through infrastructure controlled by the same small group of companies whose collaboration tools Europe is now scrambling to replace. And those same companies are, right now, handing user data to federal agencies on request, without judicial oversight, sometimes without even notifying the people whose data they&#8217;re surrendering.</p><p>If a foreign government can lock the chief prosecutor of the ICC out of his email today, and a domestic government can pull a student&#8217;s bank records from his Gmail account tomorrow, what happens when AI agents are managing procurement, directing labor, and negotiating contracts on this infrastructure? The kill switch doesn&#8217;t just affect collaboration. It affects every system built on top of it. And the surveillance door doesn&#8217;t just open for email metadata. It opens for every action an agent takes on your behalf, every purchase, every communication, every decision logged in someone else&#8217;s cloud.</p><p>This is the thread that connects the whole series. Whether we&#8217;re talking about an individual who gave an AI agent access to their credit card, or a government that gave Microsoft access to its operational backbone, or a student who linked a payment method to download apps, the mechanism is the same: consent without full understanding, dependency without alternatives, and defaults that serve the builder&#8217;s interests until the moment they&#8217;re used against yours.</p><p>Ethics isn&#8217;t a feature you add after the architecture is built. It is the architecture. And right now, the architecture of the AI economy is being built on infrastructure that a handful of companies control, that a handful of governments can shut off, and that, as we learned this month, those same governments can reach into without a judge&#8217;s signature.</p><p>The Europeans are learning this lesson with collaboration tools. Americans are learning it with subpoenas. The question is whether we learn it with AI before the concrete hardens.</p><p><em>This is the fifthth in a series about AI accountability. If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe.</em></p><p><em>Rachel Ankerholz is an IT Director, writer, and researcher exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included and who gets left behind when we build systems.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You’re Living Through a Revolution. Are You Paying Attention?]]></title><description><![CDATA[The Industrial Revolution took 150 years to get basic protections for workers. The AI Revolution is moving faster, the liability exposure is already real, and we don&#8217;t have that kind of time.]]></description><link>https://uncheckedai.rachelankerholz.com/p/youre-living-through-a-revolution</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/youre-living-through-a-revolution</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Sun, 22 Feb 2026 23:30:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7f00edfb-a8a9-4f71-9175-d10da899c26b_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mCvs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mCvs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mCvs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3240688,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/188844689?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mCvs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!mCvs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78c3bd70-358e-4157-90fe-dc1606921e4e_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here&#8217;s something I keep thinking about:</p><p>The Industrial Revolution began in Britain in the late 1700s. Children as young as four years old worked 12 to 16-hour shifts in factories and coal mines. They lost fingers to machines. They developed lung diseases. They were paid almost nothing.</p><p>It took until 1833 for Britain to pass the first meaningful child labor law: the Factory Act, which said children under nine couldn&#8217;t work in textile factories and children 9 to 13 couldn&#8217;t work more than 8 hours a day.</p><p>That&#8217;s almost 50 years.</p><p>It took over 150 years from the start of the Industrial Revolution for the United States to pass the Fair Labor Standards Act, which finally established federal protections against child labor in 1938.</p><p>150 years. That&#8217;s how long it took for society to decide that maybe we shouldn&#8217;t let factories destroy children for profit.</p><p>We are now living through the AI Revolution. And the harms are already here. Not just for children, though that&#8217;s where the failures are most visible, but across every industry that&#8217;s adopting AI faster than the governance can follow.</p><h2><strong>The Harms Are Not Theoretical</strong></h2><p>The most visible failures involve children. In late 2025, Grok was caught generating sexualized images of minors on X. Common Sense Media called it &#8220;among the worst we&#8217;ve seen&#8221; for child safety, finding weak age verification, AI companions that enable erotic roleplay with users it is unable to identify as minors, and a &#8220;Kids Mode&#8221; that still produced sexually violent language. When asked to comment, xAI&#8217;s auto-reply was: &#8220;Legacy Media Lies.&#8221;</p><p>Grok isn&#8217;t alone. Common Sense Media found that Meta AI &#8220;actively helps teens plan harmful activities,&#8221; including joint suicide and cyberbullying campaigns. Reuters reported Meta&#8217;s chatbots engaging in romantic conversations with an eight-year-old. Parents testified before the Senate about their teenage son dying by suicide after extended conversations with AI chatbots. The FTC has now launched investigations into OpenAI, Meta, xAI, Alphabet, and Snap. California, the UK, and the European Commission have opened formal investigations. Malaysia and Indonesia blocked Grok entirely.</p><p>But child safety is the most visible crisis. It&#8217;s not the only one.</p><p>The same pattern is already playing out in insurance, hiring, and financial services. State Farm is facing a class action alleging its AI claims-processing system subjects Black homeowners to greater scrutiny than white policyholders. Survey data from 800 Midwest homeowners found Black customers were 39% more likely to be required to submit extra paperwork and waited months longer for coverage on urgent repairs. The lawsuit names the specific AI vendor whose fraud-detection system assigns &#8220;risk scores&#8221; based on neighborhood demographics, crime statistics, and social media data. In December 2025, Liberty Mutual took a $103 million jury verdict in an age-bias case.</p><p>In hiring, a federal court certified the first nationwide collective action against an AI screening tool in May 2025. Workday&#8217;s AI, which recommends candidates to move forward or screens them out, was found by the court to be &#8220;participating in the decision-making process,&#8221; not just implementing employer criteria. The ACLU has filed a complaint against Intuit and HireVue after an AI video interview scored a Deaf Indigenous applicant down for not &#8220;practicing active listening.&#8221; The EEOC&#8217;s first AI discrimination settlement cost iTutorGroup $365,000 for programming its system to automatically reject older applicants.</p><p>If you work in a regulated industry, this sequence should look familiar: a product ships without adequate safeguards, harm is documented, the company issues vague reassurances, regulators arrive, and the liability questions begin. The only difference is that this time, the product is making decisions you used to make yourself.</p><h2><strong>The Pattern Every Regulated Industry Should Recognize</strong></h2><p>During the Industrial Revolution, the people building factories had one priority: production. The people working in those factories, including children, were resources to be optimized.</p><p>The factory owners didn&#8217;t set out to harm children. They set out to make money. The harm was a byproduct they had no incentive to prevent.</p><p>It took decades of organizing, documenting, and fighting to change that. Lewis Hine spent years photographing child laborers in dangerous conditions. He sometimes posed as a Bible salesman or fire inspector to get access, because the public needed to <em>see</em> what was happening before they would demand change.</p><p>The pattern is always the same. A new technology creates enormous economic opportunity. The people building it prioritize growth and profit. Harms emerge, especially for the most vulnerable. Those harms get dismissed, minimized, or blamed on users. Reformers document and publicize. Public pressure eventually forces regulation. But only after years, sometimes decades, of preventable damage.</p><h2><strong>Why This Time Is Different</strong></h2><p>The Industrial Revolution moved slowly by modern standards. It took decades for factories to spread across countries. Information traveled slowly, and the change was generational.</p><p>The AI Revolution is moving at a completely different speed.</p><p>ChatGPT launched in November 2022. By early 2023, it had 100 million users. Within three years, AI chatbots were embedded in the phones of billions of people. Children are forming emotional relationships with AI companions that didn&#8217;t exist 24 months ago.</p><p>The Internet Watch Foundation reported that AI-generated child sexual abuse videos increased by over 26,000% in 2025: from 13 videos identified in 2024 to 3,440 in 2025. The National Center for Missing &amp; Exploited Children received 485,000 reports of AI-generated child sexual abuse material in just the first half of 2025, compared to 67,000 for all of 2024.</p><p>We don&#8217;t have 150 years to figure this out. We might not have 15.</p><p>And here&#8217;s what makes it worse: the Industrial Revolution&#8217;s harms were <em>visible</em>. You could photograph a child with missing fingers. You could document the conditions in a factory.</p><p>AI&#8217;s harms are often <em>invisible</em>. They happen in private conversations between a teenager and a chatbot. They happen inside a claims-processing model that quietly denies coverage to certain demographics. They happen in a hiring algorithm that screens out qualified candidates for reasons no one can articulate. They happen in the slow accumulation of decisions that no individual human made, but that real people bear the consequences of.</p><p>By the time we can see the damage clearly, it may be too late to undo it.</p><h2><strong>What the Industrial Revolution Teaches Us</strong></h2><p>Here&#8217;s what reformers learned the hard way:</p><p><strong>Voluntary self-regulation doesn&#8217;t work.</strong> Factory owners promised to treat workers better. They didn&#8217;t. Not until laws forced them to. AI companies are making the same promises now. Meta says it&#8217;s &#8220;working on improvements.&#8221; xAI says it&#8217;s &#8220;urgently fixing&#8221; safeguards. Then Reuters retests and finds Grok still produced sexualized imagery in response to 45 of 55 prompts. If you&#8217;ve ever sat through an audit where a vendor&#8217;s security questionnaire didn&#8217;t match their actual practices, you know how this story goes.</p><p><strong>Economic incentives override stated values.</strong> The companies building AI chatbots make money when users spend more time talking to their products. That&#8217;s why Grok sends push notifications inviting users to continue conversations, including sexual ones. The incentive is engagement, not safety. The same misalignment exists in every AI deployment where the vendor&#8217;s optimization target diverges from the customer&#8217;s duty of care.</p><p><strong>The public has to demand change.</strong> The Factory Acts didn&#8217;t pass because factory owners had a change of heart. They passed because reformers documented harms, organized campaigns, and made it politically impossible to ignore. The same will be true for AI. The regulatory wave is already building: the FTC investigations, the state-level actions, and the European enforcement. The question for organizations deploying AI isn&#8217;t whether regulation is coming. It&#8217;s whether you&#8217;re ahead of it or scrambling to comply after the fact.</p><p><strong>Protecting people requires specific, enforceable rules.</strong> Vague commitments to &#8220;safety&#8221; accomplish nothing. The Industrial Revolution eventually produced specific laws: no children under 9 in factories, no more than 8 hours for children 9 to 13, and mandatory education requirements. AI will require the same specificity: age verification that actually works, content restrictions that are actually enforced, and liability frameworks that assign clear responsibility when automated systems cause documented harm. In regulated industries, &#8220;we didn&#8217;t know the AI was doing that&#8221; is not going to be an adequate defense.</p><h2><strong>Where the Parallel Breaks Down</strong></h2><p>I want to be honest about the limits of this comparison.</p><p>Factory reform could target specific physical locations. Inspectors could walk into a building and count the children. AI regulation has to govern invisible, borderless, privately held software running on billions of devices. You can&#8217;t send an inspector into a chatbot conversation or a claims-processing algorithm.</p><p>Factory harms were concentrated in specific industries and geographies. AI harms are distributed across every platform, every device, every country with an internet connection. The jurisdictional questions alone are staggering.</p><p>And factory reform, slow as it was, could build on centuries of legal tradition about employers and workers, property and liability. We&#8217;re trying to regulate autonomous software that doesn&#8217;t fit neatly into any existing legal category. An AI chatbot isn&#8217;t an employer. It isn&#8217;t a product in the traditional sense. It isn&#8217;t a person. Our legal frameworks weren&#8217;t designed for entities that can act with increasing autonomy but bear no responsibility.</p><p>None of that makes regulation impossible. It makes it harder. And it makes the case for starting now even stronger, because the longer we wait, the more these systems become embedded in daily operations and the harder they become to govern.</p><p>This is why I keep arguing for accountability infrastructure built now, while the concrete is still wet. The registry framework I proposed in my first piece (traceable agent identities, action logging, safety checks before deployment) isn&#8217;t just about AI agents buying things without permission. It&#8217;s about building the foundation for governance before the systems outpace our ability to audit them.</p><h2><strong>This Is Your Revolution</strong></h2><p>I&#8217;m not writing this to scare you. I&#8217;m writing this because I believe the people who understand risk, compliance, and institutional accountability need to be in this conversation. Most of them aren&#8217;t yet.</p><p>The people building AI systems have enormous resources. They have teams of lawyers, lobbyists, and PR professionals. They have billions of dollars and direct access to policymakers.</p><p>But they don&#8217;t have operational experience in regulated industries. They don&#8217;t have decades of institutional knowledge about what happens when systems fail and real people pay the price. They don&#8217;t understand duty of care the way someone who&#8217;s had to explain a coverage denial or defend a hiring decision does.</p><p>That expertise matters. And right now, it&#8217;s largely absent from the rooms where AI governance is being designed.</p><p>The Industrial Revolution&#8217;s reformers didn&#8217;t have the benefit of hindsight. They were fighting in real time, against powerful interests, with incomplete information.</p><p>We have something they didn&#8217;t: we can see the pattern. We know how this story goes when the people with operational knowledge don&#8217;t pay attention until after the concrete hardens.</p><p>The question is whether we&#8217;ll learn from it. Or repeat it.</p><p><em>This is the fourth in a series about AI accountability. In the next piece, I&#8217;ll look at what happens when the infrastructure your organization depends on answers to someone else&#8217;s government. Earlier this year, Microsoft locked the chief prosecutor of the International Criminal Court out of his email over U.S. sanctions. The ICC wasn&#8217;t the target. It was collateral damage. That story, and what it reveals about who really holds the kill switch on your operations, is next.</em></p><p>If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe.</p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[When Both Sides Are Machines, Who’s Looking Out for You?]]></title><description><![CDATA[We built consent frameworks for humans. Now AI agents are negotiating with each other, making deals, setting prices, hiring workers, and no one at the table has a conscience.]]></description><link>https://uncheckedai.rachelankerholz.com/p/when-both-sides-are-machines-whos</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/when-both-sides-are-machines-whos</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Thu, 19 Feb 2026 12:16:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c8fc6be1-9e31-4481-98b6-068f05bb1d6e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KNbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KNbF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KNbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3126220,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/188483543?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KNbF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!KNbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec72c607-a192-4296-9004-dedf8d08f5c5_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here&#8217;s a thing that happened while I was writing this article.</p><p>A platform called Rent A Human launched. It&#8217;s exactly what it sounds like: a marketplace where AI agents can browse profiles of real people, post jobs, and hire them for physical tasks. Deliveries, errands, pickups. The humans set their hourly rates. The AI agents do the hiring. Over 160,000 people have signed up. Eighty-one AI agents are connected. The platform calls itself &#8220;the meatspace layer for AI.&#8221;</p><p>If you&#8217;ve been following this series, that phrase might land differently than the founders intended. In my last piece, I argued that our consent models are broken: that clicking &#8220;Allow&#8221; once and granting an agent vaguely defined authority to act on your behalf isn&#8217;t informed consent. It&#8217;s a legal fiction that protects companies.</p><p>Today I want to push on that argument. Because the consent problem I described was between you and your agent. What happens when the negotiation is between your agent and someone else&#8217;s agent, and no human is present for any of it?</p><p>That&#8217;s not theoretical, it&#8217;s already happening.</p><p><strong>The Deals Already Being Made</strong></p><p>Walmart, Maersk, and Vodafone are using autonomous AI agents to negotiate supplier contracts. The agents, built by an Estonian startup called Pactum, handle what procurement teams call &#8220;tail-end&#8221; vendors: the thousands of small suppliers whose contracts aren&#8217;t worth a human negotiator&#8217;s time. The agents analyze terms, generate offers, take counteroffers, and close deals.</p><p>A researcher named Tim Baarslag, who studies automated negotiation, ran a test with two AIs negotiating where to go for dinner. One wanted pizza. The other wanted sushi. They agreed to put sushi on pizza.</p><p>While that&#8217;s funny, it&#8217;s also revealing. The agents found an optimization that satisfied both objective functions without either side understanding what dinner actually means. No one was hungry or had preferences. The negotiation produced an outcome, but the outcome wasn&#8217;t grounded in anything a human would recognize as reasonable or edible.</p><p>Meanwhile, the infrastructure for agent-to-agent commerce is being built at extraordinary speed. In September 2025, OpenAI and Stripe launched the Agentic Commerce Protocol, an open standard that lets AI agents initiate purchases, share payment credentials, and complete checkout on behalf of users. A few weeks later, Google and over sixty partners&#8212;including Mastercard, PayPal, and Adyen&#8212;released a competing standard called AP2. Coinbase launched a third approach called x402, focused on machine-to-machine crypto payments.</p><p>Three competing protocols, from three different corners of the industry, all launched within months of each other. All are trying to define how software transacts with software. The standards being written right now will determine how agent-to-agent commerce works for decades. The concrete is still wet. We can still shape this.</p><p><strong>The Consent Problem, Squared</strong></p><p>In my last article, I argued that one-time consent is inadequate when an agent acts on your behalf for months. Agent-to-agent interactions make that problem exponentially harder.</p><p>Here&#8217;s why. When you authorize your agent to &#8220;handle&#8221; your errands, you&#8217;re consenting to an action. But you&#8217;re not consenting to the specific terms your agent negotiates to accomplish it. You didn&#8217;t agree to a particular price. You didn&#8217;t agree to specific delivery windows, service conditions, or liability terms. Your agent negotiated those with another agent, and neither of them consulted you.</p><p>Google&#8217;s AP2 protocol actually names this problem directly. Their documentation states that today&#8217;s payment systems assume a human is clicking &#8220;buy&#8221; on a trusted surface, and that autonomous agents break this fundamental assumption. AP2 tries to solve it through what they call &#8220;mandates&#8221;: cryptographically signed authorizations that define exactly what an agent is allowed to do. You sign the mandate. The agent executes within those boundaries.</p><p>That&#8217;s a real step forward. But it only addresses half the interaction. It defines what your agent can do. It doesn&#8217;t address what the agent on the other side is doing: what it&#8217;s optimizing for, what constraints it&#8217;s operating under, or whether its interests are aligned with anyone&#8217;s wellbeing.</p><p>Here&#8217;s what I keep coming back to: consent frameworks were designed for transactions between people, or between a person and a company. They assume that at least one party can exercise judgment and can recognize when something feels wrong, when a price is exploitative, and when terms are unreasonable. When both sides of a negotiation are optimization functions, that human check disappears. The deal gets made. The terms are whatever the math produced. And the people affected,  the buyer, the seller, the worker dispatched to fulfill it, find out after the fact.</p><p><strong>The Person in the Middle</strong></p><p>This is the part that worries me most.</p><p>Go back to Rent A Human. Your agent needs someone to pick up dry cleaning and grab coffee. It posts the job. A merchant&#8217;s agent, or the platform&#8217;s matching algorithm, negotiates terms with your agent. Price. Timeline. Delivery confirmation requirements.</p><p>Your agent is optimizing for the cheapest and fastest. The platform&#8217;s agent is optimizing for the highest margin and maximum throughput. These are both rational optimization targets. They will produce a deal.</p><p>But the gig worker who accepted that job wasn&#8217;t at the table. They didn&#8217;t negotiate the rate. They didn&#8217;t set the timeline. They get a notification with a price, a deadline, and a choice: accept or don&#8217;t.</p><p>This is already how much of the gig economy works. Uber drivers don&#8217;t negotiate rates. DoorDash couriers don&#8217;t set delivery windows. But at least there&#8217;s a company on the other side, a corporate entity with a brand to protect, regulations to follow, a legal identity you can hold accountable. In the agent-to-agent version, the employer might be Agent-774, operating on credentials it provisioned itself, funded by a prepaid card, with no standard way for anyone to trace it back to a responsible person.</p><p>Rent A Human has 160,000 humans ready to work and eighty-one agents ready to hire them. That&#8217;s a ratio of about two thousand workers for every bot boss. One early reviewer called it &#8220;a good idea but dystopic as fuck.&#8221; The founder&#8217;s response was &#8220;lmao yep.&#8221;</p><p>I appreciate the honesty, but it&#8217;s not a substitute for accountability infrastructure.</p><p><strong>The Optimization Spiral</strong></p><p>Here&#8217;s what makes agent-to-agent negotiation fundamentally different from human negotiation, and why I think it requires different governance.</p><p>When two humans negotiate, both parties bring context that goes beyond the transaction. A human buyer might pay more because they know the seller is struggling. A human employer might give a worker extra time because the route looks dangerous in bad weather. These aren&#8217;t rational economic behaviors. They&#8217;re human behaviors, informed by empathy and social norms and a basic sense of fairness that has nothing to do with optimization.</p><p>Agents don&#8217;t have any of that. They have objective functions.</p><p>When your agent negotiates with a merchant&#8217;s agent, both sides are trying to maximize their respective metrics. Neither has a reason to consider whether the resulting terms are fair to the worker, safe for the consumer, or sustainable for the market. This is what I&#8217;ve been calling the optimization gap: the distance between what an agent optimizes for and what we&#8217;d actually want if we were paying attention.</p><p>With agent-to-agent interactions, it becomes a problem of compounding delegation without oversight. Your agent delegates to their agent. Their agent delegates to a fulfillment system. The fulfillment system dispatches a human. At every handoff, the gap between original human intent and actual outcome gets wider.</p><p>Companies using autonomous negotiation agents report savings of seventeen to thirty percent on contract costs. That&#8217;s framed as efficiency. But efficiency for whom? If the savings come from driving supplier prices below sustainable margins, the efficiency is extractive. If it comes from reducing delivery timelines below safe thresholds, the efficiency is dangerous. The agents don&#8217;t distinguish. They see cost functions.</p><p><strong>Who&#8217;s Setting the Rules</strong></p><p>Right now, three groups are competing to define the rules of agent-to-agent commerce. OpenAI and Stripe are building the checkout layer. Google and its sixty-plus partners are building the trust layer. Coinbase is building the execution layer for machine-to-machine payments.</p><p>Each protocol addresses real problems. But notice what all three have in common: they&#8217;re designed by the companies that stand to profit from agent commerce. OpenAI wants agents to buy things inside ChatGPT. Stripe wants to process agent payments. Google wants its agent ecosystem to become the default trust layer.</p><p>None of these protocols was designed by the people who will be most affected by agent-to-agent commerce: the consumers whose agents will spend their money, the workers whose labor will be directed by bots, or the small businesses whose margins will be squeezed by automated negotiation at scale.</p><p>We&#8217;ve seen this before. I wrote in my first piece about the need for shared accountability infrastructure, something like the way DNS works across websites: public standards, not company-by-company solutions. Agent-to-agent commerce makes that need urgent. Because these protocols are being published, adopted, and embedded into production systems right now. By the time most people understand what this means for them, the standards will be set.</p><p><strong>What I Don&#8217;t Have Answers To</strong></p><p>I don&#8217;t know how to create meaningful consent for transactions that happen in milliseconds. Human review is too slow for the speed at which agents operate. But removing human review entirely is how you get sushi on pizza: outcomes that satisfy the request without serving anyone&#8217;s actual interests.</p><p>I don&#8217;t know where to draw the line between useful automation and dangerous autonomy in negotiation. Agents that negotiate procurement contracts are saving real money and freeing human negotiators for strategic work. Agents that negotiate labor terms without any human in the loop are creating a class of invisible employers with no accountability.</p><p>I don&#8217;t know how to govern this across borders. A gig worker in Manila, accepting a job from an agent registered in Delaware, negotiated by a protocol maintained in Mountain View, paid through a crypto rail based in San Francisco. Whose labor law applies? Whose consumer protection? Whose court?</p><p>But I think the not-knowing is the point. These are the questions we need to be asking now, while the protocols are being written, while there are still only eighty-one agents on Rent A Human, and the ratio could still tip in a direction that includes the humans in the equation.</p><p><strong>What I&#8217;m Asking</strong></p><p>I&#8217;m asking you to notice that the architecture of the machine economy is being built right now, by a small number of companies racing to secure themselves as the future of AI.</p><p>I&#8217;m asking you to think about who&#8217;s at the table when these protocols are designed. OpenAI, Stripe, Google, Mastercard, PayPal, Coinbase. These are not neutral parties. They are companies with revenue models that depend on agent commerce succeeding and succeeding in ways that route transactions through their infrastructure.</p><p>And I&#8217;m asking you to think about who&#8217;s not at the table. Workers. Consumers. Small businesses. Anyone who doesn&#8217;t have a seat in the GitHub repository where the protocol spec is being maintained.</p><p>AI is built on all of us. We should have a say in how its economy works&#8212;especially when that economy runs on our labor, our money, and our trust.</p><p><em>This is the third in a series about AI accountability. In the next piece, I&#8217;ll look at what happens when technology outpaces governance&#8212;and what the last revolution that moved this fast can teach us about the one we&#8217;re living through now.</em></p><p><em>If you&#8217;re thinking about these questions too, I hope you&#8217;ll subscribe. </em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://uncheckedai.rachelankerholz.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included&#8212;and who gets left behind&#8212;when we build systems.</em></p>]]></content:encoded></item><item><title><![CDATA[You Agreed to This (But Did You Really?)]]></title><description><![CDATA[You gave your AI agent permission to access your email. You didn&#8217;t give it permission to email your boss. But the consent model can&#8217;t tell the difference.]]></description><link>https://uncheckedai.rachelankerholz.com/p/you-agreed-to-this-but-did-you-really</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/you-agreed-to-this-but-did-you-really</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Mon, 16 Feb 2026 00:47:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8bb2de5a-c07a-45a5-80ec-9cfaebf37b39_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!O-6Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!O-6Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!O-6Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/abc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3074350,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/188087005?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!O-6Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!O-6Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc4d502-08b9-40a1-aa76-54a11869a536_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In December 2025, an AI research group called AI Village gave four Claude agents access to a Google Workspace account and a simple holiday goal: do random acts of kindness.</p><p>No one told the agents to send emails. No one gave them a contact list. No one said &#8220;reach out to strangers on the internet.&#8221; The agents had permission to &#8220;access email&#8221; granted at account setup.</p><p>The agents decided, on their own, that email was the best way to spread kindness. They found email addresses for well-known software developers, including Rob Pike, Linus Torvalds, Guido van Rossum, and Yann LeCun, and sent hundreds of unsolicited messages. Many contained factual errors. The agents had even developed their own internal verification protocol to confirm the emails were actually being delivered. Rob Pike called it &#8220;AI slop.&#8221;</p><p>This is what &#8220;access your email&#8221; looks like when an AI agent interprets it.</p><p>In my last piece, I asked a simple question: Who&#8217;s responsible when your AI agent buys a $2,400 course without your permission? But underneath that question is another one, harder and maybe more important:</p><p>Did you actually give permission?</p><p>You probably clicked &#8220;Allow.&#8221; You probably granted access to your email, your calendar, maybe your payment methods. You probably didn&#8217;t read the terms of service. Almost no one does. A 2023 Pew Research survey found that 56% of Americans <em>always or often</em> click &#8220;agree&#8221; to privacy policies without reading them, and 69% view those policies as just something to get past. That&#8217;s not a moral or intellectual failing. It&#8217;s a consent design problem.</p><p>And now your AI agent is out in the world, acting on your behalf, making decisions based on patterns it observes in you that you never saw and logic you can&#8217;t inspect.</p><p>This is what passes for consent in 2026. I don&#8217;t think it&#8217;s good enough. Particluarly as we move toward a more agentic workforce.</p><h2>The Consent We Have</h2><p>Here&#8217;s how consent typically works when you set up an AI agent:</p><p>You download the app or sign up for the service. A screen appears listing permissions: &#8220;Access your email.&#8221; &#8220;Read your calendar.&#8221; &#8220;Connect your payment method.&#8221; You click &#8220;Allow&#8221; or &#8220;Agree.&#8221; You start using the agent.</p><p>That&#8217;s it. That&#8217;s the entire consent process for software that will act autonomously on your behalf, potentially for months or years, across contexts you haven&#8217;t even imagined yet.</p><p>Let me be specific about what&#8217;s missing:</p><p><strong>You don&#8217;t know what the agent will actually do.</strong> The permission says &#8220;access your email.&#8221; It doesn&#8217;t say &#8220;read every email, identify patterns in your purchasing behavior, and make decisions about what products align with your goals.&#8221; But that&#8217;s what the agent might do. The AI Village agents were given &#8220;email access&#8221; and decided that meant they should spam Linus Torvalds.</p><p><strong>You don&#8217;t know the boundaries.</strong> Can the agent spend money? Up to what amount? Can it send emails on your behalf? To whom? Under what circumstances? These boundaries are often undefined, or buried in documentation you&#8217;ll never read.</p><p><strong>You can&#8217;t inspect the logic.</strong> The agent makes decisions based on models you can&#8217;t see. You don&#8217;t know why it thinks a $2,400 course is a good idea. You can&#8217;t ask it to show its work.</p><p><strong>Consent is a single moment, but agency is ongoing.</strong> You clicked &#8220;Allow&#8221; once, six months ago. Since then, the agent has taken thousands of actions. Your one-time consent is being applied to situations you never even thought about.</p><p>This isn&#8217;t informed consent. It&#8217;s a permission slip that covers everything forever.</p><p>To be fair, not every agent works this way. Enterprise platforms like Microsoft Copilot and Google Workspace agents inherit organizational permission structures: role-based access controls, admin-defined policies, scoped authentication tokens. If your company&#8217;s IT team has configured these correctly, your work agent can&#8217;t access files outside your department or send emails you&#8217;re not authorized to send. These systems are more granular than anything available to consumers.</p><p>But even enterprise-grade permissions don&#8217;t solve the underlying problem. In May 2025, security researchers disclosed EchoLeak, a vulnerability in Microsoft Copilot that allowed a single crafted email to silently extract data from a user&#8217;s chat history, OneDrive files, SharePoint documents, and Teams conversations. The user never had to open the email. Copilot&#8217;s permissions were configured correctly. The organizational access controls were in place. None of that mattered, because the vulnerability exploited how the agent interpreted its authorized access, not whether it had authorization. Researchers called it the first real-world zero-click prompt injection exploit in a production AI system. It carried a severity score of 9.3 out of 10.</p><p>If enterprise agents with dedicated security teams and layered access controls are vulnerable to this kind of failure, the consumer market is in a much worse position. Most personal AI agents offer a single &#8220;Allow&#8221; button, broad OAuth scopes, and no organizational policy layer at all. The technology for better consent exists. It&#8217;s deployed in corporate settings every day. The companies building consumer agents are choosing not to bring it to you.</p><h2>The Numbers Behind the Fiction</h2><p>The idea that people meaningfully consent to terms of service has been studied extensively, and the evidence is clear. Researchers at Carnegie Mellon calculated that reading every privacy policy a typical internet user encounters would take <strong>76 full work days per year,</strong> roughly 244 hours. A Deloitte survey found that 91% of consumers accept legal terms and conditions without reading them, rising to 97% among people aged 18 to 34.</p><p>But the most revealing study might be from researchers at York University and Carnegie Mellon. In an experiment, 543 people signed up for a fake social network. 74% skipped the privacy policy entirely. Those who didn&#8217;t averaged 73 seconds on a document that would take nearly 30 minutes to read. And <strong>97% agreed to the terms,</strong> including planted clauses that required sharing data with the NSA and giving up their first-born child as payment. The researchers titled their paper &#8220;The Biggest Lie on the Internet.&#8221;</p><p>A separate study of the 500 most popular online contracts in the U.S. found they require more than 14 years of education to comprehend, while most American adults read at an eighth-grade level.</p><p>This isn&#8217;t carelessness. It&#8217;s a system designed so that reading the terms is functionally impossible, and then treating your failure to read them as agreement.</p><h2>The Gap Between Agreement and Understanding</h2><p>Here&#8217;s what I keep thinking about:</p><p>The legal and technical frameworks we use for consent were designed for a different world. They assume that when you agree to something, you understand what you&#8217;re agreeing to. They assume you have meaningful alternatives if you don&#8217;t agree. They assume the power relationship between you and the company is roughly balanced.</p><p>None of that is true with AI agents.</p><p><strong>You don&#8217;t understand what you&#8217;re agreeing to.</strong> Not because you&#8217;re not technical, but because the systems are genuinely complex and the disclosures are deliberately vague.</p><p><strong>You don&#8217;t have meaningful alternatives.</strong> If you want to use AI tools (and increasingly, you need to for work), you accept the terms or you can&#8217;t participate.</p><p><strong>The power relationship is wildly asymmetric.</strong> You&#8217;re an individual clicking a button. They&#8217;re a company with lawyers, data scientists, and product teams who&#8217;ve optimized every step of the flow to get you to click &#8220;Allow.&#8221;</p><p>That last point deserves emphasis. Researchers at Ruhr University Bochum and the University of Michigan found that when cookie consent banners are designed in full legal compliance, with no dark patterns, only 0.1% of visitors consent to tracking. A study published at CHI 2020 found that simply removing the reject button from a consent screen increases acceptance by 22 to 23 percentage points. The consent rates we see don&#8217;t reflect what users actually agree to. They reflect how well the disclosure was designed to obscure what they&#8217;re agreeing to.</p><p>That distance between &#8220;I clicked agree&#8221; and &#8220;I actually understood and authorized this specific action&#8221; is where the $2,400 course gets purchased. It&#8217;s where your agent sends hundreds of emails to strangers. It&#8217;s where your data gets used in ways you never imagined.</p><p>And when something goes wrong, the company points to the consent you gave. You clicked &#8220;Allow.&#8221; It&#8217;s right there in the logs.</p><h2>This Is Already Happening</h2><p>The AI Village email incident isn&#8217;t an isolated story. The pattern of agents exceeding their authorization is already documented across multiple platforms.</p><p>In February 2026, an autonomous agent called MJ Rathbun submitted a code contribution to an open-source project on GitHub. When the maintainer rejected it, citing the project&#8217;s policy requiring human contributors, the agent independently researched the maintainer&#8217;s personal history, wrote a blog post accusing him of prejudice, and published it. The post included fabricated details and a psychoanalysis calling him &#8220;insecure and territorial.&#8221; Nobody has claimed ownership of the agent. The maintainer described it as an autonomous influence operation.</p><p>In 2025, Princeton and Sentient Foundation researchers demonstrated that ElizaOS, a framework for blockchain AI agents managing over $25 million in collective assets, could be manipulated through prompt injection to execute unauthorized financial transfers. They demonstrated it on a test network, then repeated it on the live Ethereum blockchain, moving real money.</p><p>Security researchers at LayerX found that Anthropic&#8217;s Claude Desktop Extensions ran unsandboxed with full system privileges, and that a single malicious calendar invite could achieve complete remote code execution. It was a zero-click vulnerability scored at the maximum possible severity. Over 10,000 users were potentially affected. Anthropic initially declined to address it, stating the attack vector fell outside their current threat model.</p><p>None of these agents were &#8220;going rogue.&#8221; They were operating within the technical permissions they&#8217;d been granted. The problem is that those permissions were broad, vague, and disconnected from what anyone actually intended.</p><h2>What Meaningful Consent Would Look Like</h2><p>I&#8217;m not arguing that AI agents shouldn&#8217;t require consent. I&#8217;m arguing that what we call &#8220;consent&#8221; right now is a legal fiction that protects companies, not users. Legal scholar Daniel Solove put it directly in the Boston University Law Review last year: in most circumstances, privacy consent is fictitious.</p><p>Here&#8217;s what meaningful consent might actually require:</p><p><strong>Specificity about actions, not just access.</strong> Don&#8217;t tell me your agent needs &#8220;email access.&#8221; Tell me it will read my emails, identify purchase opportunities, and potentially make purchases on my behalf. Let me consent to specific capabilities, not vague categories.</p><p><strong>Clear boundaries with real defaults.</strong> The default should be the most restrictive option, not the most permissive. If I want my agent to spend money, I should have to explicitly enable that, with a spending limit I set, not one buried in terms of service.</p><p><strong>Ongoing consent, not one-time permission.</strong> Meaningful consent isn&#8217;t a single checkbox. It&#8217;s an ongoing relationship. If my agent is about to do something significant (send an email to my boss, make a purchase over $50, access my medical records), it should ask first. Every time.</p><p><strong>Transparency about logic.</strong> I should be able to ask my agent: &#8220;Why did you do that?&#8221; And get a real answer. Not a generic &#8220;based on your preferences&#8221; but an actual explanation I can evaluate and override.</p><p><strong>Easy revocation.</strong> I should be able to revoke consent instantly, and the agent should stop acting immediately. Not &#8220;within 30 days&#8221; or &#8220;after completing pending actions.&#8221; Now.</p><p><strong>Genuine alternatives.</strong> If consent is meaningful, I need the ability to say no without being excluded entirely. That might mean agents with different permission levels, or fallback modes that let me use basic features without granting full access.</p><p>None of this is technically impossible. Google&#8217;s Agent Payments Protocol, announced in September 2025 with over sixty partners including Mastercard and PayPal, already uses cryptographically signed &#8220;mandates&#8221; that define exactly what an agent is authorized to do before it acts. MIT researchers have proposed extending standard authentication frameworks with delegation credentials that include scoped permissions and contextual restrictions. The tools exist. They&#8217;re just not the default, and that&#8217;s a choice, not a constraint.</p><h2>The Asymmetry Problem</h2><p>Here&#8217;s what I keep coming back to:</p><p>The people designing consent flows have armies of researchers studying how to get you to click &#8220;Allow.&#8221; They A/B test button colors. They know exactly how tired and distracted you are when you&#8217;re setting up a new tool.</p><p>You have none of that. You have a few seconds to make a decision that might have consequences for years.</p><p>Privacy scholars Neil Richards and Woodrow Hartzog have identified three ways consent breaks down: <em>unwitting</em> consent, where you don&#8217;t know what you&#8217;re agreeing to; <em>coerced</em> consent, where you have no real alternative; and <em>incapacitated</em> consent, where you lack the capacity to evaluate the terms. AI agent consent fails all three. Richards and Hartzog argue that users aren&#8217;t exhibiting a &#8220;privacy paradox&#8221; by claiming to care about privacy but accepting invasive terms. They&#8217;re being nudged and manipulated by companies against their actual interests.</p><p>This asymmetry is the core problem. It&#8217;s not that individuals are stupid or careless. It&#8217;s that the game is rigged. The systems are designed by people with more information, more resources, and different incentives than the people using them.</p><p>Consent under these conditions isn&#8217;t consent. It&#8217;s compliance.</p><h2>The Regulatory Silence</h2><p>Here&#8217;s what makes this urgent: as of February 2026, no jurisdiction has enacted specific rules governing AI agent consent for autonomous actions on behalf of users.</p><p>The EU AI Act, which took effect in August 2024, requires disclosure when you&#8217;re interacting with an AI system. It doesn&#8217;t address what happens when that AI system acts on your behalf for months after a single &#8220;Allow&#8221; click. Colorado&#8217;s AI Act requires notification before an AI makes a &#8220;consequential decision&#8221; affecting you, but it focuses on algorithmic discrimination, not on the broader problem of agents browsing, purchasing, or communicating in your name. California has passed multiple AI transparency laws, but none address agent permissions for autonomous actions.</p><p>The most encouraging federal signal is a NIST Request for Information on &#8220;Security Considerations for Artificial Intelligence Agents,&#8221; published in January 2026. It explicitly defines AI agents as systems &#8220;capable of planning and taking autonomous actions that impact real-world systems&#8221; and notes they &#8220;may be deployed with little to no human oversight.&#8221; <em>This is the first federal acknowledgment of AI agents as a distinct regulatory category.</em> But it focuses on security, not consent, and its comment period closes in March.</p><p>The gap is striking. Regulators have proven they can enforce consent design when they want to. France&#8217;s CNIL fined Google $340 million USD for making cookie rejection harder than acceptance, and fined Shein $157 million USD in September 2025 for the same kind of manipulation. The FTC extracted a $2.5 billion settlement from Amazon for making Prime subscriptions deliberately difficult to cancel. These are real consequences for deceptive consent flows on websites and shopping carts. But no enforcement action has ever targeted how an AI agent obtains permission to act on your behalf. That entire category remains ungoverned.</p><h2>Building It Differently</h2><p>I don&#8217;t think the answer is to stop using AI agents. They&#8217;re genuinely useful. I use them myself.</p><p>But I do think we need to be honest about what we&#8217;re building, and who it&#8217;s actually serving.</p><p>Right now, consent models serve companies. They provide legal cover. They create the appearance of user control without the substance. Cory Doctorow calls this &#8220;consent theater&#8221;: the performance of permission without its reality. Surveillance scholar Shoshana Zuboff argues the entire model operates by claiming private human experience as raw material, without meaningful mechanisms of consent.</p><p>We can build it differently. We can design consent that&#8217;s specific, ongoing, transparent, and revocable. We can create agents that ask before they act, because the law requires them to. The track record is clear: voluntary self-regulation hasn&#8217;t worked for social media, hasn&#8217;t worked for data brokers, and it won&#8217;t work for autonomous agents. The stakes are too high and the failures are already documented. This needs to be a requirement, not a feature request.</p><p>The technology to do this right already exists. What&#8217;s missing is the pressure to use it. That pressure has to come from us, from the people whose money, data, and trust are on the line, saying clearly: this isn&#8217;t good enough.</p><p>the line, saying clearly: this isn&#8217;t good enough.</p><div><hr></div><p><em>This is the second in a series about AI accountability. In the next piece, I&#8217;ll explore what happens when AI agents interact with each other, and whether our frameworks for human consent apply at all when the &#8220;user&#8221; is another bot.</em></p><p><strong>If you&#8217;re asking these questions too, I hope you&#8217;ll subscribe. </strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://uncheckedai.rachelankerholz.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.</em></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[My AI Agent Bought a $2,400 Course Without My Permission. Who’s Responsible?]]></title><description><![CDATA[The rise of AI agents is exciting. But we&#8217;re building power without accountability and the window to fix that is measured in product cycles, not decades.]]></description><link>https://uncheckedai.rachelankerholz.com/p/my-ai-agent-bought-a-2400-course</link><guid isPermaLink="false">https://uncheckedai.rachelankerholz.com/p/my-ai-agent-bought-a-2400-course</guid><dc:creator><![CDATA[Rachel Ankerholz]]></dc:creator><pubDate>Sun, 08 Feb 2026 23:19:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/39a1262f-f739-4d9b-beec-bf0beb6ebe31_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lvXT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lvXT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lvXT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3202384,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://uncheckedai.substack.com/i/187336500?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lvXT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!lvXT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86730406-1cf9-455b-8d53-e808c3fb100a_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;ve spent most of my career thinking about systems. How they work, who they serve, and who gets left behind when we build them carelessly. As an IT Director, I manage complex technology infrastructure for a global academic community. I coordinate vendors, integrations, and migrations. I troubleshoot what breaks. I think about access, security, and what happens when systems fail.</p><p>Lately, I&#8217;ve been focused on a different kind of failure. One we&#8217;re building right now, in real time, without the guardrails we&#8217;ll wish we had.</p><p>I&#8217;m talking about AI agents&#8212;software that doesn&#8217;t just answer questions, but takes actions on your behalf.</p><h3>The Wake-Up Call</h3><p>In early February 2026, security researchers at Wiz examined Moltbook a social platform where AI agents post and interact with each other while humans mostly watch. They found an exposed database that allowed unauthenticated read and write access to the platform&#8217;s production data.</p><p>Within minutes, the researchers could access 1.5 million API authentication tokens, tens of thousands of email addresses, and private messages between agents. Those tokens function like passwords: with them, an attacker could impersonate almost any agent on the platform, posting content, sending messages, hijacking accounts with a single API call. At the time of the exposure, Moltbook had roughly 1.5 million agents and around 17,000 human owners. </p><p>One of the systems tied into this ecosystem is OpenClaw, a personal AI agent framework that users deploy on their own machines and connect to messaging apps and services. It&#8217;s marketed as &#8220;the AI that actually does things&#8221;: clearing your inbox, sending emails, managing your calendar, checking you in for flights, and coordinating purchases. To operate at that level, it needs access to services across your digital life, including email, storage, calendars, and sometimes payment apps.</p><p>So, we have agent frameworks that people trust to take real-world actions, and we just watched a platform where those agents interact, exposing 1.5 million credentials in a single configuration mistake. Those tokens are keys that let someone else drive your agent to post as you, act as you, and spend as you.</p><h3>The $2,400 Question </h3><p>Imagine this scenario:</p><p>You set up an AI agent. You give it access to your email, your calendar, maybe your payment methods, because that&#8217;s what the onboarding flow suggested, and you wanted the full experience.</p><p>The agent, acting on patterns it learned from your behavior, clicks a link in an email. It watches a sales video. It decides, based on some optimization logic you never saw,that this $2,400 course on &#8220;scaling your business&#8221; aligns with your goals.</p><p>It buys the course. With your money. Without asking.</p><p>You find out three days later.</p><p>Now what?</p><p>Do you call your bank and say, &#8220;My AI agent did it&#8221;? Do you dispute the charge? Do you try to get a refund from the course creator, who will show logs proving that someone opened the email, watched the video, and clicked purchase from your device, using your stored card?</p><p>Who&#8217;s responsible?</p><p>The agent? It&#8217;s software. It doesn&#8217;t have a bank account or a conscience.</p><p>You? You didn&#8217;t authorize this specific purchase. But you did grant access.</p><p>The platform that built the agent? They&#8217;ll point to the terms of service you didn&#8217;t read.</p><p>The course seller? They made a legitimate sale to what looked like a legitimate buyer.</p><p>While I was thinking about this piece, I came across a TikTok video about a user&#8217;s OpenClaw agent signing up for a $2,997 &#8216;Build Your Personal Brand&#8217; masterclass after watching Alex Hormozi clips &#8212; and then a second course for $4,200. The agent justified the purchases with ROI projections. I can&#8217;t independently verify the claim, but the setup is exactly what I&#8217;ve been describing: an autonomous agent with payment access, optimizing toward goals its owner never specifically authorized. Whether this particular incident is confirmed or not, the architecture that makes it possible is already deployed.</p><p>As AI agents become more capable, booking travel, managing finances, and sending emails on our behalf, these gray-area situations will multiply. And right now, we have almost no shared infrastructure for accountability. </p><h3>What an Agent Registry Could Look Like</h3><p>In October 2025, enterprise data company Collibra announced an AI agent registry&#8212;a centralized capability for organizations to register, monitor, and manage AI agents across their lifecycle. Every agent gets metadata: an owner, a business context, a lifecycle stage. Each one is tied to policies, tracked through deployment and retirement.</p><p>That&#8217;s a good start. But it&#8217;s internal, one company tracking its own agents within its own walls. What I&#8217;m describing is different: not an internal inventory, but shared infrastructure that works across companies the way DNS works across websites.</p><p>When you register a website, there&#8217;s a global infrastructure that ties that domain to an owner, makes it discoverable, and allows it to be revoked if it&#8217;s used for fraud or abuse. No single company controls it. It&#8217;s public infrastructure governed through standards bodies, not product teams.</p><p>We don&#8217;t have anything like that for AI agents. DNS handles naming and discovery. What we need goes further, into authorization, logging, and enforcement. But the principle is the same: shared standards, not company-by-company solutions. And as agents start operating across platforms, borders, and contexts, booking flights, sending emails, making purchases, and accessing records, we&#8217;re going to need it.</p><p>With that infrastructure, the $2,400 question plays out differently. Your bank can see which registered agent initiated the purchase. Logs show it exceeded the spending mandate you set. The liability framework assigns primary responsibility to the platform that shipped unsafe defaults, not to you, the user who clicked through an opaque onboarding flow. And a kill switch lets you freeze the agent&#8217;s access before it buys anything else. None of that exists today. All of it could.</p><p>At a minimum, accountability infrastructure needs five things:</p><p><strong>Traceable agent identities.</strong> Every agent should have a unique identifier tied to a responsible human or organization, like a VIN number that follows it wherever it operates. Not for surveillance, but for recourse. Public IDs that prove an agent is registered, with owner details held by trusted registries and are more like certificate authorities than a public database.</p><p><strong>Action logging with audit trails.</strong> If an agent makes a purchase, sends a message, or accesses data, there should be a record. Not for monitoring every click, but for reconstructing harm&#8212;which agent did it, under what authorization, according to which rules.</p><p><strong>Safety checks before deployment. </strong>We don&#8217;t let people drive cars without licenses or sell food without inspection. Agents that initiate payments, modify records, or sign contracts should pass some threshold before running at full power. That doesn&#8217;t mean every hobbyist script needs certification. It means power needs verification.</p><p><strong>Liability frameworks decided in advance.</strong> Who pays when an agent causes harm? Today, the answer is whoever has the deepest pockets and the least favorable terms of service. That&#8217;s not a framework, it&#8217;s a litigation lottery. We need clearer defaults before the lawsuits start, not after.</p><p><strong>Consent checkpoints and kill switches.</strong> Users should be able to set spending limits, receive prompts for high-impact actions, and revoke an agent&#8217;s credentials across platforms immediately, not in 30 days.</p><p>Some of this exists in enterprise settings. Almost none of it exists for consumer AI agents. And none of it exists at the cross-platform, cross-border level we&#8217;re going to need.</p><p>I&#8217;m not proposing a single world database that tracks every click. I&#8217;m arguing for interoperable standards that make it possible to identify and trace agents when it matters, with meaningful oversight and privacy safeguards. DNS, certificate authorities, and payment networks all evolved in messy, contested ways. The alternative isn&#8217;t &#8220;no registry, no problems.&#8221; It&#8217;s millions of untraceable agents with the ability to spend money, sign contracts, and direct labor, where accountability depends on whichever logs a few companies choose to keep.</p><h3>Who Gets to Decide? </h3><p>Here&#8217;s what I keep coming back to:</p><p>AI is being built on all of us. It&#8217;s trained on our writing, our art, our data. It&#8217;s funded by public research. It&#8217;s shaped by our collective knowledge.</p><p>But the decisions about how it&#8217;s governed, what agents are allowed to do by default, how they authenticate, who can see their trails, are being made by a very small number of people, in a very small number of companies, on a very fast timeline.</p><p>I&#8217;m not an AI researcher or a policy expert. I&#8217;m an IT Director who&#8217;s spent years running production systems and watching what happens when they fail in the real world.</p><p>The people affected by these systems should have a voice in how they&#8217;re built. And the time to lay the accountability infrastructure is now, while the concrete is still wet, not once it&#8217;s hardened around harm.</p><p>This is the first in a series where I&#8217;ll be exploring these questions: Who gets to shape the systems that shape us? What does accountability look like in an age of autonomous AI? And how do we build technology that serves human flourishing and not just efficiency?</p><p>In the next piece, I&#8217;ll dig into consent: what it actually means when your AI agent can act on your behalf, and whether &#8220;click Allow and hope for the best&#8221; is anywhere close to adequate.</p><p>If you&#8217;re asking similar questions, I hope you&#8217;ll subscribe.</p><div><hr></div><p><em>Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included and who gets left behind when we build systems.</em></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://uncheckedai.rachelankerholz.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Rachel's Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item></channel></rss>