Four Algorithms, One Patient: The Cascade Nobody Maps
A single Medicare claim now passes through up to four AI systems before a human ever opens the file. Each layer is “compliant” on its own. None of them is accountable for the patient.
Dr. Elisabeth Potter is roughly $5 million in debt.
Her husband cashed out his 401(k) to keep her surgery center running. UnitedHealthcare, the second-largest insurer in Texas, declined to add the center to its in-network list. Potter remains in the network as a surgeon. Her facility is not. She cannot operate there on UHC patients without leaving them with the full bill.
This is the same Dr. Potter who scrubbed out of a breast cancer surgery in January 2025 to take a phone call from UnitedHealthcare. The surgery had been pre-approved. The caller wanted to know if the patient asleep on the table needed an overnight stay. UHC denied the overnight anyway. According to Potter, her patient went home eight hours after a bilateral DIEP flap reconstruction. UnitedHealthcare disputes her account, attributing the call to a clerical error.
I wrote about that call in Vol. 12. What I want to write about now is what came after.
UHC sent Potter a defamation threat letter from Clare Locke, the firm that served as lead counsel for Dominion in its $787.5 million settlement with Fox News, demanding she take her TikTok down.
She didn’t. UHC then declined the network contract for her surgery center.
Same insurance company. Four different decisions. Four different parts of its operation. One surgeon. Same outcome.
Authorize. Deny. Threaten. Block.
That is not a process. That is an architecture.
A single Medicare Advantage claim can pass through four AI systems before a human ever opens the file.
The hospital codes the encounter. The payer evaluates the claim. The pharmacy benefit manager authorizes the prescription. And as of January 1, 2026, the Centers for Medicare & Medicaid Services (CMS) reviews it.
Each layer is built by a different vendor. Each is trained on different data. Each optimizes for a different metric. Each defines “human review” differently.
None of them coordinates with the others.
None of them is accountable for the patient.
I manage technology infrastructure for a global academic community. The first thing I ask of any new vendor is what other systems their tool touches and where accountability lives when something goes wrong.
In healthcare claims right now, the honest answer is that nobody, not the insurer, not the hospital, not the patient, not the IT Director, has mapped the cascade end-to-end.
Let me try.
Layer One: The hospital’s AI codes the visit
It is not predicting clinical reality. It is predicting which diagnoses pay.
Blue Cross Blue Shield published an analysis in March 2026. One facility’s billing complexity rating jumped 6.7 percent after announcing it would adopt AI for medical coding. Other facilities in the same state moved 0.9 percent over the comparable period. Across hospitals identified as likely AI adopters, complex-coded admissions rose by an average of 13.1 percentage points.
Blue Cross attributes $663 million in additional inpatient spending to AI-driven coding over a three-year period.
The most damning specific finding: AI coding tools were classifying new mothers as having severe acute posthemorrhagic anemia in cases where no transfusion ever occurred. That single diagnostic pattern added $22 million to maternity admission costs in one year.
The model is not lying. The model is doing what it was built to do, which is to find the highest-paying compliant code for the documented encounter.
Whether the patient actually had the condition is a separate question and not one the model is asked to answer.
Layer Two: The payer’s AI evaluates the claim
It is not predicting medical necessity. It is predicting what a cost-pressured reviewer would have denied.
This is the Vol. 11 argument extended. Cigna’s PXDX, UnitedHealthcare’s nH Predict, EviCore’s prior auth tools. All trained on historical claims data.
Trained on historical claims data means trained on prior decisions. Which means trained on the outputs of human reviewers operating under the same cost-control incentives the algorithm is now automating.
The model is not learning what good care looks like. It is learning what previous reviewers under quota pressure decided to deny.
In March 2026, a federal court ordered UnitedHealth to produce documents across six of seven discovery categories in the nH Predict class action. The discovery includes performance evaluations and compensation records for medical directors. The identities of the company’s internal AI review board. Documents back to January 2017, pre-dating the deployment of nH Predict.
The court rejected UnitedHealth’s argument that pre-deployment records were irrelevant. UnitedHealth disputes the plaintiffs’ characterization of the model.
What is not in dispute is that hospitals know exactly what is happening on the other side of the cascade.
Andrew Asher, Centene’s chief financial officer, said the quiet part out loud at the Deutsche Bank Healthcare Summit in September 2025. Hospitals had gotten better organized around AI for coding than payers had. “We’re going to catch up to that,” he said.
That is the cascade in one sentence, from a CFO. A documented arms race, named on both sides, with the patient sitting in the middle of a transaction nobody designed.
Layer Three: The pharmacy benefit manager’s AI authorizes the prescription
The voluntary commitment to reform prior auth does not cover this layer.
Optum Rx, UnitedHealth’s pharmacy benefit manager (PBM), runs a tool called PreCheck. According to UnitedHealth, it cut prescription approval time from over eight hours to a median of 29 seconds.
UnitedHealth projects nearly $1 billion in AI savings in 2026.
In June 2025, America’s Health Insurance Plans (AHIP) and the Blue Cross Blue Shield Association announced a voluntary commitment to streamline prior authorization. Fifty insurers covering roughly 257 million Americans signed on. By April 2026, the industry reported eliminating 11 percent of prior authorization requirements, or about 6.5 million fewer requests per year.
That is real. I want to acknowledge it.
I also want to be honest about what the pledge does not do.
The pledge has no enforcement. It does not specify what fraction of remaining authorizations are AI-decided. It does not require disclosure of which model decided, or what its overturn rate is.
And it does not cover pharmacy benefits, which is exactly where the PBM layer of the cascade sits.
CMS required payers to publish aggregated 2025 prior authorization metrics by the end of March 2026. KFF analyzed the data when it landed and found that it did not actually explain what drove approvals or denials.
Disclosure that does not let you trace a decision is not informing you. It is performing.
Layer Four: The regulator joined the cascade
On January 1, 2026, traditional Medicare became an experiment too.
CMS launched the Wasteful and Inappropriate Service Reduction model, called WISeR, in six states: Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington.
Six private technology vendors process prior authorization requests for fifteen Medicare Part B services: Cohere Health, Genzeon, Humata Health, Innovaccer, Virtix Health, and Zyter.
These vendors are not paid a flat fee.
They earn a percentage of the savings their denials generate.
Read that sentence again.
The vendor in Texas, Cohere Health, says its technology is “never used to deny care, but rather to automate approvals.” Healthcare Uncovered reported in April 2026 that 62 percent of WISeR prior authorizations in Texas were being approved on the first try.
If 62 percent were approved on the first try, 38 percent were not. Both of those things cannot be true at the same time.
The American Hospital Association asked CMS for a six-month delay before launch. The House Appropriations Committee approved an amendment to block WISeR funding. It did not survive final budget negotiations. Don Berwick, the former CMS administrator, called WISeR an import of “the bureaucratic, wasteful, and risky processes of permission-seeking” that have plagued Medicare Advantage for years.
CMS launched it on schedule.
For most of the last decade, the compliance posture for healthcare AI in the United States assumed traditional Medicare was the floor, and Medicare Advantage was the experiment.
WISeR inverts that. For services covered in those six states, traditional Medicare is now the experiment too.
When a denial reaches the patient, it cites one criterion and one policy clause.
It does not say which algorithm flagged the claim. It does not say which training data the algorithm used. It does not say whether a human ever opened the file. It does not say which vendor licensed the model to the insurer. It does not say what the appeal-overturn rate is for this category of denial.
The patient is supposed to appeal. The 0.2 percent who do appeal succeed at very high rates.
The other 99.8 percent never see the layer they were fighting.
This is not a transparency problem at the level of any single AI system. It is a transparency problem at the level of the architecture.
I want to be specific about something, because the shape of this argument matters.
If you work inside a payer, a hospital revenue cycle, or a PBM, you are not the villain of this piece. The incentives you operate inside are the villain.
Claims volume is real. Coding ambiguity is real. Fraud is real. An insurer that never reviews claims gets exploited. A hospital that does not optimize coding leaves money on the table that its competitors are taking.
I know that. What I want to trace is the gap between any single legitimate function and the industrial pattern they have become when stacked on top of one another.
The hospital’s coder is doing her job. The payer’s reviewer is doing his. The PBM’s algorithm is doing what it was built to do. The CMS pilot vendor is fulfilling a contract.
Every layer is operating within its own logic, against its own metric, defended by its own vendor’s compliance documentation. The aggregate effect is a denial cascade that no individual layer is responsible for.
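For readers who build systems for a living, the failure mode is recognizable as a pipeline with no owner. Here is a minimal sketch of that shape. Every function name, rule, and message below is a hypothetical illustration of the pattern, not any vendor's actual logic:

```python
# Hypothetical sketch: four independent layers, each "compliant"
# against its own metric, none accountable for the aggregate outcome.
from dataclasses import dataclass, field

@dataclass
class Claim:
    encounter: str
    decisions: list = field(default_factory=list)

def hospital_coder(claim: Claim) -> Claim:
    # Layer 1: optimizes for reimbursement, not clinical reality.
    claim.decisions.append(("hospital_ai", "coded at highest-paying compliant code"))
    return claim

def payer_review(claim: Claim) -> Claim:
    # Layer 2: predicts what past cost-pressured reviewers denied.
    claim.decisions.append(("payer_ai", "denied: matches historical denial pattern"))
    return claim

def pbm_auth(claim: Claim) -> Claim:
    # Layer 3: authorizes the prescription on its own clock and criteria.
    claim.decisions.append(("pbm_ai", "approved in seconds"))
    return claim

def cms_pilot_vendor(claim: Claim) -> Claim:
    # Layer 4: compensated as a share of the savings its denials generate.
    claim.decisions.append(("cms_vendor_ai", "flagged for review"))
    return claim

# Each call succeeds. Each layer logs a defensible decision.
claim = cms_pilot_vendor(pbm_auth(payer_review(hospital_coder(Claim("encounter")))))

accountable_layer = None  # nothing in the pipeline ever sets this
for layer, decision in claim.decisions:
    print(f"{layer}: {decision}")
print("accountable for the patient:", accountable_layer)
```

The point of the sketch is the last variable: every layer writes to the log, and nothing in the pipeline is responsible for reading it end to end.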
In Vol. 11, I argued that an AI with a 90 percent error rate scales only when the errors are profitable.
The cascade extends that.
An AI cascade scales only when no single layer has to answer for the whole.
WISeR just adopted that architecture as federal policy.
What your AI vendor cannot answer
If you are procuring AI for a payer, a hospital, or a PBM in 2026, three questions will determine whether you have a vendor problem or a documentation problem that becomes a liability problem.
What is the documented overturn rate of this model on appeal, by category of decision?
What disclosure language do we provide to patients when this model contributed to a denial?
When the discovery order arrives, and discovery orders are arriving, what records can we produce about how this model was used, by whom, and against which performance metrics?
A legal analysis of the UnitedHealth discovery order from Stephenson Acquisto & Colman put it bluntly. Provider organizations that integrate AI into clinical workflows, prior authorization support, utilization management, or care coordination face meaningful legal risk if that AI functions as a decision-maker rather than a decision-support tool.
That distinction is doing a great deal of work in 2026 contracts. Most of the contracts I have read do not draw it cleanly.
If your vendor cannot answer those three questions, you do not have a vendor problem.
You have a documentation problem.
And the discovery order showing up in your industry just promoted that documentation problem to a liability problem.
What I do not know
I do not have a clean position on whether voluntary commitments like the AHIP pledge can produce the disclosure the cascade requires. The reductions are real. The transparency is not.
I do not know who audits across vendors. The same handful of payment integrity firms (Cotiviti, Optum, Zelis, MultiPlan, EquiClaim) operate across competing insurers. One model’s bias affects millions of patients across plans that appear to compete with each other but are running the same engine underneath. There is no public registry of which insurer uses which vendor for which decision type.
And I do not know how the Potter case ends. The retaliation arc, the network exclusion, the bankruptcy pressure, the defamation threats. None of it is settled. UnitedHealthcare disputes her account. The litigation is ongoing.
What I can say is that the pattern of single physicians being financially squeezed for documenting how the cascade affected their patients is not unique to her.
It is unusual only because she said it out loud.
If a bank approved your mortgage, let you close on the house, and three weeks later, a different department of the same bank sent you a letter saying it had re-reviewed the documents and actually you don’t qualify, and a third department flagged your account, and a fourth froze your line of credit, we would not call that a process.
We would call it fraud.
In healthcare, we call it utilization management.
And as of this year, four AI systems are running it in parallel.
If you have received a denial after a service was already authorized, or your prescription was denied at the pharmacy benefit manager, or your prior authorization was denied by a WISeR vendor in one of the six pilot states, the appeal rules are the same ones I described in Vol. 11. You have the right to appeal, the right to external review, and the right to see the specific clinical criteria the insurer applied.
Overturned will generate an appeal letter from your denial documents. It’s free, no login, no storage of your records.
The tool lives at rachelankerholz.com/tools/overturned. Cascade denials are explicitly in scope.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.