Authorized, Operated, Denied: The Approval That Wasn't
The insurance company approved the surgery. The surgeon performed it. The denial arrived after. The only new information in the file was proof the surgery happened.
On January 7, 2025, Dr. Elisabeth Potter was in the middle of a bilateral DIEP flap breast reconstruction — a complex surgery for a cancer patient — when a call came into the operating room.
The caller was a UnitedHealthcare representative. Her patient, already asleep on the table, had been pre-approved for the surgery. What the rep wanted to know was whether the patient’s overnight inpatient stay was justified.
Potter scrubbed out to take the call. The UHC representative on the line did not have access to the patient’s medical records.
UnitedHealthcare denied the overnight stay anyway.
When Potter posted about it on TikTok, the video got 5.5 million views. She then received a defamation threat letter from Clare Locke, the same firm that won Dominion's $787.5 million settlement from Fox News, demanding she take it down and apologize.
She didn’t. UnitedHealthcare then declined to add her new surgery center to its in-network list, a decision she says has put her $5 million in debt and forced her husband to cash out his 401(k).
Scrubbing in is the point of no return. The patient has committed. The anesthesia team has committed. The decision the insurance company is making in that moment is not about whether the surgery happens. It is about who pays for how long the patient recovers.
Potter’s call is not an outlier. It is the loud version of a pattern that plays out quietly every day in post-op recovery rooms and billing offices across the country.
A patient gets home from the hospital. Eight days post-op. The mail arrives.
Her claim has been denied pending additional clinical review. The letter asks for medical records her surgeon’s office already sent during prior authorization.
She calls billing. The coordinator has heard this before. The coordinator has a folder of these letters going back years.
The only new document in the file is the procedure note. The same insurer that approved the surgery now wants to re-review whether the surgery it approved was medically necessary — using clinical information it already had, adding only the evidence that its approval was acted on.
Three weeks becomes three months. The patient’s credit takes a hit. The surgeon’s office writes off a percentage. Somebody hit a quarterly number.
The industry has a name for this. It almost never reaches the patient.
It is called retrospective denial. KFF Health News documented the pattern in a case involving the Markley family, who incurred medical debt after Anthem Blue Cross and Blue Shield revoked preapproval for a battery of tests performed at the Mayo Clinic. When insurers pull the decision to pay after the service is completed, patients are legally on the hook for the bill.
Martha Gaines, who directs the Center for Patient Partnerships at the University of Wisconsin Law School, co-authored a JAMA piece on this in 2020. “How broken can you get?” she asked. “How much more laid bare can it be that our health care insurance system is not about health, nor caring, but just for profit?”
That was 2020. It has not improved.
I want to be specific about something, because the shape of this argument matters.
If you work inside a health plan running utilization management, you are not the villain of this piece. The incentives you operate inside are the villain. Claims volume is real. Fraud is real. Documentation errors are real. An insurer that never reviews claims is an insurer that gets exploited.
I know that. I want to trace the gap between a legitimate review function and the industrial pattern it has become.
Retrospective review was originally a fraud control. It asked: Did the provider bill for a service they didn’t perform? Does the documentation match the procedure? Was the patient eligible on the date of service? These are good questions.
What retrospective review has become, at scale, is a second medical necessity decision on claims the plan has already approved. That is a different function. It runs on the assumption that the authorization was provisional and that the real decision happens after the money is at stake.
Two algorithms are running. The harm is in the gap between them.
The first algorithm approves. Cigna’s PXDX system processed approximately 300,000 denials in two months. Medical reviewers allegedly spent an average of 1.2 seconds per case.
At 1.2 seconds, there is no review happening. The algorithm makes the decision. The human clicks “approve” on what the algorithm has already decided, which is how the insurer meets the “human in the loop” requirement on paper while functionally automating the outcome. The human is not reviewing the algorithm. The human is ratifying it.
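To feel how short 1.2 seconds is, it helps to do the arithmetic. Here is the back-of-envelope version in Python, using only the reported figures; the five-minute comparison point is my assumption about what an independent chart review would take, not a documented benchmark.

```python
# Back-of-envelope math on the reported PXDX figures. The per-case time
# is an allegation from the reporting, not a confirmed internal metric.
denials = 300_000        # denials over roughly two months
seconds_per_case = 1.2   # alleged average physician time per case

total_hours = denials * seconds_per_case / 3600
print(f"Total physician review time: {total_hours:,.0f} hours")  # ~100 hours

# For comparison, assume a genuinely independent chart review takes
# even five minutes per case (my assumption, stated above):
independent_hours = denials * 5 * 60 / 3600
print(f"Time an actual review would take: {independent_hours:,.0f} hours")  # 25,000 hours
```

A hundred hours of human attention, spread across 300,000 denials. That is not a review function with an efficiency problem. That is a signature machine.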
The second algorithm denies. Post-service claims get screened against different rules, often by different vendors. Forty-six percent of healthcare organizations already use AI for revenue cycle management; another 49 percent plan to adopt it within a year.
Cotiviti partners with more than 100 health plans on payment accuracy, explicitly marketing retrospective review and AI-enabled clinical chart validation. Optum, the UnitedHealth subsidiary that acquired Change Healthcare, runs revenue cycle tools used across the industry. Zelis, MultiPlan, and EquiClaim work the same territory. One Cotiviti case study claims a Blue Plan achieved “triple its original projected findings” after adopting their AI-powered clinical review.
The AI making the denial is often not built by the insurer named on the letterhead. It is licensed from a vendor whose product, marketed in almost exactly those terms, is more denials that hold up on appeal.
Now, the training data question, which nobody is asking loudly enough.
The AI systems running authorization and retrospective denial are trained on historical claims data. Which means they are trained on prior decisions. Which means they are trained on the outputs of human reviewers who were themselves operating under the same cost-control incentives.
The model is not predicting medical necessity. The model is predicting what a cost-pressured reviewer would have denied.
That is a critical distinction. A model trained on historical denials learns to reproduce historical denials. If the training data contains systematic bias against expensive procedures, older patients, or complex cases, the model does too. The bias gets encoded, then laundered through a layer of algorithmic objectivity, then sold back to the same industry that generated it.
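None of these vendors publish their training pipelines, so let me sketch the generic version of the problem. Every name in this snippet is invented; what is not invented is the structural issue flagged in the comments.

```python
# A generic sketch of the training-data problem, not any vendor's actual
# pipeline. The file, columns, and features here are all hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

claims = pd.read_csv("historical_claims.csv")  # hypothetical claims export

features = claims[["procedure_cost", "patient_age", "diagnosis_complexity"]]

# This is the whole problem in one line. The only label that exists in
# claims data is what a past reviewer decided, not whether the care was
# medically necessary. Those are different targets.
labels = claims["reviewer_denied"]

model = GradientBoostingClassifier().fit(features, labels)

# The model now scores new claims by their resemblance to past denials.
# If past reviewers systematically denied expensive or complex cases,
# the model reproduces that pattern and reports it as a probability.
```

There is no column in any claims database called "medically_necessary." There is only a record of what someone, under some incentive, decided.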
This is the pattern Cigna’s PXDX, UnitedHealthcare’s nH Predict, and EviCore’s prior-auth tools share. They are not performing a medical review. They are automating the inherited judgment of reviewers whose incentives were never aligned with the patient in the first place.
And now the detail that broke me.
According to reporting on UnitedHealth’s internal practices, the company explored using AI to predict which denials were likely to be appealed, and which of those appeals were likely to be overturned.
Read that sentence twice.
That is not an algorithm predicting medical necessity. That is an algorithm predicting who will fight back. And denying accordingly.
The logic is simple. If the model predicts an appeal is unlikely, deny. If the model predicts an appeal would likely succeed, deny anyway if the patient is unlikely to file one. A KFF analysis found that in 2021, only 0.2 percent of denied claims were ever appealed.
0.2 percent. The model does not need to be right. It needs to be right often enough to withstand the 0.2 percent of cases where someone fights.
nH Predict, the algorithm at the center of the UnitedHealth class action, has an alleged 90 percent reversal rate on appeal. Ninety percent. A coin flip would do better. The reason it gets deployed anyway is that nine out of ten denied patients never appeal.
A 90 percent error rate is only broken if the errors cost the company something. For 99.8 percent of patients, they don’t.
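Run those numbers together and the business logic becomes unavoidable. A minimal expected-cost sketch, using the cited appeal and overturn rates; the claim amount and the appeal-handling cost are stand-ins I made up for illustration.

```python
# Expected cost of paying vs. denying a single claim, under the cited
# rates. Dollar figures are illustrative assumptions, not reported data.
claim_amount = 14_000       # hypothetical; echoes Premier's high-cost average
appeal_rate = 0.002         # KFF: share of denied claims ever appealed
overturn_rate = 0.90        # alleged nH Predict reversal rate on appeal
appeal_handling_cost = 500  # invented administrative cost per appeal

cost_if_paid = claim_amount

# Denying only costs the insurer when a patient both appeals and wins,
# plus the overhead of processing the rare appeal.
cost_if_denied = appeal_rate * (overturn_rate * claim_amount + appeal_handling_cost)

print(f"Pay the claim:  ${cost_if_paid:,.2f}")    # $14,000.00
print(f"Deny the claim: ${cost_if_denied:,.2f}")  # about $26.20
```

Under these assumptions, denying a $14,000 claim costs the insurer roughly $26 in expectation. The model does not need to be medically right. It needs the appeal rate to stay low.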
The speed asymmetry is the whole game.
Denial runs at algorithmic speed. Appeal runs at human speed. The insurer’s system flags a claim in milliseconds. The appeal takes the patient weeks of phone calls, records requests, and letters. The provider’s billing team appeals in aggregate because they don’t have the labor to fight every denial individually.
Between an algorithm that denies in milliseconds and a human who appeals in months, the house always wins. Not because the algorithm is right. Because the patient gave up, or died, or paid.
I manage technology infrastructure for a global academic community. When I think about what would happen if my systems operated the way insurance claims review operates, I do not have to guess. It would be a disaster.
Imagine a system that approved a user’s access to a resource, let them use it, and three weeks later revoked the approval retroactively and charged them for the time they had already spent. In my world, that is not a utilization management program. That is a breach of contract, an incident report, and a board conversation.
When I look at the retrospective denial pattern, I see the same infrastructure failure. A commitment was made. The commitment was relied on. The commitment was revoked after reliance had created harm.
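If I had to write down the invariant my own systems enforce, it would look something like this. Purely illustrative, and no insurer's system works this way, which is the point.

```python
# A minimal sketch of the commitment invariant, not a real system.
from dataclasses import dataclass

class RetroactiveRevocationError(Exception):
    """Revoking a commitment after it has been relied on is a failure state."""

@dataclass
class Authorization:
    service: str
    relied_on: bool = False  # set once the service is actually delivered

    def deliver(self):
        self.relied_on = True

    def revoke(self):
        if self.relied_on:
            # In infrastructure terms, this is not a review step. It is a bug.
            raise RetroactiveRevocationError(
                f"Cannot revoke authorization for {self.service!r} after reliance."
            )

auth = Authorization("overnight inpatient stay")
auth.deliver()
auth.revoke()  # raises: the commitment is binding once it has been acted on
```

In my systems, the exception fires and someone gets paged. In claims review, the revocation goes through and someone gets a bill.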
A 2024 Premier survey found that an average of 3.2 percent of denied claims had already been pre-approved through prior authorization. More than 54 percent of denied claims were ultimately paid after appeal. Denials skewed toward high-cost claims, with average charges of $14,000 or more.
Three details from that survey matter. First, 54 percent of denied claims eventually get paid, which tells you the denial was wrong. Second, the denials concentrate on expensive claims, which tells you the optimization target. Third, the denials concentrate on procedures that were already pre-approved, which tells you what authorization is actually worth.
A Senate Permanent Subcommittee on Investigations report criticized UnitedHealthcare, Humana, and CVS for using AI automation to deny Medicare Advantage post-acute care. The report raised concerns about substituting medical necessity with financial calculations.
That investigation focused on front-end denials. The back-end version — retrospective denial, running through licensed vendor AI — has not received the same scrutiny.
If a bank approved your mortgage, let you close on the house, and three weeks later sent a letter saying they had re-reviewed your application and actually you don’t qualify — we would not call that a process. We would call it fraud.
In healthcare, we call it utilization management. And we let AI run it.
What I Don’t Have Answers To
I don’t know how to distinguish, at policy scale, legitimate retrospective review from the pattern I am describing. Fraud happens. Upcoding happens. Documentation errors happen.
I also don’t know who audits the second algorithm. Cotiviti, Optum, Zelis, MultiPlan, EquiClaim — these vendors operate across dozens of insurers, which means one model’s bias affects millions of patients across plans that appear to compete with each other but are all running the same denial engine underneath.
When a denial letter cites “additional clinical review,” I cannot tell you which vendor’s model flagged the claim or what criteria it applied. Neither can the patient. Neither, in many cases, can the appeals coordinator at the insurance company.
And I don’t have a clean answer for Dr. Potter. She did the right thing. She scrubbed out to take the call because she knew if she didn’t, the stay would be denied. She is now $5 million in debt and facing defamation threats for saying so publicly. I don’t know how to build an incentive structure that doesn’t punish that decision.
If you have a denial letter for a service that was previously authorized, the appeal rules are the same ones I described in Vol 11. You have the right to appeal, the right to external review, and the right to see the specific clinical criteria the insurer applied.
Overturned will generate an appeal letter from your denial documents. It’s free, no login, no storage of your records.
The tool lives at rachelankerholz.com/tools/overturned. Retrospective denials are explicitly in scope.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind, when we build systems.


