The AI Wrote That You Consented. Your Chart Says So.
Vendor contracts put consent obligations on providers, not vendors. A class action says patients were never informed. Their charts say they were. The AI wrote that part.
Your health system probably signed an ambient AI scribe contract in the last 18 months. Maybe it was Abridge. Maybe Nuance DAX Copilot. Maybe Ambience Healthcare. The deal came through clinical operations, or IT, or both. Your legal team may or may not have seen the final Business Associate Agreement (BAA). Your patients almost certainly saw nothing at all.
That gap is now a class action lawsuit.
In July 2025, Jose Saucedo went to Sharp Rees-Stealy Medical Group for a routine physical. He spoke with his doctor. He left. A few weeks later, he logged into his patient portal to review his visit notes. His medical record stated that he had been “advised” that the visit was being audio recorded. It said he had “consented.” Neither thing had happened. The recording had been made, transmitted to a third-party vendor’s cloud, and processed by an AI tool called Abridge. The consent documentation in his chart was, according to the lawsuit he filed in November 2025, false.
The proposed class action covers anyone who had a medical visit with Sharp on or after April 1, 2025, the date Sharp announced its Abridge partnership. That is potentially over 100,000 patients.
The AI didn’t just record him without his knowledge. It wrote a consent record proving he had agreed to something he never agreed to. That is not a failure at the edges of how this technology works. That is the system performing exactly as designed, with no one watching the output.
Before we get to the law, here’s the operational problem your board needs to understand.
Once the recording is made, your health system is in one of two bad positions, and your vendor contract put you there.
Delete the audio. Many vendors do this within 30 days. Privacy problem addressed, at least technically. But now there is no ground truth. If the AI hallucinated a dosage, a diagnosis, or a symptom the patient never mentioned, there is no recording left to check it against. The legal exposure here is concrete: a physician tells the patient they’re prescribing 0.5mg. The AI transcribes 5mg. The physician doesn’t catch it before the prescription is filled. Months later, a malpractice claim arrives. The audio is gone. The AI-generated note is the only record. The note is what the opposing attorney has.
Retain the audio. You get verification capability and an evidentiary trail. But now every raw recording, every draft transcript, every backend AI artifact is potentially discoverable. Defense attorneys are already warning that clinicians may find themselves defending not just the signed chart, but a parallel archive they did not author, edit, or control.
Delete it, no receipts. Keep it, the receipts may be used against you.
Neither option is clean. That is the architecture your vendor sold you, and your contract formalized it.
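To make the verification problem concrete, here is a minimal sketch of the kind of cross-check that is only possible while the source audio, or a verbatim transcript of it, still exists. Everything in it is hypothetical: the note text, the pattern, the function names. It illustrates the capability you lose at deletion, not any vendor's actual safeguard.

```python
import re

# Toy illustration: flag milligram doses that appear in an AI-generated
# note but never in a verbatim transcript of the retained audio. Once the
# audio is deleted, there is no ground truth to run this check against.

DOSE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*mg\b", re.IGNORECASE)

def extract_doses(text: str) -> set[float]:
    """Every milligram dose mentioned in a block of text."""
    return {float(m) for m in DOSE_PATTERN.findall(text)}

def flag_discrepancies(transcript: str, ai_note: str) -> list[str]:
    """Doses in the note with no support in the source transcript."""
    spoken = extract_doses(transcript)
    return [
        f"{dose} mg is in the note but was never said aloud"
        for dose in sorted(extract_doses(ai_note) - spoken)
    ]

# Hypothetical encounter: the physician said 0.5 mg; the note says 5 mg.
transcript = "Let's start you on 0.5 mg of lorazepam at bedtime."
ai_note = "Plan: initiate lorazepam 5 mg at bedtime."

print(flag_discrepancies(transcript, ai_note))
# ['5.0 mg is in the note but was never said aloud']
```

The check itself is trivial. The point is the input it requires: once the audio is gone, there is nothing left to put on the ground-truth side of the comparison.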
Here’s what I keep coming back to: the C-suite conversation about ambient AI in healthcare is almost entirely about efficiency, and almost entirely missing the question that will matter most when the next lawsuit lands.
The efficiency case is real. The Permanente Medical Group, Kaiser's physician organization in Northern California, reported that ambient scribes saved its physicians the equivalent of 1,794 working days across more than 2.5 million patient encounters. Clinician burnout is a genuine operational crisis, and the documentation burden is a meaningful driver of it. Every health system IT leader I've spoken with understands that argument.
What they’re less clear on is who owns the liability when the tool goes wrong.
The Larridin 2026 State of Enterprise AI Report, which surveyed more than 350 senior leaders at organizations with 1,000 or more employees, found that 92% of C-suite executives expressed full confidence in AI impact, while 58% said they couldn’t identify who in their organization owned AI performance accountability, and 62% lacked a comprehensive inventory of AI applications currently in use. The confidence and the visibility are not moving together.
In the same month that the survey was published, Harvard Business Review ran a piece about a Fortune 500 insurance company whose CEO convened the C-suite to ask a single question: Who owns our AI initiatives? The CIO said it was obviously her domain. The COO said an agentic workforce is operations by definition. The CFO pointed out that an AI system was already making underwriting decisions with direct P&L impact. The Chief Risk Officer noted that autonomous decision-making is a major risk exposure. No one had a clean answer.
Your organization has probably had a version of that meeting. The question of who owns the ambient scribe decision, and who owns the liability when that decision goes wrong, is not settled in most health systems. In many, it is not even fully articulated.
I manage technology infrastructure for a global academic community. When I evaluate a vendor relationship, my first questions are always about data: where does it go, who can see it, how long does it stay, and what does the contract say when I want to leave. These are basic infrastructure questions. They are also, as I’ve been arguing across this series, accountability questions.
Here’s what the Abridge contract structure actually looked like in practice: Sharp deployed the tool across its clinical network in April 2025. Per the vendor agreement, Abridge retained broad rights to access recordings and transcripts. The compliance obligations, including consent workflows, were placed on Sharp. When the lawsuit arrived, both names appeared in the complaint. The legal liability sat with the health system.
A February 2026 analysis in Medical Economics put it directly: many health systems are signing AI vendor agreements without clear answers to who owns the patient data, what happens if an AI output contributes to a clinical error, and what exiting the relationship actually looks like. The vendor’s broad disclaimers are standard language. Those disclaimers do not change the fact that under the current law in most states, the institution remains responsible for whatever makes it into patient care.
That is governance arbitrage. The vendor captures the revenue. The provider carries the risk. The contract made it so.
Sharp is not a one-off. It is one instance of a blueprint.
The consent architecture underneath these tools does not work the way most health system leaders assume it does.
In California, recording a conversation requires consent from all parties. Under the California Invasion of Privacy Act, each violation carries $5,000 in statutory damages. Run that across the potentially more than 100,000 encounters the proposed class covers and the theoretical exposure passes $500 million, the kind of number that threatens an organization's financial position, not just its reputation. Sharp's case is still active.
But Sharp is California. The picture outside California is more complicated, and in some ways more alarming.
In July 2025, a class action was filed against Heartland Dental, the largest dental support organization in the United States, alleging that patient phone calls were recorded, transcribed, analyzed for sentiment, and used operationally without patients ever being informed. The calls ran through RingCentral’s AI platform. Patients calling to schedule an appointment had their conversation recorded, summarized, and scored for emotional tone. No exam room. No ambient scribe on a device. Just a phone call, quietly routed through a system that was listening, and a vendor capturing the data to optimize its own product.
In January 2026, a federal court dismissed the original wiretapping claims, ruling that AI transcription fell within an “ordinary course of business” exception to the Federal Wiretap Act. Because recording and analyzing calls is what the product does, the court found, it is not eavesdropping under federal law. It is a feature.
The case is continuing. The plaintiff filed an amended complaint in February 2026, arguing that RingCentral's AI tools are a separate, optional product from its phone service, not a core function, and that the company is using those patient calls to train its own models. That distinction, if it holds, could close the loophole. The outcome isn't settled. But the original ruling is already on the books, and other courts will cite it.
This is the same blueprint as Sharp. RingCentral provides the service, captures the operational value, and sits behind a “core service” defense when liability appears. Sharp and Heartland are not two isolated lawsuits. They are the same contract architecture playing out in different rooms. Exam room. Phone call. The vendor captures the capability. The institution carries the exposure.
In most of the country, the Heartland ruling is currently the answer to whether recording your patients without telling them violates federal law. The legal protection patients might reasonably assume exists does not, if the AI recording is the vendor’s core service.
This is the part I think health system leadership underestimates. The risk is not confined to the exam room or the telehealth visit. It extends into scheduling calls, intake workflows, prior authorization conversations, and any touchpoint where an AI tool is listening and neither the patient nor the contracting organization fully understands the terms under which that listening is happening.
In an earlier piece in this series on consent, I wrote about how the current model of digital agreement has become a legal fiction: you agree to something general, and the system then does something very specific, well outside the scope of what you thought you agreed to. The gap between those two things is where accountability sits, unexamined. That piece focused on software consent flows. The ambient scribe problem is the same architecture applied to your exam room.
A piece I published on observability made a related argument: without a verifiable record of what a system did and when, there is no accountability infrastructure, only assurances. The Sharp case is both problems at once. The AI generated a false consent record, which is a consent failure. It simultaneously created a documentation archive that the health system cannot fully audit or control, which is an observability failure. The two are not separate issues. They are the same gap.
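Since I keep invoking "accountability infrastructure," here is roughly what I mean at the smallest possible scale: a consent record that carries its own provenance. The fields and the hash-chaining below are my own illustrative assumptions, not any EHR or vendor's actual schema. The contrast to draw is with what Sharp's chart held: a free-floating sentence, generated by the very tool whose use it purported to authorize.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: the minimum a verifiable consent event might carry,
# versus a bare "patient consented" sentence in a chart. Chaining each
# entry to the previous one makes silent after-the-fact edits detectable.

def consent_event(prev_hash: str, patient_id: str, method: str, captured_by: str) -> dict:
    event = {
        "patient_id": patient_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "method": method,            # e.g. "signed form" or "verbal, witnessed"
        "captured_by": captured_by,  # a named human, not the tool that benefits
        "prev_hash": prev_hash,      # links this entry to the one before it
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

entry = consent_event("genesis", "patient-0001", "signed form", "front-desk staff")
print(entry["hash"][:16])  # a tamper-evident fingerprint of this consent event
```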
Your symptoms. Your medications. Your mental health history. The conversation you had with your doctor about your marriage because it was affecting your blood pressure. All of it, processed through a system your institution contracted for, under terms your patients never saw.
There is a clinical argument that runs the other direction, and it deserves honest treatment, because dismissing it would cost real patients real harm.
AI systems with longitudinal memory, systems that retain and reason across multiple visits, show meaningfully better diagnostic performance for slowly developing conditions, including early detection of neurodegenerative disease, than systems that treat each encounter in isolation. The pattern recognition that catches a slow-developing condition requires time and continuity. A system designed to delete everything after 30 days cannot see that the fatigue from October connects to the joint pain from March.
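Here is that tension as a toy sketch, assuming a hypothetical 30-day retention window and invented encounter dates:

```python
from datetime import date, timedelta

# Toy sketch: what a deletion-compliant system can still "see" at each
# visit. The dates, complaints, and 30-day window are all hypothetical.

RETENTION_DAYS = 30

encounters = [
    (date(2025, 3, 14), "joint pain, intermittent"),
    (date(2025, 10, 2), "persistent fatigue"),
    (date(2025, 11, 20), "follow-up: fatigue unresolved"),
]

def visible_history(today: date) -> list[str]:
    """Only the encounters inside the retention window survive."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [note for when, note in encounters if when >= cutoff]

# At the November visit, both earlier data points are already deleted.
# The pattern spans them; the system cannot see it.
print(visible_history(date(2025, 11, 20)))
# ['follow-up: fatigue unresolved']
```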
The system designed to protect your patients’ privacy is also the system least equipped to help them. I don’t have a clean answer for that tension. The field doesn’t either.
What I do know is that this tradeoff is being made right now, in procurement meetings, largely without patient input, and in many cases without the full involvement of legal and risk teams who would ask different questions than clinical and IT leaders ask. Patients are finding out the way Jose Saucedo found out: by reading their own records.
One more number worth sitting with: the ambient scribe market grew 2.4 times in 2025 alone, generating an estimated $600 million in revenue. Abridge, the vendor named in the Sharp lawsuit, is valued at $5.3 billion and is already deployed at more than 200 large health systems, including the VA, Johns Hopkins, and the University of Chicago. Industry projections put the market at nearly $3 billion annually by 2033.
If you are in healthcare leadership and you are not certain whether your organization uses one of these tools, the answer is probably yes. Which means the question is not whether to evaluate this risk. It is whether you are in the position Sharp was in before the lawsuit, or after it.
What I Don’t Have Answers To
The clinical utility argument is real. Ambient scribes are providing genuine value to overwhelmed physicians. The burnout crisis is not a talking point. If I were advising a health system’s IT leadership today, I would not tell them to stop evaluating these tools.
I also don’t know what meaningful consent looks like in emergency contexts. If a patient arrives unconscious, the consent framework breaks immediately.
I don’t have a clean read on where the federal preemption question lands. The Trump administration’s December 2025 executive order directed the DOJ to challenge onerous state AI laws and tasked agencies with developing preemptive standards to avoid a patchwork of fifty different state rules. If that effort succeeds, the California standard that creates Sharp’s exposure may be weakened nationally. If it doesn’t, health systems operating across state lines are navigating that patchwork now, with contracts that were often written before the legal landscape clarified.
And the Heartland case is still moving. If the amended complaint succeeds in arguing that RingCentral’s AI tools are a separate product from its phone service, the “ordinary course of business” loophole narrows. The legal ground is shifting faster than most procurement cycles.
What I’m more confident about: the default of deploying first and building consent infrastructure later is not a viable risk position. The Sharp case will not be the last. The health systems moving fastest without governance infrastructure are not just acquiring tools. They are acquiring liability at scale, signed into their own contracts.
The contracts are signed. The tools are running. The question is whether the governance has caught up.
If you're in healthcare IT or operations leadership, here are the four questions worth pulling your BAA out to answer today. Does your vendor agreement specify whether patient recordings or transcripts are used to train AI models? Who among your vendor's staff can access those recordings, and under what conditions? What does your consent workflow look like for a patient who arrives without prior notice? And what happens to the data, all of it, when you terminate the contract?
If you don’t have clear written answers to all four, you have a gap your legal team needs to see before the next procurement cycle. Not after the next lawsuit.
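If it helps to operationalize those four questions, here is one minimal way to track the answers per vendor. The structure and field names are mine, not any standard; the point is that an unanswered question should surface as a visible gap, not stay buried in a contract PDF.

```python
from dataclasses import dataclass, fields

# One minimal way to track the four BAA answers per vendor. Field names
# are illustrative. None means "we never asked"; False means "we asked
# and the contract has no answer." (Requires Python 3.10+ for bool | None.)

@dataclass
class VendorBAAReview:
    vendor: str
    training_use_specified: bool | None       # recordings used to train models?
    staff_access_defined: bool | None         # who at the vendor can listen, and when?
    consent_workflow_documented: bool | None  # walk-in, no-notice patients covered?
    termination_data_handling: bool | None    # what happens to the data at exit?

def open_gaps(review: VendorBAAReview) -> list[str]:
    """Anything not affirmatively answered is a gap for legal to see."""
    return [
        f.name for f in fields(review)
        if f.name != "vendor" and getattr(review, f.name) is not True
    ]

review = VendorBAAReview("Hypothetical Scribe Co.", True, None, False, None)
print(open_gaps(review))
# ['staff_access_defined', 'consent_workflow_documented', 'termination_data_handling']
```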
In the next piece, I’ll step back from sector-specific cases and look at something that runs underneath all of them: what it would actually mean to build AI systems where the data stays with the people it came from. Not as a privacy feature. As a market mechanism that changes who benefits from the value your patients’ information creates.
If you’re working through these questions in your own organization, I’d be glad to have you along.
Rachel Ankerholz is an IT Director and writer exploring the intersection of AI ethics, accessibility, and human-centered technology. She writes about who gets included, and who gets left behind.