
The Ghostwriter in the Lab Coat

  • Writer: Rebecca Chandler
  • Jan 12
  • 3 min read

I’ve been dealing with a shoulder injury for a while now. Did everything by the book: physical therapy, anti-inflammatories, a year of injections. I jumped through every medical hoop, so when my specialist said it was time to apply for surgical authorization, we were both certain it would be approved.


Two weeks later, our enthusiasm was quashed by yet another denial of treatment. A physician signed it.


The reason? I “never tried injections.” I’m looking at a year of injection records in my own file, and a licensed doctor signed a denial claiming they don’t exist.

 

The Rubber Stamp

It was clear that human eyes never saw the supporting documents sent with the RFA (request for authorization). An algorithm ghostwrote a medical decision, and a licensed professional functioned as its rubber stamp.


This is the new architecture of denial. Not a claims adjuster reviewing your file or a doctor weighing your treatment history. An algorithm scanning for keywords, generating a reason, and routing it to a physician who signs 500 of these a day. The signature is decoration. The decision was made before any human touched it.


When a doctor uses AI to ghostwrite medical denials, and that AI hallucinates a reason that the patient’s own records disprove, that’s not a glitch. That’s negligence, the equivalent of a surgeon operating while impaired. In an underwriting context, a hallucination should be a breach of the standard of care. Under the current system, it isn’t.


I’ve been gaslit by bureaucracies before. Most of us have. The wrong code entered. The fax that “never arrived.” The prior authorization that expired while you were waiting for a callback. But this was different. This wasn’t a human being sloppy or overworked. This was a system designed to produce denials at scale.


Maybe we need to start issuing medical licenses to AI and certificates of participation to the “doctors” who approve the drivel.

 

Pattern Mining Disguised as Efficiency

OpenAI launched a healthcare platform this week, marketing it as an “efficiency tool.” The press release talked about streamlining records, reducing administrative burden, helping doctors spend more time with patients.


The long game is pattern mining: turning our medical histories into predictive shadows, pre-existing conditions that haven’t happened yet. If you searched for back pain remedies three years ago, that’s a pattern. If you filled a prescription for anxiety medication in 2019, that’s a pattern. If you skipped a follow-up appointment because you couldn’t get time off work, that’s a pattern too.


Insurance companies have been building these profiles for decades. But they only saw the medical silo. Now companies like Amazon and Google are racing toward a fuller picture—pharmacy records, shopping habits, search histories, voice assistants listening in kitchens. When those worlds converge with medical data, your health insurance won’t just reflect what happened to you. It will predict what’s going to happen. And price accordingly.


If insurers use this to mine patterns and automate denials, I’m going to use AI to audit every word they produce.

 

The Complexity Shield is Cracking

For years, patients have been told the system is too complex to challenge. Too many codes. Too many regulations. Too many layers between you and the decision. Institutions counted on burying you in paper to win. Every denial was an “individual medical judgment.” How do you fight that?


But when the same faulty algorithm ghostwrites the denials, the “uniqueness” disappears, and we can scrutinize denials at the same scale insurers produce them.

The power dynamic shifts.


When we begin to compare notes across thousands of claimants, we can prove an algorithm is systematically hallucinating to deny care. Class-action suits backed by audited data rather than anecdotes carry a new level of accountability.

The defense used to be: “Every case is different. Every denial reflects individual medical judgment.” That defense falls apart when the denials use the same language, cite the same non-existent gaps in treatment, and get rubber-stamped by the same rotating cast of physicians who never opened the file.
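
Here is a minimal sketch, in Python, of what that kind of note-comparing could look like. The denial letters below are hypothetical stand-ins for text real claimants would contribute, and TF-IDF with cosine similarity (via scikit-learn) is just one convenient way to measure textual overlap; a serious audit would be far more careful. The point is simply that boilerplate is detectable.

# A sketch of flagging denial letters that share near-identical language.
# The letters are hypothetical; the threshold is an illustrative choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

denials = [
    "Denied: the claimant never tried conservative treatment such as injections.",
    "Denied: claimant has never tried conservative treatment such as injections.",
    "Denied: imaging does not support the requested procedure.",
]

# Vectorize the letters and compute pairwise similarity scores.
vectors = TfidfVectorizer().fit_transform(denials)
scores = cosine_similarity(vectors)

# Report pairs above a similarity threshold: likely boilerplate,
# not "individual medical judgment."
THRESHOLD = 0.8
for i in range(len(denials)):
    for j in range(i + 1, len(denials)):
        if scores[i, j] >= THRESHOLD:
            print(f"Letters {i} and {j} are {scores[i, j]:.0%} similar")

Two letters that score 90 percent similar are hard to defend as independent medical judgments.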

 

The Arms Race

As insurers use AI to automate denials, we will use AI to audit them. Both sides now have the same tools. I don’t want an arms race. I want my shoulder fixed. But if the only way to get care is to out-document the algorithm that denied me, I’ll do it. And I’ll share what I learn.


California is starting to push back. SB 1120 mandates that a human doctor must oversee any AI-driven medical denial. AB 489 prohibits AI from “impersonating” a professional. These are floors, not ceilings. But they create new grounds for accountability.


Somewhere between regulation and self-advocacy, we can begin to get the care we need without the hurdles.

 

What Happened

I caught the hallucination. I flagged the rubber stamp. I documented everything—the injection records, the dates, the provider notes that directly contradicted the denial.

Three months later, my insurer sent me a 3-page approval. The education continues.


 

 
 
