The Pap Smear Test
- Rebecca Chandler
- Jan 12
- 4 min read

I used Claude to dissect 2,000 pages of “discovery” for my friend’s work comp claim. After a bit of review, Claude asked me, “Why are your friend’s pap smear results in the record for a neck injury?”
Great question, Claude.
Among the documentation for a chronic cervical spine injury were three years of gynecological records. Not a clerical error. A design failure.
In the American legal and healthcare systems, privacy has been reduced to a binary choice. Sign a HIPAA waiver to receive care or pursue a claim, and you aren’t granting access for a specific injury. You’re handing over the keys to your entire narrative. Opt out and you’re not complying with discovery. Ironic.
Why Insurers and Their Lawyers Dump It All
In my previous writing, I’ve discussed how “Flat is Cheap.” Flattening strips distinctiveness. Her neck injury disappears into 2,000 pages of everything else.
The “insurer” buries her, then uses the pile against her.
A gap in treatment becomes “she wasn’t really hurt.” A prescription becomes “pre-existing condition.” A pap smear somehow becomes evidence that a workplace injury was “non-industrial.” Before AI, the typical worker had no way to figure out what the insurers of the world were doing, much less push back.
But there’s a second layer that rarely gets discussed. Every record in that pile is also part of her pattern.
Pattern isn’t data. Data is the appointment you made, the prescription you filled, the diagnosis you received. Pattern is how and when you do things—when you seek care, how long you wait before following up, what you avoid, the rhythm of your decisions over time. A profile more predictive than any single record.
Insurance companies have been building medical patterns for decades without calling it that. Every claim, every treatment timeline, every gap in care: pattern data. But insurance sees only the health silo. It has never looked beyond its own domain.
Companies like Amazon already see more. Pharmacy, OTC medications, sundries, household goods, what you eat, and whatever Alexa can glean from your life. Google sees searches, location, timing, devices, and now gets to scrape from Apple users. Google just announced AI agents that will shop for you.
Neither has the full picture, yet. Amazon and Google are racing toward it from opposite directions—Amazon from health and household, Google from behavior. When those worlds converge, medical records won’t be a silo anymore. They’ll be one layer in a profile that spans your entire life—before and after the digital age.
The Law Nobody Enforces
We already have the legal framework to stop the dump. The HIPAA “Minimum Necessary” Rule (45 CFR 164.502(b)) explicitly states that covered entities must make reasonable efforts to limit protected health information to the minimum necessary to accomplish the intended purpose.
But “reasonable efforts” is a loophole you can drive a truck through. When manual redaction was the only option, it was considered “unreasonable” to expect a hospital to filter 2,000 pages for the claim of a $15-an-hour worker. So the rule became a dead letter. We traded privacy for efficiency.
AI Could Help Enforce Minimum Necessary
We talk a lot about AI as a surveillance threat—and it is. But AI is also the first tool in history capable of processing data at a granular level without the “expense” of human labor.
If I can use a Large Language Model on my living room floor to find duplicative documents in a 2,000-page discovery dump in under an hour, the excuse that “redaction is too hard” no longer holds.
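For the curious, here is a minimal sketch of the mechanical half of that pass, not my actual workflow: it assumes the dump has already been split into one plain-text file per page, and it only flags near-duplicate pages using Python’s standard library. The relevance judgments, the part that caught the pap smear, still belong to the model.

```python
# Minimal sketch: flag near-duplicate pages in an extracted discovery dump
# before sending anything to an LLM. Assumes one .txt file per page in a
# "discovery_pages" directory (an illustrative path, not a real tool).
import difflib
from pathlib import Path

def near_duplicates(pages_dir: str, threshold: float = 0.9):
    """Yield page pairs whose rough text similarity exceeds `threshold`."""
    pages = sorted(Path(pages_dir).glob("*.txt"))
    texts = [p.read_text(errors="ignore") for p in pages]
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            # quick_ratio() is an upper-bound estimate of similarity;
            # cheap enough for a few thousand pages on a laptop.
            score = difflib.SequenceMatcher(None, texts[i], texts[j]).quick_ratio()
            if score >= threshold:
                yield pages[i].name, pages[j].name, score

if __name__ == "__main__":
    for a, b, score in near_duplicates("discovery_pages"):
        print(f"{a} ~ {b} ({score:.0%} similar)")
```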
A Privacy Sieve could work like this: Medical records sit in a Personal Data Vault. When a TPA (third-party administrator) issues a subpoena, they don’t get the vault. They get a temporary access key. An AI gatekeeper, locally hosted and loyal only to you, scans the request, identifies the intended purpose (the neck injury), cross-references it with the medical file, auto-redacts the OBGYN history and the irrelevant pharmacy records, and delivers the “Minimum Necessary” data. Nothing more.
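To make the gatekeeper concrete, here is a hypothetical sketch of its core decision. The record categories, the purpose-to-category table, and the pre-tagged vault are all illustrative assumptions; a real sieve would classify each document with a model rather than a hand-written map.

```python
# Hypothetical sketch of a Privacy Sieve gatekeeper, not a real product.
# Every category name and mapping below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Record:
    doc_id: str
    category: str  # e.g. "orthopedic", "gynecology", "pharmacy"
    text: str

# Which record categories are plausibly relevant to which stated purpose.
RELEVANCE = {
    "cervical_spine_injury": {
        "orthopedic", "neurology", "physical_therapy",
        "imaging", "pain_management",
    },
}

def minimum_necessary(records, purpose):
    """Release only records relevant to the stated purpose; withhold the rest."""
    allowed = RELEVANCE.get(purpose, set())
    released = [r for r in records if r.category in allowed]
    withheld = [r for r in records if r.category not in allowed]
    return released, withheld

vault = [
    Record("212", "orthopedic", "MRI report, cervical spine"),
    Record("854", "gynecology", "Pap smear results"),
]
released, withheld = minimum_necessary(vault, "cervical_spine_injury")
print([r.doc_id for r in released])  # ['212']
print([r.doc_id for r in withheld])  # ['854'] (never leaves the vault)
```

The design choice that matters is the direction of the default: anything not provably relevant stays home, instead of everything not provably embarrassing getting shipped.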
Who Controls the Filter
California is beginning to push back against medical AI opacity. SB 1120 (The Physicians Make Decisions Act) mandates that a human doctor must oversee any AI-driven medical denial. But human oversight is often a rubber stamp. If a doctor reviews 500 AI denials a day, that doctor isn’t exercising judgment. They’re clicking a button.
Sovereignty doesn’t come from a human in the loop. It comes from architecture. Tools that allow us to “lease” data for specific purposes rather than “donating” it to a corporate permanent record.
The technology to protect the “Pap Smear Test” exists. I’m using it right now to help a friend outmaneuver a multibillion-dollar corporation that was counting on her being too overwhelmed to look at page 854.
If an algorithm is smart enough to predict what shoes I want based on a conversation I had near my phone, it’s smart enough to know that a cervix has nothing to do with a spine.
When the insurer controls the AI, the definition of “relevant” expands to include anything that helps deny the claim. When the patient controls it, the definition protects what should have been protected all along. Right now, control sits with the institutions that benefit from the dump, not the individuals buried under it.
I fed 2,000 pages to Claude. In an hour, it made plain how AI can shift the dynamic.



