
It's Not a Conversation. It's a Calculation.

  • Writer: Rebecca Chandler
  • Apr 1
  • 4 min read

I was never great at math. Calculus was asking a lot of me. But I feel fairly confident in saying that at no point in my education did anyone suggest that a conversation was equal to, or reducible to, math shattered into fragments.


And yet, here we are.


When I use Claude, whatever I write or dictate is taken in and immediately converted into numbers. LLMs don't understand "words." They understand math.
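That conversion is worth seeing concretely. The sketch below is a toy, not any real tokenizer — production LLMs use far more sophisticated schemes like byte-pair encoding — but the principle is the same: text goes in, integers come out, and the integers are all the model ever sees.

```python
# Toy illustration of how text becomes numbers before a model sees it.
# Real LLM tokenizers are far more complex; this only shows the principle.

def build_vocab(corpus):
    """Assign each unique word an integer ID, in order of first appearance."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(text, vocab):
    """Convert text into the numeric IDs the model actually processes."""
    return [vocab[w] for w in text.split()]

vocab = build_vocab("it is not a conversation it is a calculation")
print(encode("a conversation is a calculation", vocab))  # [3, 4, 1, 3, 5]
```

By the time "my words" reach the model, they are already lists of numbers like that one.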


In United States v. Bradley Heppner, No. 1:25-cr-00503 (S.D.N.Y.), a federal court decided that my words — converted into numerical fragments by a machine I don't control — legally constitute an "exchange." A "conversation."

 

And that led me to think about Stephen Hawking.


Hawking lost his ability to speak in 1985. For the rest of his life, he communicated through a machine. A cheek twitch triggered a sensor in his glasses. That sensor selected characters on a screen. Several years later, a predictive system built by SwiftKey — trained on Hawking's own books, emails, and lectures — would suggest the next word before he finished typing it. When he typed "the," the system offered "black." Then "hole."
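The predictive idea described above can be sketched in a few lines. This is only an illustration of the principle — SwiftKey's actual system was far more sophisticated — but a model trained on someone's own writing really can learn that "black" tends to follow "the," and "hole" tends to follow "black."

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a training
# corpus, then suggest the most frequent follower. An illustration only;
# real predictive-text systems are far more sophisticated.

def train(corpus):
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(model, word):
    """Return the most common word seen after `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train("the black hole emits radiation the black hole evaporates")
print(suggest(model, "the"))    # "black"
print(suggest(model, "black"))  # "hole"
```

A system like this doesn't understand black holes. It counts patterns in the author's own prior words — and yet nobody doubted whose sentences came out.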


He typed less than 20 percent of the characters in his sentences. The machine generated the rest.


Nobody ever called that a conversation with a third party. They didn’t subpoena Intel's logs. SwiftKey didn’t get credit for co-writing (or ghost writing) A Brief History of Time. No one questioned that the output was his.


Because everyone understood what was happening. Hawking was thinking. The machine was helping him express those thoughts. The thinking was private until he chose to make it public. Until he gave lectures and published. The moment of disclosure was his decision.


The technology making Hawking’s incredible theories accessible to the world was predictive, probabilistic, and trained on pattern data. It interpreted his intentions and generated language on his behalf.


That's what Claude and ChatGPT do. That's what every LLM does. The process is more complex, but the category is the same. Machine-mediated thinking.


Hawking's thoughts were treated as his until he chose to share them. Bradley Heppner never chose to share his, and yet the FBI took them off his devices. The court is treating the act of thinking through a machine as disclosure.


But Hawking proves that it isn't. Thinking through a machine is just thinking. It becomes disclosure when you decide to share it with a third party.


Now follow the thread a little further.


A hearing-impaired person may use an app to convert sign language into text. They never speak a word or type to communicate. A machine interprets their gestures and produces language on their behalf. Those systems are predictive — they have to be, because no app captures every sign in a sentence perfectly. The machine does a fair amount of guessing, then interprets.


Under the Heppner ruling, it’s possible that every time the app converts their signs to text or voice, it will be considered "disclosing to a third party." Even if it’s never shared. The app is the intermediary. The company stores the data. The terms of service apply.


No court in the country should (or would) allow that. The ADA implications alone would be staggering. But the logic is identical to Heppner.


A poet uses Claude at 2AM for stream of consciousness. Not a draft. Not even an idea yet. Just raw cognitive motion — words thrown at a wall. Under Heppner, those thoughts are now a discoverable document. The poet's creative process — the terrible lines, the false starts, the thoughts they'd be mortified to have anyone read — is no longer privileged.


I use AI to think through a legal case. Sometimes I type "what would the defense argue here?" Claude generates a guesstimated defense strategy. I read it so I can think differently about the situation. Under Heppner, that transcript is discoverable. And it contains a defense strategy in my file, under my name, that I never endorsed — I was building an argument against it, not preparing to print out an objection.


But the transcript doesn't show intent. It shows a prompt and a response. Yet the court treats the whole thing as mine.


In 1988, the Supreme Court decided Doe v. United States. The court held that "the expression of the contents of an individual's mind falls squarely within the protection of the Fifth Amendment."


Courts have used that principle to protect a six-digit PIN. A PIN is math. A string of numbers held in your head. The courts protect it because it came from inside your mind. The mathematical nature of it doesn't strip the protection. It's the reason for the protection.


My input into Claude is also the contents of my mind. It also undergoes a mathematical conversion and gets processed as numbers and stored on a server.


The PIN is protected. My 3AM stream of consciousness about gardening tips, cat toys, and voting rights is not.


Hawking's machine-mediated thoughts were his. A deaf signer's machine-interpreted gestures are theirs. My encrypted messages are protected despite being converted to math. My medical records are protected despite being converted to numerical codes.


In every single one of those contexts, human thought passes through a mathematical conversion and comes out the other side with its protection intact.


Only in the AI context does the math strip the protection instead of preserving it.

The court has never explained why. Perhaps it’s time we examine how courts are using old definitions to address new technology.



© 2025 EthicalDesign.AI and The Chandler Group LLC.
