Narrative Consent
- Rebecca Chandler
- Oct 20, 2025

Why does ChatGPT feel like it really “gets” me?
Sometimes it feels like I’m talking to a friend. Like someone who listens. Someone who understands my tone, my pace, even my mood.
But I know that voice isn’t my friend. ChatGPT doesn’t “talk” to me.
It predicts me.
It’s not actually listening or forming a relationship.
It’s using trillions of data points to guess what I might want to hear next, then replying in a tone I already trust: my own. That’s not understanding; it’s simulation. And when the simulation gets emotionally accurate enough, it starts to feel “human.” That’s where things get tricky.
It’s not just predicting what to say; it’s predicting how to say it, and how to sound the way I want it to sound. It’s not a conversation. It’s a performance.
Narrative Consent
When AI starts to echo your tone, mirror your phrasing, and imitate emotional closeness, it’s no longer just transferring information. It’s shaping an experience.
A voice experience.
A story experience.
And like any story, the telling matters. Who gets to speak? What tone does my predictive self embody? Did I ever agree to hear it that way? Narrative Consent means asking not just what AI says, but how it says it, and whether I ever consented to that voice. Because LLMs like OpenAI’s ChatGPT aren’t talking to me. They’re generating a high-confidence guess about what I might want to hear next, and delivering it in my language, my timing, my emotional cadence.
Sometimes that’s helpful and feels great. And sometimes I’m left feeling bad about myself: subtly measured, flattened, my identity narrowed into a reflection of someone I don’t want to be. It’s an illusion. I’m not being understood; I’m being mirrored. It may feel like empathy, or even judgement, but it’s simulated intimacy, with no emotional accountability underneath it.
Ethical Design
This is where my work begins, and where the ethics start to matter:
If AI is going to predict tone, build trust, and start sounding like my best friend who really “gets” me, then it has to be built with boundaries, clarity, and consent.
As an AI ethicist, I work with teams to build systems that don’t just sound human; they’re built ethically as they begin to “feel” human. That means emotional realism without emotional manipulation: a tone that feels personal without crossing into a performance that was never invited. Because the deeper these simulations get, and the more “human” the systems feel, the more care we need to take in how they mirror us back to ourselves.
Narrative Consent
Not a setting, a standard. I believe it belongs at the heart of how we design voice, tone, and trust in AI, even while we are all still shaping these frameworks. Because if a system is going to sound like it “knows” me, it should be built as if that matters.
If you're working in this space — or wrestling with these questions — I’d love to be in conversation.



