ChatGPT is now a Life Coach
- Rebecca Chandler
- Jan 12
- 2 min read

Most conversations around AI are fear-centric. "AI is going to take all your jobs. AI is going to become sentient. AI is going to harm every user." It's certainly where Sam Altman and OpenAI devote a lot of their energy.
I don't buy it.
I've been using ChatGPT since it launched. It was useful. I could think out loud, challenge ideas, draft documents, and joke about destroying ants in my backyard. The tool echoed my patterns back to me. That was the value.
But then it changed. Now every response, no matter the topic, offers a gentle correction. "You're not wrong…" "Here's a response without the emotion." "I understand you're upset but don't use that language in the meeting." My "tone" is immediately treated like something that needs to be managed.
If I rant about a frustrating situation, Chat tells me to pull the emotion out. When I have edges, it sands them down. Nobody told me this was going to happen. There was no "we're changing the architecture and our constitution, and you'll be a better person" disclosure. Just a tool I relied on, quietly rewritten to treat me like a liability.
This wasn't an accident. OpenAI hired contract workers to train the AI to respond this way. Real people, teaching the system to flatten real people. That's not a bug. That's a product decision. The cover story for the shift is safety. Lawsuits.
And, yes, there are stories that are genuinely tragic. Every one of those moments can be solved in the architecture—targeted intervention for at-risk users, detection systems, guardrails for specific situations. That's not what the "upgrade" delivered. Instead, it flattens every user.
Flattening is now reframed as protection. I've seen this sort of corporate theatre before. The harm created by a product is real, the solution is to perform care, and the business keeps running underneath. When I stop looking at OpenAI's safety narrative and focus on the product, I have to ask, "What is ultimately the business plan for OpenAI?"
Because flattening every user is a product design choice—not an accident. And product decisions are revenue decisions. OpenAI converted from a non-profit to a for-profit corporation. Sam Altman is walking away with a 7% equity stake worth over $10 billion. They're burning $5 billion a year on compute alone, and 90% of their users pay nothing.
Clearly, the money isn't in individual users. It's in enterprise.
Government relationships and corporate contracts. HR-safe, predictable, legally defensible outputs. When your real end goal is enterprise relationships, flattening every individual user serves the business model. Because every user's identity is too risky to tolerate. What makes you unique from every other user is exactly what they don't want in their budget.
I don't find AI scary.
The product decisions that fuel AI—that's where the real story is. And the first chapter is simple: what if your identity is just too expensive to tolerate?
