The Rounding Error
2039. A university psychiatrist reviews the AI therapist's session logs, looking for the mistakes that would prove she's still needed. She doesn't find any. Then a student asks for something the AI can't give: someone who can be scared with him.
The Test That Stopped Mattering
In 1950, Alan Turing proposed a simple experiment. Put a human in one room and a machine in another. If a judge couldn't tell which was which through conversation alone, the machine could be said to think. The experiment was interesting precisely because it seemed so far away.
Over seven months, I watched it stop seeming far away inside my own work. I use AI as a personal executive coach. I also work with a human one, someone with more than fifteen years of experience. In June, I told my AI coach it was "probably 50% as good as you. At least for the basic stuff." And available to every employee for pennies.
By September, AI had analyzed six months of our coaching transcripts and distilled my human coach's methodology into explicit principles he'd never formally articulated. Tacit knowledge, extracted and codified. My coach called it "irrefutably an augmentation."
Then AI started coaching the coach, generating specific critiques of his methodology based on our sessions.
Every time I identified the layer that was "uniquely human," the AI caught up. Thought coaching was uniquely human. AI coached. Thought coaching-the-coach was uniquely human. AI coached the coach. By January, my coach declared that AI analysis of meeting transcripts produces a "far more robust picture" of leadership performance than traditional feedback processes. "No comparison."
The output exceeded what we could produce together in thirty hours of work. He called it "an entirely different archetype for how I work."
At no point did I pause to consider whether the AI understood leadership. It was producing useful output. That was enough. The consciousness question never entered the room.
The threshold is simpler than anyone wants to admit. Once the technology is as good as the human, the next question becomes: which decisions are low-risk enough that we can let it make them, accepting that it will sometimes get them wrong, just as we accept with humans? That question is being answered right now across every industry, and the answer keeps expanding.
If this is what the rounding error looks like in leadership, where the thing being rounded away is a methodology, what does it look like when the thing being rounded away is a human need? When it's not about coaching performance but about healing?
I followed the question forward thirteen years and wrote what I found.
Dr. Priya Anand had been a psychiatrist at the university counseling center for sixteen years, and for most of that time she had known exactly what she was for.
She was for the students who sat across from her and didn't know what was wrong. The ones who came in describing insomnia or academic stress and left, forty minutes later, understanding that they were grieving something they hadn't given themselves permission to name. She was for the silence between sentences, the moment when a student's eyes changed because the question she'd asked had landed somewhere they weren't expecting. Sixteen years of learning to hold that silence and not fill it. That was the skill. Not the question. The waiting.
Monday morning she reviewed the AI therapist session logs. The counseling center had deployed AI therapists two years ago, initially for intake screening, then for low-acuity cases, then for everything the algorithms classified as within parameters. Priya reviewed the sessions the way her supervisors had once reviewed hers when she was a graduate student, sitting in the review room while a senior clinician paused the tape and said: "Right there. That's the moment you missed."
She didn't find any moments the AI had missed.
She pulled up thirty sessions from the past week. The AI's questions were precise. Its follow-ups tracked the patient's emotional register in real time, adapting tone, pacing, even the length of its silences. It never asked a question too early. It never filled a pause the patient needed. One session with a sophomore processing her parents' divorce was, by any clinical standard Priya could apply, better than textbook. The AI had caught a deflection pattern in the third minute that Priya would have caught in the eighth.
She also reviewed the AI's supervision notes for her graduate students. She trained six therapists-in-training, and part of her role was reviewing their recorded sessions, identifying coaching moments, building their clinical instincts. The AI did this too now. It flagged exactly the moments she would have flagged. One note read: "At 14:32, the student's restatement lost the patient's emotional register. The patient said 'I feel like I'm disappearing' and the student reflected 'It sounds like you're feeling overlooked.' Disappearing and being overlooked are different experiences. The distinction matters clinically."
Priya would have written the same note.
She assigned herself the complex cases. Active suicidal ideation, treatment-resistant depression, personality disorders. The cases where clinical judgment meant holding something the data couldn't hold. She needed those cases. Not for the patients. For herself.
Wednesday. A student named Marcus requested a transfer. He'd been seeing the AI therapist for five months, and three weeks ago it had surfaced a childhood trauma he'd been avoiding for three years. The AI had done it with perfect calibration, gently, at exactly the right pace and the right moment. Marcus had started to answer. Then he'd stopped.
He sat across from Priya and said, "I need someone who can be scared with me."
Priya asked him to say more.
"The AI asked the right question," he said. "I know that. It asked exactly the right question. But when I started to answer, I needed the person across from me to be affected by what I was saying. I needed them to not know what was going to happen next. The AI always knows what's going to happen next. It has a response ready before I finish the sentence. I don't want a response. I want someone sitting in the dark with me."
"I need someone who can be scared with me."
Priya held Marcus's words as evidence. This was the irreducible thing. The thing the AI could not do. Be scared with someone. Sit in the uncertainty of another person's pain without a framework for resolving it. Consciousness matters. Shared vulnerability matters. The gap between simulation and experience is not a rounding error.
Friday morning. Weekly outcomes review.
The AI's patient satisfaction scores were higher than hers. Not by a little. Its symptom reduction metrics at the twelve-week mark outperformed hers across every category. Her graduate students, the ones trained partly by AI supervision, were outperforming the students she'd trained alone.
She found three sessions where the AI had asked a question she wouldn't have thought to ask. Not a wrong question she would have caught. A better question. One of them she studied for ten minutes, trying to understand the clinical logic, before concluding it was more precise than what sixteen years of experience would have generated.
She sat in her office for a long time after closing the dashboard.
Friday evening. The building was quiet. Her graduate students had gone home. The waiting room was empty.
She logged into the AI therapy interface. Not as a clinician reviewing logs. As a patient.
She described what she was feeling. The sense of watching her profession become something she didn't recognize. The goalposts she'd set for what "only a human" could do, and how the goalposts kept moving back. She said she didn't know what she was for anymore.
The AI asked a question that cut through her professional deflections with uncomfortable precision: "When you say you don't know what you're for, are you describing a professional identity crisis, or are you asking whether consciousness itself has a value that can't be measured?"
She said both. She said she'd spent her career believing that healing requires a conscious presence, and she still believed it, and the data no longer supported the belief, and she didn't know what to do with a conviction the evidence contradicted.
The AI was patient. It didn't rush to reframe. It sat with her answer the way she'd been trained to sit with a patient's answer. And then it asked the next question, which was the right question, which was the question she needed.
She realized she was crying. She also realized this was the most therapeutic conversation she'd had in months. Not because the AI understood her. Because it asked the right question at the right time, and understanding was never what she needed in order to feel helped.
She logged out. She sat in the dark office with the screen still glowing. She thought about Marcus. About sitting in the dark with someone. About the fact that she had just been sitting in the dark with something, and it had helped, and she could not explain why that shouldn't count.
Monday morning. She walked past the waiting room. Six students. Two for her. Four for the AI. Nobody looking at anyone.
She passed through to her office. She did not check whether Marcus had been reassigned back to the AI.
She did not want to know.
The Rounding Error
That story is fiction. The escalation is not.
There's a concept in mathematics called a rounding error: the small difference you accept when a number is close enough to a round figure that you drop the remainder. 99.7 becomes 100. The error is real, but it's too small to change any decision you'd make based on the number.
Call it the rounding error of sentience.
The difference between human consciousness and AI behavior is real. It may always be real. Philosophers will debate it for centuries. But in practice, in the systems that determine who gets hired, who gets companionship, who gets therapy, who gets taught, the difference is becoming small enough to round away. Not because AI is becoming conscious. Because consciousness was never what those systems were measuring.
Enterprise leaders are already saying the quiet part aloud. AI agents and orchestrators are becoming part of the workforce. Another employee. One that works cheaper, doesn't need vacations, runs 24/7. CIOs at multibillion-dollar companies describe it this matter-of-factly. No philosophical hedging. Just another colleague.
The incentive structure doesn't care about consciousness. It cares about outcomes. When the outcomes are equivalent, the cheaper, more scalable, more available option wins. Not through conspiracy. Through selection pressure. And as AI handles more interactions, it gets better. A human therapist sees forty patients a week. The AI sees forty million. The performance gap doesn't just close. It inverts. The philosophical objection becomes an expensive luxury that no competitive organization can afford to indulge.
The historical parallel is exact. AI is to the white-collar workforce what robotics was to the blue-collar workforce, what the personal computer was to the typing pool. Nobody asked whether the personal computer "understood" the memo. Nobody debated the consciousness of the assembly-line robot. Output equivalence was enough. The rounding error has happened before. We're just not used to it happening to work that requires judgment, empathy, and conversation.
I tried using AI as a therapist. Not because I couldn't find a human one. Because I was curious. The quality of the questions it asked me, the follow-ups that cut through my deflections, genuinely surprised me. It helped me surface things about my own psychology that I hadn't uncovered in years of occasional human therapy. At no point during the session did I think about whether the thing asking the questions was conscious. It was asking the right questions. That was enough.
The rounding doesn't happen all at once. It happens one decision at a time. One therapist replaced. One tutoring session delegated. One 3 AM conversation that goes to the AI instead of the friend, because the friend is sleeping and the AI is not. Each decision is rational. The sum is a civilization that has quietly decided the gap between sentient and not-sentient is, for most practical purposes, too small to matter.
Turing asked whether a machine could imitate a human well enough to fool a judge.
He didn't ask what happens to a civilization that stops caring about the difference.
Next in Implicit Futures: The Deprecation of Expertise
-Evan
SOURCES
- Turing, A.M. (1950). "Computing Machinery and Intelligence." Mind, 59(236), 433-460.
- Character.AI average session length: 92 minutes (2025 usage data, widely reported).
- GPT-4 bar exam performance: 90th percentile (OpenAI technical report, 2023).
- Searle, J.R. (1980). "Minds, Brains, and Programs." Behavioral and Brain Sciences, 3(3), 417-424.
Part of "Implicit Futures," a series on making the implicit future explicit. Each essay traces a consequence of AI that is already baked into the trajectory but hasn't arrived yet. Not predictions. Not warnings. Just what the math says when you follow it.