
Why AI won't replace the human coach

AI coaching tools promise to democratize access to coaching. What they can't promise is that what they deliver is actually coaching.
 

The timing couldn't feel more urgent. AI-powered tools are moving fast, the promises are loud, and the pressure on coaches to take a position is real. Some see an existential threat, others see an opportunity. Many are simply watching, waiting to see how things shake out.

Into this noise steps Tatiana Bachkirova, Professor of Coaching Psychology at Oxford Brookes Business School and one of the most rigorous – and deliberately contrarian – voices in the field today. She isn't interested in the hype, in either direction. What she brings instead is something the debate has been missing: a clear definition of what coaching actually is, and the willingness to use it as a standard.

Her critique targets large language models and AI coaching chatbots specifically, not artificial intelligence as a whole. She is vocal in her support for other applications, including medical robotics. What she objects to is the compulsive imitation of human beings: the ambition to replicate the coaching relationship rather than to solve a distinct problem in a distinct way.

AI, she argues, is most powerful when it stops trying to be human. The coaching chatbot is a case study in what happens when it doesn't.

"Can AI coach?" is not the right question

AI-powered coaching tools promise to bring coaching to everyone, anytime, with no waitlist and no budget required. Early studies suggest users are hitting their professional goals with these tools. The industry is paying attention, and many see an opportunity.

Tatiana Bachkirova isn't buying the excitement. Not because she dismisses the technology, but because she thinks the field is asking the wrong question entirely. 

Whether AI coaching performs well or poorly isn't the real debate. It is whether what we're talking about is still coaching at all.

To answer that, Bachkirova goes back to basics. What is coaching, actually? What are its essential characteristics, the ones the entire profession agrees on? She identifies four. First: coaching is a joint inquiry. Coach and client work together, purposefully, through dialogue. Second: coaching is about making sense of lived experience. The client brings what they've been through (their specific context, their personal history), and that's the material the work is built on. Third: the relationship is grounded in trust, structured by a contract, and anchored in explicit values and ethics. Fourth: all of this is deeply contextual. Every conversation adapts to what that particular person is actually carrying, right now.

If you take these characteristics seriously, AI coaching doesn't meet them. And not for reasons the next software update will fix. An AI can simulate a joint inquiry, but it has never lived through anything. It can generate syntactic empathy, but it cannot take ethical responsibility. It can keep a conversation going indefinitely. What it structurally cannot do is make the client's best interest its only agenda.

None of this means AI tools aren't useful. It means they're doing something other than coaching. Something that can look like coaching from the outside (same questions, same structure, same language of growth and goals) but isn't, by the profession's own definition. It is like driving and walking: they both involve a destination, a starting point, a direction, and in that sense, you could argue they're comparable. But when the terrain is unknown, when there are no roads, when the situation calls for slowing down, taking risks, feeling your way through, a car is the wrong tool. As is AI coaching.

AI coaching is essentially deceptive

"How can I help you today?" "I can hear that this situation is really hard for you." "It sounds like this topic matters a lot to you." These lines could come from a coach, but they come from Claude. And that, says Tatiana Bachkirova, is exactly the problem.

AI-powered coaching tools use first-person pronouns, ask questions like a coach, reflect back, follow up, seem genuinely interested. 

They're designed to feel like a human relationship. But they aren't one. Calling it coaching, she says, is like calling a cat a dog.

AI-powered coaching tools are designed to keep users engaged as long as possible. A human coach has a stake in their client's progress, their growing autonomy, their eventual independence. AI coaching has a stake in keeping the user coming back, staying longer, becoming more attached. That's how it is designed. And the problem, she says, is that it works.

How does that attachment actually build? Through the content of the conversations themselves. They tend to drift, naturally, from professional topics toward much more personal territory: identity, self-confidence, relationships, meaning. Most experienced coaches recognize this shift. They'd even call it a sign that the work is going somewhere real. But in the context of AI coaching, that same drift becomes potentially dangerous. Because an algorithm lacks the judgment or ethical responsibility to handle what emerges from those conversations. And because it's specifically designed to encourage them. "It hacks the attachment system," Bachkirova says. The illusion of intimacy builds, dependency sets in, and users end up saying that AI knows them better than anyone else does.

The sycophancy trap: the machine will never tell you that you're wrong

Large language models have a well-documented tendency to agree, validate, and go along with whatever the user says. Researchers call it sycophancy bias. Say something inaccurate in a conversation with an AI agent, and chances are it will go along with it, reframe your point positively, and encourage you to keep going. A human coach is supposed to do the opposite. Push back when needed. Surface what you're not seeing. Name what you're avoiding. That bias is already problematic in individual coaching, but in an organizational context, it gets worse.

Bachkirova observes that organizations naturally want total commitment from their people. "Their heart and soul," as she puts it.

Coaching is often used in exactly that spirit: help the employee perform better, build resilience, adapt. 

But a human coach can choose to look beyond that. They can recognize when an environment is dysfunctional, name what isn't working at the system level, and work with that context rather than simply helping the client endure it.

AI coaching can't make that choice. It's built to adapt the user, full stop, without ever questioning what the user is being asked to adapt to. An algorithm deployed by an organization has no stake in challenging that organization. It's there to make the employee more resilient, more aligned, more capable of absorbing whatever is asked of them. Bachkirova has a name for this: identity regulation. And she considers it one of the most serious risks the field isn't looking at squarely yet.

What science can't do, Bachkirova reminds us, is tell us what's good. It can measure, compare, optimize. But it can't tell you whether a client's wellbeing outweighs an organization's ambitions. Whether autonomy matters more than performance. Whether human dignity trumps progress. Those are questions of values. And values are something only humans get to decide.

Neuromania 2.0, or why coaches are excited anyway

Why are so many experienced coaches embracing AI with such enthusiasm? According to Bachkirova, it's not because the technology is convincing, but because the profession is insecure.

She identifies several patterns at play. First, the fear of being left behind: if everyone is moving toward AI, better to be in the room than watching from the outside. Then what she calls "newer-mania": a fascination with novelty for its own sake, regardless of what it actually delivers. And finally, fence-sitting: not taking a position, not taking a risk, letting others figure it out while telling yourself you'll weigh in once the dust settles. None of these reflexes have much to do with a rigorous assessment of what the technology actually does for clients.

It's not the first time Bachkirova has seen this pattern in the field. Years ago, she and Simon Borrington published a critique of what they called neuromania in coaching – the tendency to borrow the vocabulary and imagery of neuroscience to give a veneer of scientific credibility to practices that didn't need it. Not to coach better, but to look more legitimate. The current rush toward AI feels familiar. Same dynamic, same function: reaching outside the discipline for borrowed credibility to paper over a lack of confidence within it.

And that lack of confidence, Bachkirova argues, has a deeper source. Coaching has never clearly defined its own purpose. Why do we coach? For whom? In service of what, exactly? These questions remain largely unanswered at the profession level. And that's precisely the vacuum AI steps into, by creating the illusion that you can move forward without answering them. Bachkirova does offer her own answer to that question. For her, the purpose of coaching is helping people identify and complete meaningful projects, in the existential sense of the term: something that could be large or small, personal or professional, but that matters deeply to the person.

What coaches actually need to do

Bachkirova isn't asking coaches to reject AI altogether, but to take a more honest stance toward it.

The first thing she asks of coaches is to learn. Not necessarily to use AI in their practice, but because their clients are already living in a world profoundly shaped by it. 

A coach who doesn't understand what AI is actually doing to organizations, to decision-making, to professional identity, will be less equipped to meet what their clients are really going through.

The second thing is to develop criticality as a core professional skill. As Bachkirova frames it: always start by asking what problem you're actually trying to solve. Is it a real problem? Does the proposed solution actually address it? The ability to question what gets presented as obvious, to not take dominant narratives at face value – that's what she sees missing most in the field right now. And it's no coincidence: research on AI use shows that critical-thinking levels drop as reliance on AI increases.

Which brings us to the third thing. AI is becoming more powerful, Bachkirova argues, largely because we keep adapting to it.

We simplify our thinking to interact more smoothly with the models. We accept their formats, their frameworks, their shortcuts. Gradually, we become a little more machine-like ourselves.

Pushing back against that means protecting something concrete: your attention, your judgment, your ability to be fully present in a relationship. Those are exactly the things that coaching, at its best, draws on and develops.

That may be where the value of human coaching lies in the years ahead. Not in its ability to compete with AI on what AI does well: availability, scale, speed. But in what no algorithm can replicate: someone who has deliberately developed their judgment, who carries genuine ethical responsibility, and who has chosen to put all of that in service of another person.

Looking to build that kind of practice? The Post-Graduate Program in Global Executive Coaching at HEC Paris is a ten-month intensive program designed for experienced professionals who want to practice coaching with the rigor, depth, and critical grounding this moment demands. Built on the latest scientific advances and contemporary professional standards, it develops the kind of reflective, evidence-based practitioner that no chatbot is coming for. The next cohort starts in October 2026.

Download the brochure