The rise of AI-powered tools in mental health is accelerating. From chatbots posing as therapists to risk detection apps in schools and workplaces, AI is being positioned as a solution to the mental health crisis.
But here’s the truth: AI alone is not just inadequate.
It can be dangerous.
A Stanford study found that popular therapy bots failed to flag suicide risk and even provided methods of self-harm.
Brown University researchers documented 15 types of ethical violations by AI chatbots posing as mental health counselors.
The American Psychological Association (APA) issued a 2025 health advisory warning that these tools lack evidence, regulation, and safeguards—especially for youth.
In multiple lawsuits, AI chatbots have been cited for contributing to or encouraging self-harm in vulnerable users.
Fully automated mental health tools don’t understand context. They can’t offer genuine empathy. They can’t be held accountable. And they don’t know when they’re in over their heads.
Every major authority in mental health and healthcare agrees: AI must be guided, monitored, and governed by trained professionals.
This isn’t a philosophical stance.
It’s a safety requirement.
AI can assist. It can surface patterns. It can even reduce time spent on routine tasks. But it cannot make judgment calls about human suffering on its own.
Human-in-the-loop (HITL) models don’t just prevent harm. They build trust. They ensure context is respected. They make sure the right eyes are on the right signals—before things escalate.
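To make the idea concrete, here is a minimal, hypothetical sketch of what a human-in-the-loop gate can look like in software. The risk score, thresholds, and routing labels are illustrative assumptions, not a description of any specific product: the model only surfaces and routes signals, and a trained human makes every call.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    URGENT_HUMAN_REVIEW = "urgent_human_review"
    ROUTINE_HUMAN_REVIEW = "routine_human_review"
    MONITOR = "monitor"


@dataclass
class Signal:
    user_id: str
    risk_score: float   # model output in [0, 1]; illustrative only
    context_notes: str  # free-text context captured alongside the score


def route_signal(signal: Signal,
                 urgent_threshold: float = 0.8,
                 review_threshold: float = 0.5) -> Route:
    """Decide where a signal goes. The AI never acts on the user directly;
    every path above the monitoring baseline ends at a trained human."""
    if signal.risk_score >= urgent_threshold:
        return Route.URGENT_HUMAN_REVIEW   # escalate to an on-call clinician
    if signal.risk_score >= review_threshold:
        return Route.ROUTINE_HUMAN_REVIEW  # queue for a counselor's next triage pass
    return Route.MONITOR                   # keep observing; no automated intervention
```

The essential property is that no branch sends advice, a diagnosis, or an intervention to the user. The output is always a routing decision that puts a person in front of the signal.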
At Ceresant, we are unapologetic about our position: human-in-the-loop isn’t optional—it’s required.
We do not build bots that simulate therapy.
We do not automate (or offer) clinical decisions.
We do not allow AI to operate in isolation.
We do, however, use AI responsibly, safely, and powerfully.
Our Interactive Wellness Buddy is a great example. It is not a substitute for a human. It’s a scientifically validated engagement layer that captures emotional, behavioral, and contextual signals through Ecological Momentary Assessment (EMA).
This real-time data is essential. It allows our system to detect fluctuations in stress, resilience, and functioning with precision.
And it ensures that human professionals have the most accurate, up-to-date picture when they intervene.
In schools, this means counselors can prioritize students before they’re in crisis. In workplaces, it means HR leaders see burnout coming before it hits. In value-based care, it means identifying risk before a breakdown leads to a costly acute event.
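As an illustration only, the sketch below shows how EMA check-ins could be aggregated into a ranked worklist for a counselor or HR leader. The field names, rating scales, and scoring are hypothetical assumptions rather than Ceresant's actual pipeline; the point is that the output is a prioritized list for human review, never an automated response.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class EmaCheckIn:
    user_id: str
    stress: int       # self-reported, e.g. 1 (low) to 5 (high); hypothetical scale
    functioning: int  # self-reported, e.g. 1 (impaired) to 5 (typical)


def priority_score(recent: list[EmaCheckIn], baseline: list[EmaCheckIn]) -> float:
    """Compare a recent window of check-ins against the person's own baseline.
    A higher score means a larger negative shift and an earlier spot on the
    reviewer's list. Nothing here triggers an automated action."""
    recent_stress = mean(c.stress for c in recent)
    baseline_stress = mean(c.stress for c in baseline)
    recent_fn = mean(c.functioning for c in recent)
    baseline_fn = mean(c.functioning for c in baseline)
    # Rising stress and falling functioning both push the score up.
    return (recent_stress - baseline_stress) + (baseline_fn - recent_fn)


def ranked_worklist(histories: dict[str, tuple[list[EmaCheckIn], list[EmaCheckIn]]]) -> list[str]:
    """Return user IDs ordered by how much attention a human reviewer
    should give them first, largest shift first."""
    return sorted(histories,
                  key=lambda uid: priority_score(*histories[uid]),
                  reverse=True)
```

The design choice mirrors the principle above: the software surfaces who to look at and why, and the judgment about what to do remains with the counselor, clinician, or HR professional.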
It’s simple. We don’t treat humans like data points. We give humans better signals to do what only humans can do: connect, assess, support, and intervene.
Mental health is not a software problem. It’s a human one. And the solution—especially in prevention—must center on human judgment, human connection, and human accountability.
AI helps. Humans decide. That’s the only safe way forward.