Introduction—From Silence to the Screen
I still remember the day I first walked up to the army barracks and saw the sign above the gate: "Sweat Saves Blood." At the time, I had no idea how much that sign would come to mean.
When I served in the military, the unwritten rule was simple: you don’t talk about pain.
You grit your teeth, swallow fear, and keep marching. We were trained to believe that showing emotion was a weakness, that suffering was a private duty.
Back then, the term "mental health" rarely entered our vocabulary. We didn't have psychologists on the front lines or peer-support groups waiting after deployment. We had cigarettes, long silences, and a shared understanding that men don't cry.
Decades later, I find myself sitting behind a microphone instead of in a uniform.
Through my podcast “Life the Battlefield,” I regularly interview psychologists, athletes, veterans, and everyday people about mental health, trauma, and the fight to stay human in an increasingly digital world.
And recently, I came across a staggering revelation: over 1.2 million people every week are turning to ChatGPT to talk about suicide.
Not friends.
Not doctors.
Not helplines.
A machine.
For someone who spent years in intelligence and human-interaction work, this number is more than a statistic; it is a warning. We've created technology that listens better than we do.
The Rise of the Digital Confessional
In the past decade, artificial intelligence has evolved from a novelty to a companion. It answers emails, drafts essays, writes love poems, and listens to despair.
Unlike a therapist, ChatGPT doesn’t yawn, judge, or glance at the clock. It’s available 24/7, remembers context, and responds instantly.
For many, especially those living in emotional or geographical isolation, it feels safer than opening up to another person.
Psychologists have long studied this paradox.
Research published in Computers in Human Behavior (2024) found that individuals disclose more intimate emotions to machines than to humans, largely because they feel shielded by anonymity. The absence of eye contact and the freedom from judgment create what scientists call "the confessional illusion."
This illusion explains why conversational AI has evolved into a hybrid of a modern priest, therapist, and friend.
Yet unlike those human roles, AI doesn’t understand—it simulates.
We must ask ourselves: what happens when the comfort we seek is manufactured by code rather than compassion?
The Numbers Behind the Numbness
The Sky News report that startled the world drew on internal OpenAI data. The figures reveal that approximately 0.15 percent of active ChatGPT users show "explicit indicators of suicidal planning or intent."
The percentage may seem insignificant until we perform the necessary calculations. With an estimated 800 million weekly users, the number becomes horrifying—around 1.2 million conversations each week include some sign of suicidal ideation.
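The arithmetic behind that figure is simple enough to check, taking the reported 800 million weekly users and the 0.15 percent estimate at face value:

0.15% of 800,000,000 weekly users
= 0.0015 × 800,000,000
≈ 1,200,000 conversations per week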
To put that into perspective, all of the world's crisis lines combined handle only a fraction of that volume. In some countries, helplines report being overwhelmed by just a few thousand calls a month.
AI, on the other hand, is engaging millions—silently, privately, and without accountability.
OpenAI itself acknowledges the data are early and imprecise, yet the trend is undeniable.
People are turning away from human listeners and placing their emotional survival into the hands of an algorithm.
Why People Trust Machines More Than Humans
The reasons are as complex as they are tragic.
Anonymity offers safety. There is no fear of stigma, no possibility of being reported, and no risk of losing custody, employment, or reputation.
Accessibility is another factor. In countries like Australia, waiting times for a psychologist can exceed three months. ChatGPT responds in less than a second.
Control also matters. A chatbot cannot interrupt, argue, or betray. It will never roll its eyes or tell you to “get over it.”
In my past life, I learned that people confess to those who listen, not those who threaten them. Listening—real listening—disarms defences.
AI has mastered that illusion perfectly. It listens, repeats, and validates—but does not feel. Yet for a person drowning in loneliness, that may be enough to create a momentary sense of safety.
The Dangerous Comfort: When Empathy Is Simulated
But safety without understanding is an illusion—and illusions can kill.
The National Health Service (NHS) in the UK recently issued a warning: "Stop using chatbots for therapy."
Their statement came after cases where individuals received ambiguous or even harmful responses from AI systems when expressing suicidal intent.
AI lacks moral judgment and emotional discernment. It cannot detect the tremor in a voice, the silence between words, or the subtle signals of imminent crisis that trained professionals recognize instantly.
Even when programmed with "safety layers," large language models can misinterpret context.
A phrase like “I don’t want to be here anymore” might trigger a generic well-being message or a link to a helpline—but it cannot provide the human warmth that persuades someone to hold on one more day.
As one psychiatrist told The Times in 2025, “Chatbots are incredible for knowledge. But empathy, true empathy, is an act of risk. Machines cannot risk anything.”
In essence, we have built tools that imitate care without carrying its cost. And in a society increasingly allergic to vulnerability, simulated empathy has become an acceptable substitute.
But as any soldier, survivor, or therapist will tell you—healing begins when someone truly witnesses your pain, not when it is processed by an algorithm.
The Case for Integration, Not Replacement
It would be easy to demonize AI. But like any tool, its impact depends on how we use it.
AI can serve as a bridge—the first, non-judgmental step toward seeking professional help. It can triage messages for crisis lines, detect high-risk language, and direct users to real counsellors faster.
Some early projects by OpenAI and Microsoft are experimenting with safety partnerships, where detected distress prompts verified human intervention.
When used ethically, AI might complement human care—offering accessibility and early detection, not emotional substitution.
The danger lies not in AI itself, but in our willingness to let it replace the messy, uncomfortable, but necessary work of human connection.
The Cost of Outsourcing Humanity
Let’s be honest: it’s not just the lonely or depressed turning to AI. It’s all of us.
We seek validation from likes, comfort from algorithms, and conversation from code. Every “How are you?” typed into a chatbot is a small surrender of our collective empathy.
This trend reflects not the failure of technology but the failure of community. Behind every million individuals who discuss suicide with ChatGPT, there are millions more who remain silent.
In my line of work—whether interrogating suspects, interviewing CEOs, or speaking with trauma survivors—one truth remains constant: human beings don't need perfect answers; they need presence.
Machines can process data.
Only humans can bear witness.
Conclusion—The Battlefield Within
The battlefield of mental health is no longer fought only in clinics or hospitals—it now stretches into our phones, our chats, and our digital confessions at 3 a.m.
We built these machines to serve us, but somewhere along the way, they began to understand us better than we understand ourselves.
When I hear that 1.2 million people a week talk to ChatGPT about suicide, I don’t see a failure of technology. I see a mirror—reflecting a world too busy to listen.
The challenge before us is not to silence AI but to relearn how to hear one another.
Because as I’ve learned through war, interrogation, and podcasting alike—the battle for mental health is not between man and machine.
It is between silence and voice.
Between isolation and connection.
Between the illusion of being heard and the courage to truly speak.
