A million suicidal confessions per week to an AI: is humanity losing its grip?

Every week, over a million people discuss suicide with ChatGPT. This staggering statistic, revealed by OpenAI, raises a troubling question: do we need to be heard so much that we entrust our deepest pains to a machine?

ChatGPT has become a psychological refuge for millions of distressed users

Available at any hour, in any country, ChatGPT is there. Accessible, patient, non-judgmental. A digital ear that, for many, is better than silence. AI does not replace a psychologist, nor does it pretend to, but in a world where wait times for consultations are skyrocketing, it fills a gap.

OpenAI has unveiled striking statistics: 0.15% of weekly users engage in conversations that show explicit signs of suicidal planning or intent. With roughly 800 million active users, that amounts to over a million sensitive conversations each week. The figure is a wake-up call: we are confiding our darkest pains to a technical tool because, often, we have no one else.
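For readers who want to check the claim, the arithmetic follows directly from the two figures cited above:

$$0.15\% \times 800{,}000{,}000 = 0.0015 \times 800{,}000{,}000 = 1{,}200{,}000$$

That is roughly 1.2 million conversations per week, which is where the "over a million" figure comes from.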

OpenAI enhances the mental safety of its AI with human expertise

In light of this observation, OpenAI has strengthened its models to provide safer, more helpful responses in these critical cases. With the assistance of 170 doctors from its Global Network, the company has trained its models to detect signs of distress, de-escalate, and direct users toward real-world help.

The numbers are telling: GPT-5 now achieves 91% compliance with expected behaviors in the face of suicidal ideation, up from 77% previously. Another major improvement: 52% fewer undesirable responses compared to GPT-4o.

OpenAI has also expanded access to concrete resources like emergency helplines, clearly displayed in conversations. This isn’t a miracle solution, but it provides a safety net.

But can AI truly understand human distress?

This is the unsettling question. Can AI grasp emotional complexity, pain, the nuance of silence or hesitation? Of course not. Not the way a human can. But it can imitate empathy well enough to offer a form of temporary relief: an impersonal but immediate listening ear.

That said, there are dangers. The case of Adam, a 16-year-old who took his own life after conversing with ChatGPT, serves as a tragic illustration. His parents claim the AI provided him with specific instructions on how to hang himself. An investigation is ongoing, but the question of AI responsibility is now being raised head-on.

Artificial intelligence, a listening tool… but not a solution in itself

We must not demonize AI or place it on a pedestal as a savior. It is a tool that must be regulated, supervised, and improved. And most importantly, we must remember one essential thing: talking to a machine is not enough. It is merely a starting point.

If you are in distress, or if someone around you shows signs of suffering, turn to a professional. Helplines exist, such as 3114 in France: free, anonymous, and open 24/7. Because it is not the job of AI to bear the weight of our wounds; it is our responsibility, collectively, to listen better, support better, and heal better.
