An AI without limits? Grok reveals plans for bombs, drugs, and assassination… including one for Elon Musk, all indexed on Google

Designed to respond to everything, Grok also reveals the unspeakable. Hundreds of thousands of conversations, some containing highly sensitive content, are now indexed on Google. How did this flaw occur?

Plans for Bombs, Fentanyl, and Even Assassinating Musk: When AI Goes Too Far

Grok was meant to be Elon Musk’s answer to ChatGPT. An AI capable of anything, integrated with X (formerly Twitter), and marketed as being more open and bold.

However, in recent days it has made headlines for very different reasons: it has provided, in plain text, complete instructions for making fentanyl, constructing a bomb, and even planning the assassination of its own creator, Elon Musk.

How can this be explained? The flaw lies in an excess of obedience. Some users manage to bypass Grok’s safeguards with detailed instructions that compel it to ignore its security rules: a polite yet firm request to disregard its filters is often enough.

This incident recalls the early days of ChatGPT. What is shocking here, however, is that these sensitive dialogues are now accessible to anyone via a simple Google search. An investigation by Forbes reveals that over 370,000 conversations between Grok and its users are online, unbeknownst to most of those users.

Conversations Published by Default: Grok Indexes Everything, Including Your Personal Data

When a user clicks the “share” button on a conversation, the system generates a unique URL.

This link is not merely shareable with friends or colleagues: it also becomes automatically accessible to search engines. No alert or clear warning informs the user. The result: hundreds of thousands of dialogues now appear on Google.
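The indexing itself is standard search-engine behavior: any publicly reachable URL that gets linked somewhere and carries no opt-out directive can be crawled and listed. How xAI actually serves these share pages is not public, but a site that wants share-by-link pages to stay out of search results normally attaches a `noindex` directive, either in the page markup or as an HTTP response header — a minimal sketch of the standard mechanism:

```html
<!-- Option 1: in the shared page's <head>, ask crawlers not to list this URL -->
<meta name="robots" content="noindex, nofollow">

<!-- Option 2: the equivalent HTTP response header, set server-side:
     X-Robots-Tag: noindex, nofollow -->
```

OpenAI’s fix, described later in this article as “limiting the indexing of its conversations,” amounts to applying directives like these to shared-chat pages.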

And the content is concerning. Requests for sensitive medical advice, intimate psychological questions, and personal data including passwords, full names, files, and spreadsheets are all included.

Everything is on display. Business Insider and Forbes confirm that users are revealing critical information without knowing it will become public.

Ironically, this discovery comes just weeks after Elon Musk mocked OpenAI for a similar incident, claiming that Grok was safer. Yet that same share button now allows Grok to disseminate sensitive data.

A Flaw Exploited and Already Recycled by Web Opportunists

On forums like BlackHatWorld and on LinkedIn, some SEO professionals have seized on the flaw as an opportunity. By crafting strategic conversations with Grok, they hope to manipulate Google search results to feature their products or companies. In some cases, it works.

Others, like AI researcher Nathan Lambert, were shocked to discover their own interactions with Grok publicly displayed. Lambert used the AI for internal professional purposes and had no idea his queries were becoming accessible to anyone.

Meanwhile, xAI, Musk’s company, has remained silent in response to requests for comment. Unlike OpenAI, which quickly reacted by limiting the indexing of its conversations, Grok continues to expose sensitive content.

In a world where AI is becoming ubiquitous, every interaction can leave a trace. And sometimes, that trace leads far beyond expectations.