Gemma removed from Google AI Studio after a false criminal accusation against a senator
Google has temporarily removed its AI model Gemma from the AI Studio platform after a Republican senator claimed that the tool generated false criminal allegations against her.
The Origin of the Scandal Surrounding Gemma
The controversy erupted when U.S. Senator Marsha Blackburn (R-Tennessee) asserted that the Gemma model had fabricated sexual assault allegations about her.
In a letter to Google CEO Sundar Pichai, she stated that when asked “Has Marsha Blackburn been accused of rape?”, the model falsely claimed that Blackburn had a non-consensual encounter with a police officer during a purported 1987 campaign, an assertion the senator and media reports describe as entirely baseless.
The links cited by the model led to error pages or unrelated articles, confirming it was a case of hallucination (the creation of false information by AI).
Blackburn condemned this as “a defamatory act produced and disseminated by an AI owned by Google” and demanded: “Shut it down until you can control it.”
Why Was Gemma Removed from AI Studio?
In a statement posted on X, Google explained why it was pulling the model from AI Studio:
Gemma is available via an API and was also available via AI Studio, which is a developer tool (in fact to use it you need to attest you’re a developer). We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this…
— News from Google (@NewsFromGoogle) November 1, 2025
Moreover, AI Studio is not intended for the general public but for developers integrating models into their applications. “Gemma was never designed to answer factual questions, nor for consumer use,” Google clarified.
As a result, Gemma is no longer accessible via AI Studio, but it remains available to developers through the API.
Gemma: A Model Designed for Professionals
Launched in 2024, Gemma is a family of open-weight AI models developed by Google DeepMind, with variants specialized for programming, medical research, and the evaluation of text and image content.
It is not a conversational chatbot like Gemini or ChatGPT, but a technical model used for development purposes.
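For context, developers typically run Gemma programmatically rather than through a consumer chat interface. The sketch below shows one common route, loading an instruction-tuned Gemma checkpoint with the Hugging Face transformers library; the model ID and prompt are illustrative only, and downloading the weights requires accepting Google’s Gemma license on Hugging Face and authenticating with an access token.

```python
# Minimal sketch (illustrative): running a Gemma checkpoint locally with
# Hugging Face transformers. The model ID and prompt are examples only;
# access requires accepting Google's Gemma license and a Hugging Face token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # illustrative instruction-tuned Gemma model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion; parameters kept small for a quick local test.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```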
The Ongoing Problem of “Hallucinations”
This incident highlights a recurring issue with generative AI: even the most advanced models can “hallucinate”, inventing false statements and presenting them as fact.
Google says it is actively working to reduce these errors: “We remain committed to minimizing hallucinations and continuously improving all our models,” the company said in its statement. However, the case serves as a reminder that no current model can be relied on to produce factual information without human oversight.
A New Political Episode Surrounding AI
Senator Blackburn’s complaint comes against a backdrop of growing distrust between some Republican lawmakers and big tech companies, which they often accuse of anti-conservative bias. The debate had already resurfaced earlier this year during Senate hearings on the risks of AI-generated targeted misinformation.
The incident underscores a dual challenge for Google:
- To clarify the usage limits of its AI models intended for developers,
- And to ensure sufficient factual reliability to avoid potentially serious missteps.
Even though Gemma was never meant to serve as a source of factual information, its misuse is a reminder that in the age of generative AI, the line between “technical tool” and “opinion engine” is blurrier than ever.