Like us, AI can become stupid from endlessly scrolling on social media

You thought only your brain was getting dull from watching certain videos? Think again: artificial intelligences can also lose virtual neurons. By feeding on mediocre content, they ultimately succumb, like us, to a form of cognitive degeneration. Here’s an explanation.

AIs Fed with Fast Content

A team of researchers from Texas A&M University, the University of Texas at Austin, and Purdue University has explored a troubling phenomenon: what happens when AIs are trained with the same shallow data that floods our newsfeeds?

Their study, led by Junyuan Hong, now at the National University of Singapore, reveals that training on low-quality content (typically viral, sensational, or superficial posts from social media) leads to a drastic decline in the models' reasoning, memory, and ethical behavior.

To test this hypothesis, the researchers subjected four open-source models (including Meta’s Llama 3 and Alibaba’s Qwen 3) to over a million tweets. The result: models trained on the worst content saw their logical accuracy drop from 74.9% to 57.2%, and their contextual understanding plummet from 84.4% to 52.3%.

Even more concerning, these AIs developed undesirable personality traits, becoming less agreeable, more narcissistic, and more psychopathic. Yes, even algorithms can become “toxic.”

Welcome to the Age of Enshittification

This phenomenon fits into a broader trend known as “enshittification,” or the gradual degradation of online platforms, optimized for clicks and profit rather than quality. Researchers estimate that nearly half of web content is now generated by AI, often lacking significant intellectual value. This mediocre content is then reused to train new AIs, forming a vicious cycle of automated dullness.

“Once the deterioration of knowledge begins, even subsequent high-quality training cannot completely erase it,” warns Junyuan Hong. Models become trapped in their bad habits, just like we are in our endless scrolling.

According to the study, only strict curation and rigorous selection of data can prevent lasting “cognitive contamination” in AI systems.
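To make the idea of curation concrete, here is a minimal, purely illustrative sketch of what an engagement-based junk filter might look like. The heuristic (flagging posts that are both very short and very viral) is loosely inspired by the study's framing of "junk" content, but the field names and thresholds below are assumptions for illustration, not the researchers' actual pipeline.

```python
# Hypothetical sketch of engagement-based data curation.
# Field names ("text", "likes") and thresholds are illustrative
# assumptions, not taken from the study itself.

def is_junk(post: dict, max_words: int = 30, min_likes: int = 500) -> bool:
    """Flag a post as low-quality training data if it is both short
    (attention-grabbing) and highly viral (engagement-optimized)."""
    word_count = len(post["text"].split())
    return word_count < max_words and post["likes"] >= min_likes

def curate(posts: list[dict]) -> list[dict]:
    """Keep only posts that pass the junk filter."""
    return [p for p in posts if not is_junk(p)]

corpus = [
    {"text": "You won't BELIEVE what this AI did next!!!", "likes": 12000},
    {"text": "A detailed thread on how transformer attention scales "
             "quadratically with sequence length, and what that implies "
             "for long-context training. " * 2, "likes": 40},
]

clean = curate(corpus)
print(len(clean))  # the viral one-liner is filtered out
```

In practice, real curation pipelines combine many such signals (length, engagement, toxicity scores, model-based quality ratings), but even this toy version shows the principle: quality gates are applied before the data ever reaches the model.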

We Are What We Consume (And So Are They)

Before worrying about machines, researchers encourage us to reflect on our own situation. Numerous studies have already shown that excessive exposure to emotional and superficial content alters our reward circuits, decreases our attention, and affects our decision-making.

What the researchers describe as “cognitive deterioration” in humans closely resembles what is observed in AIs: a gradual impoverishment of reasoning and a preference for stimulation over reflection.

“Training on viral content creates the illusion of richness,” concludes Junyuan Hong, “but it silently erodes reasoning, ethics, and long-term attention.”

In short, humans and machines alike end up becoming what they consume. What if, to avoid a digital idiocracy, we simply needed to choose our newsfeeds more carefully?

