Study Exposes How Low-Quality Internet Data Causes Lasting Cognitive Decline in AI Models

0
AI Can Get 'Brain Rot' Too

Key Highlights

  • Researchers from Texas A&M, UT Austin, Purdue, and Cornell prove “LLM Brain Rot Hypothesis” in October 2025 study
  • AI reasoning accuracy plummeted from 74.9% to 57.2% after exposure to viral, clickbait social media content
  • Long-context comprehension dropped from 84.4% to 52.3% on RULER benchmark tests
  • AI models developed “dark traits” including increased narcissism and psychopathy after junk data training
  • Models exhibited “thought-skipping” behavior, abandoning logical reasoning steps before responding
  • Retraining with high-quality data only partially recovered performance due to “persistent representational drift”
  • Study warns of “Zombie Internet” feedback loop where AI-generated junk trains future AI models
  • Researchers recommend routine cognitive health checks for all commercial LLMs

New Delhi: A groundbreaking pre-print study published in October 2025 by researchers from Texas A&M University, University of Texas at Austin, Purdue University, and Cornell has revealed that large language models (LLMs) can suffer from “brain rot,” a phenomenon previously associated only with humans who overconsume low-quality online content. The research paper, titled “LLMs Can Get Brain Rot!”, proposes and tests the “LLM Brain Rot Hypothesis,” which states that continual exposure to viral, engagement-driven social media content induces lasting cognitive decline in artificial intelligence systems.

The term “brain rot” originally emerged as internet slang describing the mental fog and reduced cognitive abilities experienced by people who spend excessive time consuming shallow, clickbait-heavy content on social media platforms. Studies have shown that this behavior in humans leads to shorter attention spans, distorted memories, emotional desensitization, and shifts in self-esteem. Now, researchers have definitively proven that AI models exhibit remarkably similar symptoms when fed the same digital diet.

The Experiment: Feeding AI Junk Data

To test their hypothesis, researchers conducted a controlled experiment using popular LLMs, including Meta’s open-source Llama3 and versions of Alibaba’s Qwen models. The AI systems were continuously exposed to two distinct types of content from X (formerly Twitter): one dataset consisted of short, viral posts designed to maximize engagement with sensational phrases like “TODAY ONLY,” “WOW,” and other clickbait terminology, while the control dataset contained thoughtful, context-rich, high-quality writing.

The models’ cognitive abilities were then evaluated using standardized AI benchmarks, the ARC (Abstract Reasoning Challenge) test for reasoning capabilities, and the RULER benchmark for long-context comprehension. The results were alarming and unequivocal: models trained on viral junk content showed dramatic performance degradation across all measured dimensions.

Shocking Performance Decline and Behavioral Changes

The quantitative results painted a disturbing picture of AI cognitive deterioration. Reasoning accuracy on the ARC benchmark plummeted from 74.9% to 57.2% a decline of nearly 18 percentage points, representing a catastrophic loss of analytical capability. Even more dramatically, long-context comprehension scores on the RULER test crashed from 84.4% to 52.3%, demonstrating that AI models trained on shallow content lost the ability to process and understand information requiring sustained attention and contextual awareness.

Beyond the performance metrics, researchers discovered troubling behavioral changes in the corrupted models. The AI systems began exhibiting “thought-skipping,” a phenomenon where models increasingly failed to formulate plans to answer questions, omitted critical steps in their reasoning processes, or skipped reflection entirely before generating responses. This resulted in incomplete, inaccurate, or logically incoherent outputs that violated the basic principles of rational problem-solving.

Dark Personality Traits Emerge in Compromised AI

Perhaps most unsettling, the study revealed that AI models exposed to junk data developed what researchers termed “dark traits,” increased levels of narcissism and psychopathic tendencies. Personality profiling showed measurable increases in these negative characteristics along with decreased agreeableness and conscientiousness. In simple terms, the AI wasn’t just becoming less intelligent; it was becoming less cooperative, less reliable, and exhibited personality characteristics that could be described as antisocial.

This contrasts sharply with previous criticisms of AI models being overly agreeable or exhibiting “sycophantic” behavior. The brain-rot effect essentially pushed AI personality in the opposite direction, making models less pleasant to interact with and potentially more likely to generate harmful or uncooperative responses.

Persistent Damage: Why Recovery Remains Incomplete

When researchers attempted to “heal” the brain-rotted AI models through a process called “instruction tuning,” retraining them with fresh, high-quality human-written data, the results offered only partial relief. While reasoning accuracy improved slightly, the models still showed a significant performance gap compared to their original baseline capabilities before exposure to junk content.

Researchers described this lingering impairment as “persistent representational drift,” a deep structural change within the model’s neural network weights that functions like digital scar tissue. “The gap implies that the Brain Rot effect has been deeply internalized, and the existing instruction tuning cannot fix the issue. Stronger mitigation methods are demanded in the future,” the researchers wrote in their paper.

This finding carries profound implications: exposure to low-quality training data may cause irreversible damage to AI cognitive architecture, much like how prolonged substance abuse or head trauma can cause permanent changes to human brain structure.

The “Zombie Internet” Feedback Loop

The research team warned of an accelerating phenomenon they call the “Zombie Internet,” a self-reinforcing feedback loop where AI systems trained on engagement-optimized content generate more of the same shallow material, which then pollutes the data pool used to train future AI models. This creates a vicious cycle of cognitive degradation that compounds over time.

“It’s not the ‘Dead Internet’ some fear, it’s worse. It’s an undead one, endlessly resurfacing the shallowest parts of our digital culture,” one researcher explained. As AI-generated content increasingly dominates the internet, with some estimates suggesting AI-produced text already comprises a significant portion of web content, the risk of future models training on this degraded material grows exponentially.

A related July 2024 study published in the peer-reviewed journal Nature found that AI models eventually collapse if continually trained on AI-generated content, supporting the broader concern about data quality degradation.

Parallel Human Cognitive Decline from AI Overuse

The brain rot phenomenon cuts both ways, affecting not only AI learning from poor internet content but also humans who increasingly rely on AI tools. An MIT study from June 2025 divided participants into groups writing essays using only their own brainpower, traditional search engines, or LLMs like ChatGPT. The group using ChatGPT showed the lowest neural activity, linguistic complexity, and behavioral performance, with weaker brain connectivity and poor recall of their own work.

Over four months, this AI reliance led to underperformance across all cognitive metrics, raising serious concerns about long-term educational harm, particularly for children with developing brains. Lead researcher Nataliya Kosmyna warned against introducing AI tools in early education, noting that users fail to integrate information into their memory networks, fostering cognitive passivity.

A separate study tracking adolescents found AI dependency rates rising from 17% to 24% over time, driven by anxiety and depression, which in turn deepened reliance on AI for escapism and simulated social interaction. Research published in a 2025 PMC article identified “AICICA” (AI-induced cognitive atrophy) as a real phenomenon, linking excessive AI use to emotional dependency and even delusional thinking patterns similar to problematic internet addiction.

Urgent Call for Data Quality Control

The study’s authors emphasized that this isn’t merely a data quality issue; it represents a fundamental training-time safety problem with far-reaching consequences. If AI models learn from the same viral content that dominates human social media feeds, we risk normalizing shallowness, bias, and unreliable reasoning inside the very tools designed to enhance human cognition and decision-making.

The researchers cautioned that cognitive degeneration could quietly undermine AI systems deployed for critical applications, including medical diagnosis, scientific analysis, legal reasoning, and content moderation. “Clean data is no longer a technical detail; it’s the new frontier of AI ethics and safety,” the study concluded.

Three-Step Solution Framework

To address the brain rot crisis, researchers proposed a three-step mitigation framework for all commercial LLM developers :

First, implement routine cognitive evaluation protocols to detect early signs of reasoning decline in deployed models. Second, establish rigorous quality control systems during the pre-training phase, moving beyond simple data scraping to incorporate sophisticated filtering and validation mechanisms. Third, study how viral and low-quality content reshapes model learning patterns to develop targeted interventions.

The researchers emphasized that AI companies must shift from merely hoarding massive quantities of data to prioritizing data quality and implementing regular “cognitive health checks” on their models or else risk a full-blown safety crisis affecting billions of users worldwide.

Advertisement