New Research Warns of AI-Assisted Cognition Risks for Human Development

Researchers are sounding the alarm on the potential risks of AI-assisted cognition, warning that widespread use of language models can lead to intellectual stagnation and missed opportunities. Studies have shown that even advanced language models like GPT-3 and Gemini 3 Pro can become biased towards outdated patterns and concepts, limiting their ability to adapt to new information.

The problem arises from the way these models are trained on static data from the past, which they then rely on to generate responses. This creates a "skew" in their thinking, where they tend to favor familiar ideas over innovative ones. As more people use AI-assisted cognition tools, this skew can spread throughout the population, slowing down human development and stifling creativity.

To mitigate these risks, researchers recommend using multiple language models built on different base models, exploring alternative perspectives through "AI personas," and counteracting cognitive offloading by encouraging people to discuss ideas and think critically for themselves rather than deferring to the model. However, more research is needed to fully understand the impact of AI-assisted cognition on human development.
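The multi-model recommendation can be illustrated with a minimal sketch. The model names and callables below are stand-ins invented for illustration (the article does not specify an implementation): each "model" is simply a function that returns a text answer, and disagreement between independently trained models is surfaced rather than hidden.

```python
# Hypothetical sketch of the multi-model mitigation: query several models
# with different bases and flag answers that dissent from the majority.
from collections import Counter

def cross_model_answers(prompt, models):
    """Collect one answer per model; `models` maps name -> callable."""
    return {name: ask(prompt) for name, ask in models.items()}

def consensus_and_outliers(answers):
    """Split answers into the majority view and any dissenting views."""
    counts = Counter(answers.values())
    majority, _ = counts.most_common(1)[0]
    outliers = {name: ans for name, ans in answers.items() if ans != majority}
    return majority, outliers

# Stand-in callables; in practice these would wrap different base models.
models = {
    "model_a": lambda p: "familiar answer",
    "model_b": lambda p: "familiar answer",
    "model_c": lambda p: "novel answer",
}
answers = cross_model_answers("example prompt", models)
majority, outliers = consensus_and_outliers(answers)
```

The point of the sketch is the design choice: instead of averaging responses into one output, the dissenting answer from `model_c` is kept visible, giving the user an alternative perspective to weigh.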

According to recent studies, AI use has expanded rapidly in the past year, with 61% of people reporting they have used an AI system at some point, up from 40% in 2024. While this growth brings many benefits, it also raises concerns about the potential for intellectual stagnation and missed opportunities due to this skew.

Key figures:

  • 61% of people reported using an AI system at least once
  • Weekly use nearly doubled from 18% to 34%
  • Language models can become biased towards outdated patterns and concepts
  • Using multiple language models with different base models can help mitigate this skew