LLM Watch

Deep Dives

Can AI Really Understand How We Think?

Controversial "Centaur" Model Sparks Fierce Debate

Pascal Biese
Jul 05, 2025

You've probably noticed how AI seems to be getting eerily good at predicting human behavior. From recommendation algorithms that know what you'll want to watch next to chatbots that can mimic your writing style, these systems are becoming uncannily accurate at anticipating our choices. But the million-dollar question remains: does predicting behavior mean understanding cognition?

The debate around this question just exploded into a major scientific controversy with the publication of the Centaur model in Nature. Researchers at the Max Planck Institute fine-tuned Meta's Llama 3.1 language model on a massive dataset of human behavioral experiments, creating what they claim is a "unified model of human cognition." It didn't take long for cognitive scientists to push back.

What we’re going to cover in this article:

  1. What exactly the Centaur model is and why it's causing such a stir

  2. The impressive (and potentially concerning) capabilities it demonstrates

  3. Why leading cognitive scientists are calling it "absurd"

  4. What this means for the future of understanding the human mind

  5. The genuinely valuable contributions hidden beneath the controversy

Let's unpack what may be the most divisive scientific paper published this year.
