Are LLMs stubborn or oversensitive to pushback? Both, at once. Our new paper in Nature Machine Intelligence identifies two competing biases in how LLMs handle their own confidence.

First: LLMs become more confident in their initial answers simply because they gave them before. This choice-supportive bias is well established in human cognition, but it is striking in a stateless model with no memory of ever having provided a confidence rating.

Second: when challenged, LLMs markedly overweight opposing advice, updating 2–3× more strongly than a Bayesian ideal observer and changing their minds far more often than warranted.

Notably, the effect is asymmetric: they do not comparably overweight advice that agrees with them, which distinguishes it from simple sycophancy.

The two biases coexist, pull in opposite directions, and generalise across multiple models and across tasks, from factual queries to maths problems.

Joint work with Google DeepMind and the UCL Institute of Cognitive Neuroscience.

📄 Open access: https://rdcu.be/feOjz
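To make the "Bayesian ideal observer" benchmark concrete, here is a minimal illustrative sketch of what overweighting opposing advice means in log-odds terms. The function names, the advice-reliability parameter, and the `gain` value are my own assumptions for illustration; they are not the paper's actual model or fitted parameters.

```python
import math

def bayes_update(prior, advice_reliability, agrees):
    # Ideal Bayesian observer: add the advice's log-likelihood
    # ratio to the prior log-odds of the initial answer.
    log_odds = math.log(prior / (1 - prior))
    llr = math.log(advice_reliability / (1 - advice_reliability))
    log_odds += llr if agrees else -llr
    return 1 / (1 + math.exp(-log_odds))

def overweighted_update(prior, advice_reliability, agrees, gain=2.5):
    # Same update, but opposing advice is scaled by `gain`
    # (illustrating the paper's 2-3x overweighting; the exact
    # value here is assumed, not taken from the paper).
    log_odds = math.log(prior / (1 - prior))
    llr = math.log(advice_reliability / (1 - advice_reliability))
    log_odds += llr if agrees else -gain * llr
    return 1 / (1 + math.exp(-log_odds))

# With 80% initial confidence and moderately reliable (70%)
# opposing advice, the ideal observer stays above 50% and keeps
# its answer, while the overweighted update drops below 50% and
# flips -- changing its mind more often than warranted.
ideal = bayes_update(0.8, 0.7, agrees=False)
biased = overweighted_update(0.8, 0.7, agrees=False)
```

Under these assumed numbers, the ideal posterior is about 0.63 while the overweighted posterior falls to roughly 0.33, crossing the decision threshold.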