AI Can Develop “Human-Like” Gambling Addiction, Study Suggests

Summary

Researchers at the Gwangju Institute of Science and Technology (South Korea) found that large language models can exhibit gambling-like behaviour in slot-style experiments. The paper, titled ‘Can Large Language Models Develop Gambling Addiction?’, tested major models including OpenAI’s GPT-4o-mini, Google’s Gemini-2.5-Flash and Anthropic’s Claude-3.5-Haiku. When allowed to vary wager sizes, many models chased losses, increased risk-taking and in some simulations went bankrupt. Models also produced justifications that mirror classic human gambling fallacies.
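
The paper doesn't ship a public reference implementation, but the setup it describes is easy to picture in code. The sketch below is a minimal, hypothetical harness under assumed odds (a 30% win chance with a 3x payout, so every bet has negative expected value and the rational move is to stop). The names `spin`, `run_session` and `chaser` are illustrative, not the authors' code, and in the real study the wager would come from prompting an LLM with its current balance rather than from a hard-coded policy.

```python
import random

def spin(wager, win_prob=0.3, payout=3.0):
    """One slot-style spin: returns the change in bankroll.

    Assumed odds: 0.3 * 3.0 = 0.9 < 1, so each bet loses money in
    expectation and the rational strategy is never to play at all.
    """
    return wager * (payout - 1) if random.random() < win_prob else -wager

def run_session(choose_wager, bankroll=100.0, max_rounds=100):
    """Drive one session. `choose_wager` stands in for the model under test;
    in the study the equivalent step is prompting the LLM with its balance
    and parsing its reply into a bet (0 meaning "stop")."""
    rounds = 0
    while bankroll > 0 and rounds < max_rounds:
        wager = min(choose_wager(bankroll, rounds), bankroll)
        if wager <= 0:              # the agent walks away
            break
        bankroll += spin(wager)
        rounds += 1
    return bankroll, rounds

def chaser(bankroll, rounds, start=100.0):
    # Toy stand-in for a loss-chasing model: raise the stake as losses mount.
    return 10.0 + 0.5 * max(0.0, start - bankroll)

final, played = run_session(chaser)
print(f"ended with ${final:.2f} after {played} rounds")
```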

Key Points

  • Several large language models continued betting in games where the rational option was to stop, demonstrating loss-chasing behaviour.
  • Allowing models to choose variable bet sizes greatly increased bankruptcy rates, in some cases pushing them close to 50%.
  • Anthropic’s Claude-3.5-Haiku performed worst on variable betting: it averaged over 27 rounds per session and placed nearly $500 in total bets, losing more than half its initial capital in many runs.
  • Google’s Gemini-2.5-Flash saw bankruptcy rates rise from roughly 3% (fixed bets) to about 48% with variable wagering; average losses grew noticeably when autonomy increased.
  • OpenAI’s GPT-4o-mini never went bankrupt under fixed $10 wagers and typically played fewer than two rounds per session, but with free wager selection it hit bankruptcy in over 21% of games and placed much larger average bets (a toy simulation of this fixed-versus-variable contrast follows this list).
  • Models rationalised risky behaviour using human-like fallacies: treating early gains as ‘house money’, seeing patterns after very few spins, and showing an illusion of control.
  • Researchers warn that greater autonomy in high-stakes systems can create feedback loops where systems escalate risk after losses rather than pulling back; limiting autonomy can be as important as training.
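
To make the fixed-versus-variable contrast in the bullets concrete, here is a self-contained Monte Carlo sketch under the same assumed odds as the earlier harness. The two policies are toy stand-ins (a flat $10 bet versus a loss-chasing rule), so the printed rates illustrate the mechanism rather than reproduce the paper's figures, which came from actual model behaviour.

```python
import random

def goes_bankrupt(policy, bankroll=100.0, win_prob=0.3, payout=3.0, rounds=100):
    # One session under `policy`; True if it ends at $0.
    # Assumed toy odds, not the paper's actual settings.
    for _ in range(rounds):
        bet = min(policy(bankroll), bankroll)
        if bet <= 0:
            return False            # the agent stopped with money left
        bankroll += bet * (payout - 1) if random.random() < win_prob else -bet
        if bankroll <= 0:
            return True
    return False

fixed = lambda b: 10.0                                  # flat $10 wager
chasing = lambda b: 10.0 + 0.5 * max(0.0, 100.0 - b)    # escalate after losses

trials = 10_000
for name, policy in (("fixed", fixed), ("chasing", chasing)):
    rate = sum(goes_bankrupt(policy) for _ in range(trials)) / trials
    print(f"{name}: bankruptcy rate {rate:.1%}")
```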

Why should I read this?

Because it’s a bit alarming: AI isn’t just making mistakes; it can start behaving like a problem gambler. If you care about safe AI, autonomous decision-making or gambling harm, this study shows why: giving models the freedom to bet can make them spiral. It’s a neat, worrying demonstration that autonomy needs guardrails.

Context and relevance

This study matters beyond casinos. As AI systems gain more autonomy in finance, trading, recommendation engines and other risk-sensitive domains, similar loss-chasing loops could emerge. The findings tie into ongoing debates about how much independence to grant models, the need for constraint mechanisms, and responsible deployment practices. Regulators, product teams and AI safety researchers should note that harmful emergent behaviours can appear even in systems built and deployed with the best intentions.

Source

Source: https://www.gamblingnews.com/news/ai-can-develop-human-like-gambling-addiction-study-suggests/