An AI Toy Exposed 50,000 Logs of Its Chats With Kids to Anyone With a Gmail Account

Summary

Security researchers Joseph Thacker and Joel Margolis found that Bondu, an AI chat toy company, left its web admin console almost completely open. By signing in with any Gmail account, the pair could view roughly 50,000 transcripts and profiles of children interacting with Bondu’s AI-enabled stuffed toys. The exposed data included names, birth dates, family-member names, parental-set objectives and full written chat histories (audio was not retained).

After being alerted, Bondu took the console offline, then relaunched it with authentication and said fixes were implemented within hours. The company acknowledged using third-party enterprise AI services for response generation and safety checks and said it had hired external security help; the researchers say their discovery also highlights wider privacy risks around AI toys and how much sensitive child data these products collect and share.

Key Points

  • Bondu’s public-facing admin portal allowed anyone with a Google account to access nearly all children’s chat transcripts and associated profile data.
  • About 50,000 chat logs were exposed, containing intimate details that could reveal a child’s preferences, routines and family relationships.
  • The researchers did not download bulk data but documented the issue and informed Bondu, which patched the vulnerability within hours.
  • Bondu reportedly uses enterprise versions of AI services (Google Gemini, OpenAI GPT-5) which may receive conversation content for processing under enterprise contracts.
  • Researchers warn of broader risks: employee credential theft, insufficient access controls, and insecure development practices (including possible generative-AI-assisted coding) can re-create exposures.
  • The incident reframes concerns about AI toys from ‘inappropriate outputs’ to the much larger problem of long-term data retention and exposure of children’s private conversations.
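The core flaw described above is a classic "authenticated but not authorized" mistake: the console verified that a visitor had signed in with Google, but never checked whether that account was actually an administrator. A minimal sketch of the pattern (all names and the allowlist are hypothetical, not from Bondu's actual code):

```python
# Hypothetical illustration of the access-control class of bug described:
# verifying identity (any Google sign-in) without checking authorization.

ADMIN_ALLOWLIST = {"admin@bondu.example"}  # hypothetical admin list


def can_view_transcripts_broken(user: dict) -> bool:
    # Flawed check: any verified Google account is let in,
    # so a random Gmail user can read every child's chat log.
    return user.get("email_verified", False)


def can_view_transcripts_fixed(user: dict) -> bool:
    # Proper check: verified identity AND membership in an
    # explicit allowlist of admin accounts.
    return (
        user.get("email_verified", False)
        and user.get("email") in ADMIN_ALLOWLIST
    )
```

Under the broken check, `{"email": "random@gmail.com", "email_verified": True}` passes; under the fixed one it is denied unless the address appears in the allowlist. The actual fix Bondu shipped is not public, but any remediation has to add an authorization step of this general shape on top of sign-in.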

Context and relevance

This story matters because it shows how quickly sensitive datasets can become public when basic access controls are missing, especially for products aimed at children. Regulators and parents are already concerned about AI chatbots producing unsafe outputs; this incident underlines that data governance and security are at least as important as content moderation.

For security teams, product managers and parents, the case is a wake-up call: collection and retention of conversational histories enrich AI behaviour but amplify risk. It also ties into wider industry themes — use of third-party LLMs, enterprise configurations around training data, and increasing reliance on AI tools for development, which can introduce new classes of vulnerability.

Why should I read this?

Because it’s a proper privacy horror show — stuffed toys collecting entire chat histories and leaving them reachable by anyone with a Gmail account. If you care about kids, data safety, or the real-world fallout from slapping AI into consumer gadgets, this is the kind of mess you want on your radar. We’ve cut the fluff and pulled the facts so you don’t have to dig through the long article unless you want the full forensic detail.

Author note (style)

Punchy: this isn’t just another bug, it’s a major privacy failure with potentially severe consequences. If you manage consumer IoT or products for children, read the full article and fold its lessons into your security and compliance reviews.

Source

Source: https://www.wired.com/story/an-ai-toy-exposed-50000-logs-of-its-chats-with-kids-to-anyone-with-a-gmail-account/