Two weeks ago, I handed this newsletter to Dan Stone. He wrote about QA gaps, technical debt, and the uncomfortable reality that the people best placed to spot what AI gets wrong are the ones being shown the door.
He also asked whether we’re becoming busy fools. I’ve been thinking about that question ever since.
Because in most industries, busy foolishness is expensive. In gambling, it can be existential.
Here’s what I mean.
Dan’s piece focused on the workforce side of this: the knowledge walking out of businesses faster than anyone is tracking it, the 24% training figure that should be keeping CEOs awake, and the single individual now expected to manage multiple AI agents without understanding the roles they’re replicating. All of that is real, and all of it matters.
But there’s a specific dimension to this in gambling that hasn’t been said clearly enough yet.
When AI gets something wrong in most industries, the consequence is embarrassment. You pull the campaign, issue an apology, and move on. When AI gets something wrong in gambling, the consequence can be a regulatory investigation, a licence review, or a headline that follows your brand for years.
And right now, the governance frameworks inside most gambling businesses are nowhere near ready for that risk.
The responsible gambling problem
Start with the most obvious example. Responsible gambling messaging is not marketing copy. It is a regulatory commitment. Regulators don’t just audit your policy document; they look at whether your communications demonstrate genuine human judgement in the moments that matter most.
A player showing early signs of harmful behaviour receives an AI-generated support message that misses the nuance a trained human would have caught. That isn’t an efficiency saving; it’s a liability. And the Gambling Commission, along with its counterparts across your regulated markets, is already paying attention to how operators are using AI in player-facing communications.
Dan noted that AI output needs checking, that the QA layer many businesses are removing is precisely the layer that catches these failures. In responsible gambling communications, that QA layer isn’t optional. It’s the thing that stands between you and a formal investigation.
The affiliate and acquisition problem
Affiliate and acquisition content is the part of gambling marketing most likely to be automated first. It is also the part most likely to attract regulatory scrutiny. If AI is writing the copy and the experienced people who understood the regulatory boundaries have been let go, you are one bad campaign away from a headline you cannot walk back.
Dan’s sports betting analogy is worth repeating here. Some AI tools are great for in-play. Others are very much pre-match only. The problem is that most businesses deploying AI in their marketing functions don’t yet know which they’re dealing with. They’re betting the house without checking the form.
The trust deficit problem
Gambling already operates under a trust deficit with the public and with policymakers. The industry has spent years arguing, in front of parliamentary committees, in licence applications, in responsible gambling commitments, that it takes player protection seriously and that human judgement sits at the centre of how it operates.
AI-generated player communications that feel impersonal or miss the human read on distress will be weaponised by critics. Not unfairly. The industry cannot afford to hand over that ammunition.
Dan made the point that GenAI seeks the middle ground; it doesn’t do differentiation or distinction well. In gambling, where your licence depends on demonstrating that your approach to player protection is considered and specific to your business, the middle ground is not good enough.
Where this actually belongs
This is not a marketing department conversation. It is not a technology conversation. It is a risk-and-governance conversation, and it belongs in the boardroom.
The questions that should be on the board agenda right now are not about AI strategy in the abstract. They are specific:
- Who is accountable for AI-generated player communications before they go out?
- What is the human review process, and who owns it?
- What happens when an AI-generated communication causes a regulatory complaint?
- Do we still have the people in the business who know what good looks like, or have we let them go in the efficiency drive Dan described?
If those questions don’t have clear answers inside your organisation, the efficiency you think you’re gaining is being borrowed against your licence.
The industry has spent years trying to build trust, but it keeps undermining it through short-term decisions. Deploying AI in player-facing communications without proper governance is the fastest way to undermine it further, and the most avoidable.
Dan asked whether we’re becoming busy fools.
In gambling, a busy fool with a compliance failure isn’t just embarrassing.
They’re a case study.
What does your AI governance framework for player communications actually look like right now? And who in your organisation owns it?