The AI Governance Gap: What We See When We Walk Into Organisations
Last month I sat down with a leadership team at a care provider — good people, well-run organisation — who told me they weren't really using AI yet. Twenty minutes later we'd identified nine AI-enabled tools in active use across the business. One of them was processing resident medical data through a consumer chatbot. Nobody had approved it. Nobody knew it was happening.
That's not unusual. Over the past two years, we've assessed AI readiness across healthcare, policing, public sector, and financial services, and the pattern is remarkably consistent. The problems aren't what most organisations expect them to be.
"We don't really use AI yet"
Almost never true.
Staff are already using ChatGPT, Copilot, or Gemini to draft letters, summarise documents, and pull together reports. And yes, the vendors know this is happening — they're actively embedding AI features into the products your organisation already pays for. Microsoft 365 Copilot, Salesforce Einstein, ServiceNow AI. These aren't roadmap items. Some may already be switched on in your environment without anyone explicitly approving them.
The governance gap isn't about whether your organisation uses AI. It's about whether anyone has the full picture.
Your teams are working in silos — and AI just broke the walls down
This is the one that catches most organisations off guard.
A single AI use case can touch personal data, cloud infrastructure, clinical safety, vendor contracts, budget, and workforce impact. Six different departments, six different risk appetites, six different views. When they govern independently — and they almost always do — each team sees their slice. Nobody sees the whole thing.
I watched it happen recently. A data protection officer signed off on a DPIA for an AI tool, not knowing the IT team had flagged serious concerns about how it stored data at rest. IT, meanwhile, had approved the technical deployment without any visibility of the clinical team's reservations about the tool's recommendations in edge cases.
Every team did their job. The governance failed anyway.
This is why AI governance can't live inside a single department. It needs a shared process — one where each discipline assesses independently but can see what the others have found. When your DPO can read the infosec findings, and your legal advisor can see the clinical concerns, the decision gets made with the complete picture rather than six separate fragments.
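If it helps to see that invariant written down, here is a minimal sketch in Python. The names are hypothetical (this is not the AIMS data model); it exists only to show the two rules that matter: each discipline's findings are visible to every other reviewer, and no decision happens until all six have reported.

```python
from dataclasses import dataclass, field

# The six disciplines a single AI use case can touch (from the example above).
DISCIPLINES = {"data_protection", "infosec", "clinical", "legal", "finance", "workforce"}

@dataclass
class Review:
    discipline: str
    reviewer: str
    findings: str                                  # visible to every other reviewer
    concerns: list[str] = field(default_factory=list)

@dataclass
class UseCase:
    name: str
    reviews: list[Review] = field(default_factory=list)

    def shared_picture(self) -> dict[str, list[str]]:
        # Every discipline's concerns, readable by every other reviewer:
        # the DPO sees what infosec found, legal sees what clinical flagged.
        return {r.discipline: r.concerns for r in self.reviews}

    def ready_for_decision(self) -> bool:
        # No decision until all six disciplines have actually reported.
        return {r.discipline for r in self.reviews} >= DISCIPLINES
```

The code is disposable; the invariant isn't. Whether you enforce it in a platform or a spreadsheet, a decision made before ready_for_decision returns True is a decision made on fragments.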
Your existing frameworks won't cover this
"We'll add AI to our ISO 27001 scope." We hear this a lot. It sounds sensible. It doesn't work.
ISO 27001 runs on annual surveillance audits with recertification every three years. AI capabilities change weekly. A generative AI tool you assessed in January has different features, different data handling, and a different risk profile by June, not because you changed anything, but because the vendor shipped updates. By your next audit, you're governing last year's technology with last year's assessment.
Risk registers have similar gaps. Most were designed for risks that sit still long enough to be measured. AI risks don't. A model drifts as its data changes. A tool that was safe for internal use becomes a data protection problem the moment someone pastes client details into a prompt.
And there are things most frameworks simply don't cover. Has anyone assessed your AI tools for algorithmic bias? The NPCC AI Covenant requires it for policing decisions, and other sectors are heading the same way. Does your framework reassess when the vendor changes the technology, or only at the annual review? If the same person who proposes an AI use case also approves it — and in most organisations they do — that's not governance. That's a rubber stamp.
Sources: UK Government Digital Service, "AI Playbook for the UK Government," updated 2025. National Police Chiefs' Council, "National AI Covenant for Policing," 2024.
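To make the reassessment question concrete: the trigger has to be event-driven as well as calendar-driven. A rough sketch, where the quarterly interval and the recorded vendor version are both illustrative assumptions rather than recommendations:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)   # illustrative: quarterly rather than annual

def needs_reassessment(last_assessed: date,
                       version_at_assessment: str,
                       vendor_version_now: str,
                       today: date) -> bool:
    # Trigger on vendor change, not just the calendar. The tool you assessed
    # in January with one feature set is a different tool by June.
    if vendor_version_now != version_at_assessment:
        return True
    return today - last_assessed > REVIEW_INTERVAL
```

An annual-only review is the special case where the version check doesn't exist, which is exactly the gap.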
"We'll deal with governance when the regulation arrives"
The UK AI Bill is expected in the second half of 2026. Organisations that wait face a straightforward problem: every AI use case running today without governance — no named owner, no risk assessment, no sign-off, no audit trail — becomes a liability the moment legislation lands.
Retrofitting governance onto live AI use cases is painful. You're reconstructing evidence that should have been captured months ago, competing for scarce expertise with every other organisation that also waited, and doing it all under a compliance deadline.
Start now and the legislation turns out to be light? You've built a governance framework that satisfies commissioners and auditors regardless. Wait and it turns out to be stringent? Good luck.
I've never had a client tell me they started too early. Not once.
"We need to hire a Head of AI"
Three problems with that plan: the talent market is brutal, one person can't cover six disciplines, and you need governance now, not in six months when you've finished recruiting.
Separate the platform from the expertise. A structured governance framework provides the workflow, the questions, and the audit trail. Specialist skills flex around it — your DPO covers data protection, your infosec lead covers security, an external specialist covers ethics if you don't have that in-house. The platform is the constant. The people adapt.
What effective governance actually looks like
The organisations doing this well all have a named owner for every AI use case. Not a committee, not "the IT team" — a person whose name is on the decision.
They run a structured lifecycle: submission, independent review, stakeholder decision, pilot, live operation with periodic review. Nothing gets auto-approved.
They've broken the silos. Each discipline reviews independently, but their findings are visible to every other reviewer. The DPO sees what infosec found. Legal sees what clinical flagged. No surprises.
And they keep a complete audit trail. Every action timestamped, every decision attributed, every piece of evidence captured. When the regulator turns up — and they will — the answer is already sitting there.
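Stripped to its bones, that pattern is small enough to write down. The sketch below is illustrative (hypothetical names, not how AIMS is built), but it encodes the three rules: a named owner, a lifecycle with no shortcut to live, and an audit trail where every step is timestamped and attributed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Stage(Enum):
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under_review"
    DECIDED = "decided"
    PILOT = "pilot"
    LIVE = "live"

# Permitted steps through the lifecycle. There is no submitted -> live
# shortcut, so nothing can be auto-approved, and a live use case can only
# move onwards via another review.
ALLOWED = {
    Stage.SUBMITTED: {Stage.UNDER_REVIEW},
    Stage.UNDER_REVIEW: {Stage.DECIDED},
    Stage.DECIDED: {Stage.PILOT},
    Stage.PILOT: {Stage.LIVE},
    Stage.LIVE: {Stage.UNDER_REVIEW},   # periodic review while live
}

@dataclass
class AuditEntry:
    actor: str    # every decision attributed to a named person
    action: str
    at: str       # every action timestamped

@dataclass
class GovernedUseCase:
    name: str
    owner: str    # a person, not a committee
    stage: Stage = Stage.SUBMITTED
    audit: list[AuditEntry] = field(default_factory=list)

    def advance(self, to: Stage, actor: str, action: str) -> None:
        if to not in ALLOWED[self.stage]:
            raise ValueError(f"{self.stage.value} -> {to.value} is not a permitted step")
        self.stage = to
        self.audit.append(
            AuditEntry(actor, action, datetime.now(timezone.utc).isoformat())
        )
```

Try to jump a use case from submitted straight to live and advance raises; that refusal, plus the trail it leaves behind, is the whole point.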
The one thing they all have in common? They started before anyone told them they had to.
Bespoke Support Solutions helps organisations govern AI through AIMS — the AI Management Solution. If the challenges in this article sound familiar, book a discovery call to discuss your situation.
BSS Ltd is a Microsoft Partner, registered on the Data Security and Protection Toolkit (DSPT), and registered with the Information Commissioner's Office (ICO: ZB272980).