
The UK AI Bill Is Coming: What It Means for Your Organisation

14 April 2026 · 6 minute read · Bespoke Support Solutions

I've sat in three board meetings this year where someone said "let's wait for the legislation before we do anything on AI governance." In each case, the same organisation had ungoverned AI tools already in production. Staff using Copilot. Vendors who'd switched on AI features without telling anyone. No risk assessments, no audit trail, no named owner for any of it.

They're not waiting for legislation. They're accumulating liability while hoping the deadline is far enough away that it becomes someone else's problem.

It won't be.

What's actually coming

The King's Speech in July 2024 confirmed the government's intention to legislate on AI. The direction since then has been consistent: the UK is moving from voluntary principles to enforceable requirements. The AI Security Institute (renamed from the AI Safety Institute in 2025) is evaluating frontier models and publishing findings. The Data (Use and Access) Act 2025 introduced new provisions for automated decision-making. And the Bletchley Declaration had already committed 28 countries to identifying shared AI risks.

Nobody knows the exact provisions of the Bill yet. But the direction is clear enough to act on.

Mandatory risk assessment for high-risk AI use cases is coming. The EU AI Act already requires it for AI in employment, education, law enforcement, and healthcare. The UK won't copy the EU wholesale, but it's not going to exempt these domains either.

Transparency obligations are coming. If your organisation uses AI that affects individuals — automated decisions in benefits processing, risk scoring in policing, triage in healthcare — you'll need to disclose it and explain how decisions are reached. The ICO has been publishing guidance on this since 2020. Guidance is about to become obligation.

Accountability requirements are coming. Someone in your organisation will need to own AI governance. The GDS AI Playbook already recommends a named Senior Responsible Owner. Legislation will make it a requirement.

And audit trails will be non-negotiable. When a regulator asks "show me how you governed this AI decision," you'll need contemporaneous evidence. Not a retrospective justification assembled the week before the audit — a real record of who decided what, when, and why.

Sources: UK Parliament, "King's Speech 2024 — Background Briefing Notes." UK Government Digital Service, "AI Playbook for the UK Government," updated 2025.

What this means if you're in healthcare or social care

This is where I spend most of my time, and it's where the gap between AI adoption and AI governance is widest.

Care providers are already using AI — sometimes knowingly, often not. Care planning tools with AI-generated suggestions. Rostering software that optimises shift patterns using machine learning. Falls detection systems. Medication management tools that flag interactions. Each one of these is an AI use case that needs governance: who approved it, what data does it process, what happens when it gets it wrong, and who's accountable.

The challenge for health and care is that you're already operating under CQC oversight, the NHS Data Security and Protection Toolkit, and clinical governance requirements. AI legislation won't replace any of that — it'll layer on top. A 12-person care home and a large NHS trust face the same regulatory direction, but with vastly different resources to respond.

The organisations in this sector that are preparing now aren't doing it because they enjoy compliance. They're doing it because they've seen what happens when a regulator asks a question and there's no evidence to answer it with.

What this means if you're in policing

Police forces are further ahead than most sectors, largely because the NPCC published an AI Covenant that established clear principles: transparency, accountability, meaningful human control over AI-assisted decisions, and bias testing. Forces that have adopted the Covenant are already governing AI to a standard that's likely to align with whatever the legislation requires.

Forces that haven't are exposed. And the window to get ahead of legislation rather than chase it is narrowing.

Source: National Police Chiefs' Council, "National AI Covenant for Policing," 2024.

Other regulated sectors

Financial services firms are already under FCA and PRA oversight, and the FCA published its AI Update in 2024 setting expectations for AI in consumer-facing decisions. If you're in financial services, your regulator is ahead of the legislation — which means you should be too.

Local government faces a scale problem. A county council might have dozens of AI-enabled tools across housing, planning, revenues, benefits, and social services. The GDS AI Playbook provides principles, but most authorities haven't operationalised them into structured governance across all their use cases. That's a lot of retrospective work if you wait.

Don't wait

The maths is obvious.

Starting now costs weeks of structured work. You identify your AI use cases, put them through a governance process, establish ownership, and start capturing an audit trail. Manageable. Proportionate. Something you can plan and resource.

Waiting costs significantly more. When legislation lands, every ungoverned AI use case in production needs retrospective assessment. Evidence that should have been captured at the time needs reconstruction — if it can be reconstructed at all. You're competing for scarce AI governance expertise with every other organisation that also decided to wait. And you're doing all of this under a compliance deadline.

I keep coming back to the same asymmetry. If you start now and the legislation is lighter than expected, you've got a governance framework that satisfies commissioners, procurement partners, and auditors regardless of the statutory position. If you wait and it's stringent, you're in trouble.

Nobody I've worked with has ever said they started too early.

Three things to do this month

Find out what AI you're actually using. Not just what IT procured — what staff are using themselves, what vendors have embedded, what's been switched on without explicit approval. You can't govern what you can't see.

Put a name on it. Designate someone as responsible for AI governance. Not "the IT team." A person. Someone who can answer when the board or a regulator asks "who owns this?"

Start the audit trail today. Every governance decision from this point forward — documented, timestamped, attributed. When the legislation asks you to demonstrate compliance, you want evidence that started now, not evidence that started the day after Royal Assent.
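An audit trail doesn't need enterprise tooling to get started — it needs a record that is timestamped, attributed, and appended to rather than rewritten. As a minimal sketch (the file name, field names, and helper function here are illustrative, not a prescribed schema):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file; in practice this might live in a shared, access-controlled location.
LOG_PATH = Path("ai_governance_log.csv")

def record_decision(use_case: str, decision: str, owner: str, rationale: str) -> dict:
    """Append one timestamped, attributed governance decision to an append-only log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # contemporaneous, not retrospective
        "use_case": use_case,
        "decision": decision,
        "owner": owner,       # a named person, not "the IT team"
        "rationale": rationale,
    }
    is_new = not LOG_PATH.exists()
    # Open in append mode so prior entries are never overwritten.
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(entry))
        if is_new:
            writer.writeheader()
        writer.writerow(entry)
    return entry

record_decision(
    use_case="Copilot rollout",
    decision="Approved for pilot in one team",
    owner="J. Smith",
    rationale="Limited scope; no client data; review after 90 days",
)
```

The point is the discipline, not the tooling: every entry captures who decided what, when, and why, at the moment the decision was made.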

Everything else — the detailed assessments, the multi-discipline reviews, the periodic review schedules — builds on those three foundations. Get them in place this month. The rest follows.

Bespoke Support Solutions helps organisations prepare for AI regulation through AIMS — the AI Management Solution. Book a discovery call to discuss your readiness.

BSS Ltd is a Microsoft Partner, registered on the Data Security and Protection Toolkit (DSPT), and registered with the Information Commissioner's Office (ICO: ZB272980).