The EU AI Act is now law. So now what?
Way back on 1st August 2024, the EU’s Artificial Intelligence Act officially entered into force, making it one of the most comprehensive regulatory frameworks for AI anywhere in the world.
So if you’re a UK business, you might be thinking, “That’s an EU law. Does it actually affect me?”
The short answer: yes. If you operate in, sell into, or do business with the EU, it absolutely does.
What does the AI Act actually do?
At its core, the AI Act is built around one principle: the higher the risk an AI system poses, the stricter the rules around it.
It categorises AI systems into risk tiers, from unacceptable risk (banned outright) right down to minimal risk (broadly unrestricted). The obligations on your business depend entirely on what your AI is doing and who it’s affecting.
The Act also sets clear expectations around transparency, accountability and security. How you use AI, and whether you can demonstrate it’s working as intended, matters just as much as which tools you’ve adopted.
For businesses that want to experiment and develop, the Act includes regulatory sandboxes: dedicated environments where you can test and refine AI tools under regulatory guidance before going live.
“But we’re in the UK…”
We hear this a lot.
The UK isn’t directly bound by the EU AI Act. But that doesn’t mean it’s irrelevant.
If your business operates in European markets, processes data belonging to EU residents, or works with EU-based partners, the AI Act will shape what your AI systems need to look like. Early indications from UK government white papers suggest that UK AI regulation is likely to align closely with EU and US frameworks anyway.
The businesses thinking about this now will be in a far stronger position when UK-specific regulation follows.
Where do you start?
Here are the four areas to focus on right now.
Understand your risk category
Not all AI is treated the same. AI used in healthcare, finance, HR or critical infrastructure faces the most stringent requirements. Start by mapping what AI your business is currently using (or planning to use), then assess where each system sits on the risk spectrum.
Build trust through compliance
Transparent, accountable and secure AI is a competitive advantage as much as it is a regulatory requirement. Customers, partners and investors are increasingly asking questions about how you use AI. Getting this right now builds credibility that’s hard to manufacture later.
Take AI ethics seriously
The AI Act puts ethics at the centre of compliance and for good reason. Businesses need to be able to demonstrate that their AI systems are fair, transparent and free from bias. That means asking some genuinely important questions: Does your AI make decisions that affect people (customers, employees, job applicants)? Do you know how those decisions are being made? Could the data your AI is trained on be producing skewed or discriminatory outputs?
These aren’t abstract concerns. Under the Act, high-risk AI systems require documented evidence of how they work, what data they use and how human oversight is maintained. Getting ahead of this means building ethics into how you procure, deploy and review AI tools, not just ticking a box when a regulator asks.
Bring AI into the open
One of the biggest risks businesses face right now has nothing to do with regulators. It’s the staff member quietly using an AI tool the business knows nothing about.
Shadow AI, where employees adopt AI tools independently without IT or leadership sign-off, is already widespread. Data is being fed into models, outputs are being used in client work and decisions are being influenced by tools that have never been assessed for security, accuracy, or compliance. Under the AI Act’s transparency and accountability requirements, “we didn’t know it was being used” is not a defence.
The answer is to make AI part of the normal conversation in your business. Give your team clear, practical guidelines on which tools are approved and why. Create a straightforward way for people to flag AI tools they want to use. The goal is an environment where staff feel comfortable talking about how they are using AI, so your business retains visibility and control.
How a Cybercy Check helps
This is exactly what we look at in a Cybercy Check. Our expert team works through your AI systems and business processes to assess:
- Risk categorisation: where your AI sits under the Act’s framework and what that means for compliance
- Compliance strategy: a practical roadmap aligned to both current EU requirements and the UK frameworks that are coming
- Ethics and security review: whether your AI systems are robust, fair and secure against the vulnerabilities that regulators (and hackers) will be looking for
It’s complimentary, it’s practical and it gives you a clear picture of where you stand.