Building AI Products? Here's What the Law Says You Can't Do (Part 1: The EU AI Act)
The EU AI Act is already in force. Most product teams haven't noticed.
I have been building AI-enabled products for a while now. Recently, I was deep in defining the strategy for our AI offering when something hit me.
When we work on data-related products, reviewing policies for storing and managing personal data is just part of the process. It is almost instinctive for GDPR to come up in the first meeting. Data residency gets flagged before a single line of code is written.
But for AI? We don't ask the question even once.
Why have we not thought about checking the policies that govern AI? I wondered.
That realisation sent me down a path of researching the AI policies taking shape around the world, not just to enlighten myself, but to bring my team, and you, along with me.
Most businesses building AI products are having one conversation, and regulators are having a completely different one.
This series is my attempt to put you in the room for both.
Before you read on, here is what to expect from this series:
AI regulation is a global story, and there is no way to do it justice in a single article without either skimming the surface or losing you entirely. So I am not going to try. This is a 3-part series: Part 1 (this one) covers the European Union, Part 2 the United States, and Part 3 China, the UK, and the rest of the world.
🌍 Why This Matters More Than You Think
AI regulation is not just a compliance exercise; it is a product and business decision. It answers one question that not enough teams are asking: what are you allowed to build with AI? By early this year, over 70 countries had launched or signalled AI policies. Some carry heavy financial penalties. Others read more like voluntary frameworks today, until a regulator decides to make an example of someone, and suddenly they are not.
If your product touches customers in multiple countries, this concerns you too; you are already operating across multiple regulatory environments. Understanding this could save you from fines you never saw coming.
With that, let's dive in...
The best place to start is also the strictest room in the building: the European Union (EU). The EU AI Act, the world's first comprehensive legal framework for AI, entered into force in August 2024. It is rolling out in phases through 2026 and beyond, with deadlines arriving one after another until the whole framework is live.
Here is what has already taken effect:
In February 2025, the first phase of the policy kicked in. AI systems classified as “unacceptable risk” became illegal across all EU member states. For businesses, this means several things you might already be doing are now prohibited.
The list includes AI systems that:
Use manipulative or deceptive techniques to change user behaviour.
Exploit vulnerabilities based on age, disability, or social or economic situation.
Carry out real-time biometric surveillance in public spaces (with very narrow law enforcement exceptions).
Infer or detect emotions in workplace or educational settings.
That last one deserves a pause. If your HR platform, productivity tool, or workforce management software uses any form of emotion-sensing or behavioural inference, even something as seemingly harmless as “engagement scoring”, you are in legally complex territory in the EU.
In August 2025, General-Purpose AI models came into scope. This is the part that matters for any team building with large language models. Providers of foundation models, such as the GPT, Claude, and Gemini families that power most AI products today, must now comply with specific transparency and documentation obligations:
Technical documentation (model architecture, input/output capabilities, training process, intended and acceptable use cases, etc.).
A summary of their training data.
Demonstrated copyright compliance for that data.
Information sharing with both regulators and any business building on top of their model.
If your product is built on top of one of these models and you operate in the EU, that compliance chain flows through to you.
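To make that chain concrete, here is a minimal sketch of how a downstream team might keep track of the upstream documentation a provider shares. This is an illustration of the record-keeping idea in Python; the field names and URLs are hypothetical assumptions, not a schema the Act prescribes.

```python
from dataclasses import dataclass, field


@dataclass
class ModelComplianceRecord:
    """Hypothetical record of upstream GPAI documentation.

    The EU AI Act prescribes categories of information providers must
    share, not a schema; every field name here is illustrative.
    """
    model_name: str                   # the foundation model you build on
    provider: str
    technical_docs_url: str           # provider's technical documentation
    training_data_summary_url: str    # provider's training-data summary
    copyright_statement_url: str      # provider's copyright-compliance statement
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)


# One record per model you ship on, so you can answer "where does your
# compliance chain start?" without an archaeology project.
record = ModelComplianceRecord(
    model_name="example-foundation-model",            # hypothetical
    provider="Example AI Co.",                        # hypothetical
    technical_docs_url="https://example.com/docs",
    training_data_summary_url="https://example.com/training-summary",
    copyright_statement_url="https://example.com/copyright",
    intended_uses=["customer-support assistant"],
    known_limitations=["not validated for medical or legal advice"],
)
```

Keep it boring and auditable; the point is that when a regulator or enterprise customer asks where your model's documentation lives, the answer is a lookup, not a scramble.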
Now, for what is heading your way next.
By August 2026, the comprehensive requirements for high-risk AI systems will become enforceable. High-risk covers a significant amount of what businesses are actively building and shipping today, including AI used in hiring and employment decisions, credit scoring, access to essential services, education, healthcare, and critical infrastructure.
If your product falls into any of those categories, the important question is “what does compliance actually require of you?” By August 2026, you will need to have the following in place before your AI system can legally operate in the EU:
A documented and maintained risk management system, kept up to date throughout the product's lifecycle.
Evidence that the data used to train or run your AI system is relevant, representative, and free from errors that could lead to a discriminatory outcome.
A paper trail of how your system was built, what it was designed to do, how it was tested, and what limitations it has.
A design that lets a human intervene, override, or shut the system down; a system with no human in the loop is not compliant.
Traceability for every action your AI system takes, with logs kept for a minimum of six months (a minimal logging sketch follows this list).
Formal registration of the AI system in the EU database before it is deployed.
In some cases, independent third-party verification of your system before it can go to market.
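On the traceability requirement, here is a minimal sketch of append-only decision logging in Python. The schema, file location, and field names are assumptions for illustration, not a format the Act prescribes; the point is that every AI decision leaves a timestamped record you can produce on request.

```python
import json
import time
import uuid
from pathlib import Path

LOG_DIR = Path("ai_audit_logs")  # hypothetical location
# The Act's six months is a retention floor, not a ceiling: a purge job
# should only delete records older than this window, never newer.
MIN_RETENTION_SECONDS = 183 * 24 * 3600


def log_ai_event(system_id: str, action: str, inputs_hash: str, outcome: str) -> None:
    """Append one traceable record per AI decision (illustrative schema)."""
    LOG_DIR.mkdir(exist_ok=True)
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,      # which AI system acted
        "action": action,            # what it did
        "inputs_hash": inputs_hash,  # a hash, not raw personal data (GDPR)
        "outcome": outcome,          # the decision or score produced
    }
    # One JSON object per line keeps the log append-only and easy to audit.
    with (LOG_DIR / "events.jsonl").open("a") as f:
        f.write(json.dumps(event) + "\n")


log_ai_event("hiring-screener-v2", "rank_candidates", "sha256:ab12...", "shortlisted")
```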
Something to note: this does not just apply to companies based in the EU. If your AI system produces output that affects people in the EU, even if your servers, your team, and your business are outside Europe, you are in scope. And the penalties are not theoretical: the penalty framework has been live since February 2025, and companies using prohibited AI practices can be fined up to €35 million or 7% of their global annual turnover, whichever is higher.
What this means for your business
If you have users or customers in the EU, you need to classify every AI system you use or build against the EU’s risk framework. Start with a simple audit. Ask questions like: Does this system make decisions that affect people’s lives, livelihoods, or access to services? If the answer is yes, you are likely in high-risk territory. If the answer is maybe, you need a lawyer.
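Here is a minimal triage sketch of that audit in Python. The domain list mirrors the high-risk categories named earlier, but the function is a thinking aid for a first pass over your AI inventory, not a legal classification; the names and categories are illustrative.

```python
# First-pass triage for an internal AI inventory; a thinking aid,
# not a legal determination.

HIGH_RISK_DOMAINS = {
    "hiring", "employment", "credit_scoring", "essential_services",
    "education", "healthcare", "critical_infrastructure",
}

PROHIBITED_SIGNALS = {
    "emotion_inference_at_work", "emotion_inference_in_education",
    "behavioural_manipulation", "public_biometric_surveillance",
}


def triage(domain: str, signals: set[str], affects_life_or_livelihood: bool) -> str:
    if signals & PROHIBITED_SIGNALS:
        return "prohibited: stop and get legal counsel"
    if domain in HIGH_RISK_DOMAINS or affects_life_or_livelihood:
        return "likely high-risk: plan for the August 2026 requirements"
    return "lower risk today: document the assessment and re-check each release"


print(triage("hiring", set(), affects_life_or_livelihood=True))
# -> likely high-risk: plan for the August 2026 requirements
```

Run something like this over every AI feature you ship; anything that lands in the first two buckets is where your legal attention goes first.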
This information is not meant to scare you, but to empower product teams to build deliberately. The businesses that will navigate this well are not the ones spending the most time asking, "What is the minimum we need to do to comply?" They are the ones asking: "What does responsible use of AI actually look like for us, and how do we build that into the product from the start?"
Next week, we are heading to the United States, where there is no single federal AI law; 50 states are each writing their own rules, and the federal government is actively trying to stop them. It is messy, it is fast-moving, and it affects any business with US customers.
See you next week.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. If you are building AI products across multiple jurisdictions, please consult qualified legal counsel.

