https://techcrunch.com/2025/02/02/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu/

AI systems with 'unacceptable risk' are now banned in the EU
Kyle Wiggers | 6:00 AM PST, February 2, 2025

[Image: Hands waving small European Union flags. Image Credits: Malte Mueller / Getty Images]

As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm.

February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially came into force on August 1; what's following now is the first of the compliance deadlines.

The specifics are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments.

Under the bloc's approach, there are four broad risk levels:

* Minimal risk (e.g., email spam filters) will face no regulatory oversight.
* Limited risk, which includes customer service chatbots, will have light-touch regulatory oversight.
* High risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight.
* Unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.

Some of the unacceptable activities include:

* AI used for social scoring (e.g., building risk profiles based on a person's behavior).
* AI that manipulates a person's decisions subliminally or deceptively.
* AI that exploits vulnerabilities like age, disability, or socioeconomic status.
* AI that attempts to predict people committing crimes based on their appearance.
* AI that uses biometrics to infer a person's characteristics, like their sexual orientation.
* AI that collects "real-time" biometric data in public places for the purposes of law enforcement.
* AI that tries to infer people's emotions at work or school.
* AI that creates -- or expands -- facial recognition databases by scraping images online or from security cameras.

Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
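To make that penalty ceiling concrete, here is a minimal sketch of the "whichever is greater" calculation, assuming a single prior-year revenue figure as input; the function name is hypothetical, and actual fines are set case by case by regulators, so this is illustrative only.

```python
def max_fine_eur(prior_year_revenue_eur: float) -> float:
    """Upper bound on a fine for a prohibited-AI violation:
    EUR 35 million or 7% of prior-year annual revenue, whichever
    is greater. Hypothetical helper, for illustration only."""
    return max(35_000_000.0, 0.07 * prior_year_revenue_eur)

# A company with EUR 2 billion in prior-year revenue: 7% is EUR 140 million,
# which exceeds the EUR 35 million floor, so that becomes the ceiling.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```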
"By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect." Preliminary pledges The February 2 deadline is in some ways a formality. Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories -- which included Amazon, Google, and OpenAI -- committed to identifying AI systems likely to be categorized as high risk under the AI Act. Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act's harshest critics, also opted not to sign. That isn't to suggest that Apple, Meta, Mistral, or others who didn't agree to the Pact won't meet their obligations -- including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won't be engaging in those practices anyway. "For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time -- and crucially, whether they will provide organizations with clarity on compliance," Sumroy said. "However, the working groups are, so far, meeting their deadlines on the code of conduct for ... developers." Possible exemptions There are exceptions to several of the AI Act's prohibitions. For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a "targeted search" for, say, an abduction victim, or to help prevent a "specific, substantial, and imminent" threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can't make a decision that "produces an adverse legal effect" on a person solely based on these systems' outputs. The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there's a "medical or safety" justification, like systems designed for therapeutic use. The European Commission, the executive branch of the EU, said that it would release additional guidelines in "early 2025," following a consultation with stakeholders in November. However, those guidelines have yet to be published. Sumroy said it's also unclear how other laws on the books might interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches. "It's important for organizations to remember that AI regulation doesn't exist in isolation," Sumroy said. "Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges -- particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself." Topics AI, AI Act, banned ai, Enterprise, EU AI Act, Generative AI, Government & Policy, prohibited ai systems Kyle Wiggers Kyle Wiggers Senior Reporter, Enterprise Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself. occasionally -- if mostly unsuccessfully. 