[HN Gopher] AI systems with 'unacceptable risk' are now banned i...
       ___________________________________________________________________
        
       AI systems with 'unacceptable risk' are now banned in the EU
        
       Author : geox
       Score  : 79 points
       Date   : 2025-02-03 10:31 UTC (12 hours ago)
        
 (HTM) web link (techcrunch.com)
 (TXT) w3m dump (techcrunch.com)
        
       | whereismyacc wrote:
       | sounds good
        
       | _heimdall wrote:
       | What I don't see here is how the EU is actually defining what is
       | and is not considered AI.
       | 
       | > AI that manipulates a person's decisions subliminally or
       | deceptively.
       | 
       | That can be a hugely broad category that covers any algorithmic
       | feed or advertising platform.
       | 
       | Or is this limited specifically to LLMs, as OpenAI has so
       | successfully convinced us that LLMs really are AI and previous
       | ML tools weren't?
        
         | dijksterhuis wrote:
         | the actual text in the ~act~ guidance states:
         | 
         | > Exploitation of vulnerabilities of persons, manipulation and
         | use of subliminal techniques
         | 
         | techcrunch simplified it.
         | 
         | from my reading, it counts if you are _intentionally setting
         | out to build a system to manipulate or deceive people_.
         | 
         | edit -- here's the actual text from the act, which makes it
         | clearer that it's about whether the deception is purposefully
         | intended for malicious reasons
         | 
         | > the placing on the market, the putting into service or the
         | use of an AI system that deploys subliminal techniques beyond a
         | person's consciousness or purposefully manipulative or
         | deceptive techniques, with the objective, or the effect of
         | materially distorting the behaviour of a person or a group of
         | persons by appreciably impairing their ability to make an
         | informed decision, thereby causing them to take a decision that
         | they would not have otherwise taken in a manner that causes or
         | is reasonably likely to cause that person, another person or
         | group of persons significant harm
        
           | Bjartr wrote:
           | Seems like even a rudimentary ML model powering ad placements
           | would run afoul of this.
        
             | dijksterhuis wrote:
             | > In addition, common and legitimate commercial practices,
             | for example in the field of advertising, that comply with
             | the applicable law should not, in themselves, be regarded
             | as constituting harmful manipulative AI-enabled practices.
             | 
             | https://artificialintelligenceact.eu/recital/29/
        
               | Ajedi32 wrote:
               | Broadly speaking, I feel that whenever you have to build
               | into a law specific carve-outs for perfectly innocuous
               | behavior that would otherwise be illegal under the law as
               | written, it's not a very well thought out law.
               | 
               | Either the behavior in question is actually bad, in which
               | case there shouldn't be exceptions, or there's actually
               | nothing inherently wrong with it, in which case you have
               | misidentified the actual problem and are probably
               | needlessly criminalizing a huge swathe of normal behavior
               | beyond just the one exception you happened to think of.
        
               | HPsquared wrote:
               | It's the style of law that says "Anything not explicitly
               | permitted, is banned."
        
             | anticensor wrote:
             | That is by design.
        
             | blackeyeblitzar wrote:
             | Even ads without ML would run afoul of this
        
           | dist-epoch wrote:
           | so "sex sells" kind of ads are now illegal?
        
         | troupo wrote:
         | > What I don't see here is how the EU is actually defining what
         | is and is not considered AI.
         | 
         | Because instead of reading the source, you're reading a
         | sensationalist article.
         | 
         | > That can be a hugely broad category that covers any
         | algorithmic feed or advertising platform.
         | 
         | Again, read the EU AI Act. It's not like it's hidden, or hasn't
         | been available for _several years_ already.
         | 
         | ----
         | 
         | We're going to get a repeat of GDPR, aren't we? Where, 8 years
         | in, people arguing about it have never read anything beyond
         | Twitter hot takes and sensationalist articles?
        
           | scarface_74 wrote:
           | Maybe if the GDPR was a simple law instead of 11 chapters and
           | 99 sections, and if all anyone got as a benefit from it wasn't
           | cookie banners, it would be different.
        
             | HeatrayEnjoyer wrote:
             | GDPR doesn't benefit anyone? Is that a joke?
        
             | troupo wrote:
             | > Maybe if the GDPR was a simple law
             | 
             | It is a simple law. You can read it in an afternoon. If you
             | still don't understand it _8 years later_, it's not the fault
             | of the law.
             | 
             | > instead of 11 chapters and 99 sections
             | 
             | News flash: humans and their affairs are complicated
             | 
             | > all anyone got as a benefit from it is cookie banners
             | 
             | Please show me where GDPR requires cookie banners.
             | 
             | Bonus points: who is responsible for the cookie banners.
             | 
             | Double bonus points: why HN hails Apple for implementing
             | "ask apps not to track", boos Facebook and others for
             | invasive tracking, ... and boos GDPR which literally tells
             | companies not to track users
        
               | scarface_74 wrote:
               | It doesn't matter what it requires; the point is that, as
               | usual, the EU doesn't take into account the unintended
               | consequences of the laws it passes when it comes to
               | technology.
               | 
               | That partially explains the state of the tech industry in
               | the EU.
               | 
               | But guess which had a more deleterious effect on Facebook
               | ad revenue and tracking - Apple's ATT or the GDPR?
        
               | MattPalmer1086 wrote:
               | The EU just prioritises protection for its citizens over
               | tech industry profits. They are also not opposed to ad
               | revenue and tracking; they just require that people
               | consent to being tracked, with no sneaky spying. I'm
               | quite happy for tech to have those restrictions.
        
               | scarface_74 wrote:
               | The EU right now is telling Meta that it is illegal to
               | give users the option of either ads based on behavior on
               | the platform or charging a monthly subscription fee.
        
               | Kbelicius wrote:
               | > The EU right now is telling Meta that it is illegal to
               | give users the option of either ads based on behavior on
               | the platform or charging a monthly subscription fee.
               | 
               | And? With GDPR the EU decided that private data cannot be
               | used as a form of payment. It can only be voluntarily
               | given. Similar to using one's body: you can fuck whoever
               | you want and you can give your organs if you so choose,
               | but no business is allowed to be paid in sex or organs.
        
               | scarface_74 wrote:
               | That's just the problem. Meta was going to give users a
               | choice between paying with "private data" or paying
               | money. The EU won't let people make that choice. Are you
               | saying people in the EU are too dumb to decide for
               | themselves?
               | 
               | But how is your data that you give to Facebook "private"
               | to you? Facebook isn't sharing your data with others. Ad
               | buyers tell Facebook "Put this ad in front of people
               | between 25-30 who look at pages that are similar to $x
               | _on Facebook_ "
        
               | Kbelicius wrote:
               | > That's just the problem. Meta was going to give users a
               | choice between paying with "private data" or paying
               | money.
               | 
               | Well, per GDPR they aren't allowed to do that. Are they
               | giving that option to users outside of the EU? Why not?
               | 
               | > The EU won't let people make that choice. Are you saying
               | people in the EU are too dumb to decide for themselves?
               | 
               | No I do not think that. What made you think that I think
               | that?
               | 
               | What about sex and organs? In your opinion should
               | businesses be allowed to charge you with those?
               | 
               | > But how is your data that you give to Facebook
               | "private" to you?
               | 
               | I didn't give it to them. What is so hard to understand
               | about that?
               | 
               | Are you saying that your browsing data isn't private to
               | you? Care to share it?
        
               | scarface_74 wrote:
               | > Well, per GDPR they aren't allowed to do that. Are they
               | giving that option to users outside of the EU? Why not?
               | 
               | Because no other place thinks that their citizens are too
               | dumb to make informed choices.
               | 
               | > _What about sex and organs? In your opinion should
               | businesses be allowed to charge you with those?_
               | 
               | If consenting adults decide they want to have sex as a
               | financial arrangement why not? Do you think these 25 year
               | old "girlfriends" of 70 year old millionaires are there
               | for the love?
               | 
               | > _I didn't give it to them. What is so hard to
               | understand about that?_
               | 
               | When you are on Facebook's platform and you tell them
               | your name, interests, relationship status, check-ins, and
               | so on, on their site, you're not voluntarily giving them
               | your data?
               | 
               | > _Are you saying that your browsing data isn't private
               | to you? Care to share it?_
               | 
               | If I am using a service and giving that service
               | information about me, yes I expect that service to have
               | information about me.
               | 
               | Just like right now, HN knows my email address and my
               | comment history and where I access this site from.
        
               | impossiblefork wrote:
               | and it is illegal, and has been illegal for a long time.
               | 
               | Consent for tracking must be freely given. You can't give
               | someone something in return for it.
        
               | HPsquared wrote:
               | Free as in freedom, or free as in beer?
        
               | wiz21c wrote:
               | > Please show me where GDPR requires cookie banners.
               | 
               | That's the bit everyone forgets. GDPR didn't ask for
               | cookie banners at all. It asked for consent in cases
               | where consent is needed.
               | 
               | And most of the time consent is not needed, since I can
               | just say "no cookies" to many websites and everything is
               | just fine.
        
               | scarface_74 wrote:
               | Intentions don't matter, effects do
        
               | HPsquared wrote:
               | So why does every website persist in annoying their
               | users? Are they all (or 99%) simply stupid? I have a hard
               | time believing that.
        
           | robertlagrant wrote:
           | The briefing on the Act talks about the risk of overly broad
           | definitions. Why don't you just engage in good faith? What's
           | the point of all this performative "oh this is making me so
           | tired"?
        
           | _heimdall wrote:
           | Sure, I get that reading the act is more important than the
           | article.
           | 
           | And in reading the act, I didn't see any clear definitions.
           | They have broad references to what reads much like any ML
           | algorithm, with carve-outs for areas where manipulating or
           | influencing is expected (like advertising).
           | 
           | Where in the act does it actually define the bar for a
           | technology to be considered AI? A link or a quote would be
           | really helpful here; I didn't see such a description, but it
           | is easy to miss in legal texts.
        
           | pessimizer wrote:
           | > Again, read the EU AI Act. It's not like it's hidden, or
           | hasn't been available for several years already.
           | 
           | You could point out a specific section or page number,
           | instead of wasting everyone's time. The vast majority of
           | people who have an interest in this subject do not have a
           | strong enough interest to do what you claim to have done.
           | 
           | You could have shared, right here, the knowledge that came
           | from that reading. At least a hundred interested people who
           | would have come across this clear definition within the
           | act, pointed out in your comment, will now instead continue
           | ignorantly making decisions you disagree with. Victory?
        
         | vitehozonage wrote:
         | Exactly what I thought too.
         | 
         | Right now, and for at least 10 years, with targeted
         | advertising, it has been completely normalised and typical to
         | use machine learning to intentionally and subliminally
         | manipulate people. I was taught less than 10 years ago at a
         | top university that machine learning was classified as AI.
         | 
         | It raises many questions. Is it covered by this legislation?
         | Other comments make it sound like they created an exception, so
         | it is not. But then I have to ask, why make such an exception?
         | What is the spirit and intention of the law? How does it make
         | sense to create such an exception? Isn't the truth that the
         | current behaviour of the advertising industry is unacceptable
         | but it's too inconvenient to try to deal with that problem?
         | 
         | Placing the line between acceptable tech and "AI" is going to
         | be completely arbitrary and industry will intentionally make
         | their tech tread on that line.
        
       | mhitza wrote:
       | > AI that attempts to predict people committing crimes based on
       | their appearance.
       | 
       | Should have been
       | 
       | > AI that attempts to predict people committing crimes
        
         | troupo wrote:
         | Instead of relying on Techcrunch and speculating, you could
         | read sections (33), (42), and (59) of the EU AI Act yourself.
        
           | mhitza wrote:
           | Article 59 seems relevant, other two on a quick skim don't
           | seem to relate to the subject.
           | 
           | > 2. For the purposes of the prevention, investigation,
           | detection or prosecution of criminal offences or the
           | execution of criminal penalties, including safeguarding
           | against and preventing threats to public security, under the
           | control and responsibility of law enforcement authorities,
           | the processing of personal data in AI regulatory sandboxes
           | shall be based on a specific Union or national law and
           | subject to the same cumulative conditions as referred to in
           | paragraph 1.
           | 
           | https://artificialintelligenceact.eu/article/59/
           | 
           | Seems like it pretty easily allows member states to add in
           | laws that let them skirt around it.
        
             | hcfman wrote:
             | Yep, they want to be able to continue to violate human
             | rights and do the dirty.
        
         | HeatrayEnjoyer wrote:
         | Why?
        
           | mhitza wrote:
           | I'd prefer if Minority Report remains a work of fiction, or
           | at least not possible in the EU.
        
             | hcfman wrote:
             | These laws will not be applied to the government
        
         | hcfman wrote:
         | Except for government systems doing the same. In the
         | Netherlands we had the benefits affair: a system that
         | attempted to predict people committing benefits fraud,
         | destroying the lives of more than 25,000 people before anyone
         | stepped in.
         | 
         | Do you think they are going to fine their own initiatives out
         | of existence? I don't think so.
         | 
         | However, they also have a completely extrajudicial approach to
         | fighting organised crime, guaranteed to be using AI approaches
         | on the banned list. But you won't get any freedom of
         | information request granted to investigate anything like that.
         | 
         | For example, any kind of investigation would often involve
         | knowing which person filled a particular role. They won't grant
         | such requests, claiming that because it involves a person, it
         | is personal data. They won't tell you.
         | 
         | Let's have a few more new laws that protect the citizens,
         | please, not government SLAPP handles.
        
       | ItsBob wrote:
       | > AI that attempts to predict people committing crimes based on
       | their appearance.
       | 
       | FTFY: AI that attempts to predict people committing crimes.
       | 
       | By "appearance" are they talking about a guy wearing a hoodie
       | must be a hacker or are we talking about race/colour/religious
       | garb etc?
       | 
       | I'd rather they just didn't use it for any kind of criminal
       | application at all if I have a say in it!
       | 
       | Just my $0.02
        
         | troupo wrote:
         | > I'd rather they just didn't use it for any kind of criminal
         | application at all if I have a say in it!
         | 
         | Instead of relying on Techcrunch and speculating, you could
         | read sections (33), (42), and (59) of the EU AI Act yourself.
        
         | Oarch wrote:
         | Won't it be hard to gauge this, too, if it's all just vectors?
        
       | rustc wrote:
       | Does this affect open weights AI releases? Or is the ban only on
       | the actual use for the listed cases? Because you can use open
       | weights Mistral models to implement probably everything on that
       | list.
        
         | ben_w wrote:
         | Use and development.
         | 
         | I know how to make chemical weapons in two distinct ways using
         | only items found in a perfectly normal domestic kitchen; that
         | doesn't change the fact that chemical weapons are in fact
         | banned.
         | 
         | """The legal framework will apply to both public and private
         | actors inside and outside the EU as long as the AI system is
         | placed on the Union market, or its use has an impact on people
         | located in the EU.
         | 
         | The obligations can affect both providers (e.g. a developer of
         | a CV-screening tool) and deployers of AI systems (e.g. a bank
         | buying this screening tool). There are certain exemptions to
         | the regulation. Research, development and prototyping
         | activities that take place before an AI system is released on
         | the market are not subject to these regulations. Additionally,
         | AI systems that are exclusively designed for military, defense
         | or national security purposes, are also exempt, regardless of
         | the type of entity carrying out those activities.""" -
         | https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
        
       | dijksterhuis wrote:
       | link to the actual act:
       | https://artificialintelligenceact.eu/article/5/
       | 
       | link to the q&a:
       | https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
       | 
       | (both linked in the article)
        
       | DoingIsLearning wrote:
       | I am no expert, but there seems to be an overlap in the
       | article between 'AI' and, well... just software, or signal
       | processing:
       | 
       | - AI that collects "real time" biometric data in public places
       | for the purposes of law enforcement.
       | 
       | - AI that creates -- or expands -- facial recognition databases
       | by scraping images online or from security cameras.
       | 
       | - AI that uses biometrics to infer a person's characteristics
       | 
       | All of the above can be achieved with just software,
       | statistics, and old ML techniques, i.e. the 'non-hype' kind of
       | AI software.
       | 
       | I am not familiar with the details of the EU AI Act, but it
       | seems like the article is simplifying important details.
       | 
       | I assume the ban is on the purpose/usage rather than whatever
       | technology is used under the hood, right?
        
         | amelius wrote:
         | You can even replace the AI with humans. For example, it is
         | not legal for police officers to engage in racial profiling.
        
           | uniqueuid wrote:
           | No you cannot, see Article 2: Scope
        
         | spacemanspiff01 wrote:
         | From the laws text:
         | 
         | For the purposes of this Regulation, the following definitions
         | apply:
         | 
         | (1) 'AI system' means a machine-based system that is designed
         | to operate with varying levels of autonomy and that may exhibit
         | adaptiveness after deployment, and that, for explicit or
         | implicit objectives, infers, from the input it receives, how to
         | generate outputs such as predictions, content, recommendations,
         | or decisions that can influence physical or virtual
         | environments; Related: Recital 12
         | 
         | https://artificialintelligenceact.eu/article/3/
         | https://artificialintelligenceact.eu/recital/12/
         | 
         | So, it seems like yes, software would qualify if it is
         | non-deterministic enough. My impression is that software that
         | simply says "if your income is below this threshold, we deny
         | you a credit card" would be fine, but somewhere along the
         | line, when your decision tree grows large enough, it probably
         | changes.
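         | 
         | A rough sketch of that distinction (hypothetical example in
         | Python, assuming scikit-learn; the function name and the 30k
         | threshold are made up, not from the act): the first rule is
         | fully hand-written, while the second is inferred from data,
         | which is what the article 3 definition keys on.
         | 
         |     # Hand-written rule: the logic is fully specified by the
         |     # programmer; nothing is inferred from data.
         |     def approve_card(income: float) -> bool:
         |         return income >= 30_000  # hypothetical threshold
         | 
         |     # Learned rule: the decision logic is inferred from the
         |     # training data the system receives.
         |     from sklearn.tree import DecisionTreeClassifier
         | 
         |     X = [[25_000], [40_000], [55_000]]  # toy incomes
         |     y = [0, 1, 1]  # 0 = deny, 1 = approve
         |     model = DecisionTreeClassifier().fit(X, y)
         | 
         |     # The decision now comes from the fitted tree rather than
         |     # a hand-written threshold.
         |     print(model.predict([[32_000]]))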
        
           | gamedever wrote:
           | So no more predicting the weather with sensors?
        
             | lubsch wrote:
             | Which of the criteria for the regulated risk levels
             | listed in the article do you think would include weather
             | prediction?
        
         | uniqueuid wrote:
         | Unfortunately yes, the article is a simplification, in part
         | because the AI Act delegates some regulation to other existing
         | acts. So to know the full picture of AI regulation one needs to
         | look at the combination of multiple texts.
         | 
         | The precise language on high risk is here [1], but some
         | enumerations are placed in the annex, which (!!!) can be
         | amended by the commission, if I am not completely mistaken. So
         | this is very much a dynamic regulation.
         | 
         | [1] https://artificialintelligenceact.eu/article/6/
        
         | impossiblefork wrote:
         | I wouldn't be surprised if it does cover all software. After
         | all, chess solvers are AI.
        
         | teekert wrote:
         | I've been having a lot of laughs about all the things we call
         | AI nowadays. Now it's becoming less funny.
         | 
         | To me it's just generative AI, LLMs, media generation. But I
         | see the CNN folks suddenly getting "AI" attention. Anything
         | deep learning, really. It's pretty weird. Even our old
         | batch-processing, SLURM-based clusters with GPU nodes are now
         | "AI Factories".
        
       | hcfman wrote:
       | So is a burglar alarm that uses AI to see a person on your
       | property and alert the owner (the same function as the old PIR
       | sensors) now AI tech that predicts that a person might be
       | wanting to commit a crime, and thus a banned use of AI?
       | 
       | Or is it something that is open to interpretation, left to the
       | courts to sort out, with a 15,000,000 euro fine if someone in
       | the EU has leverage on the courts and doesn't like you?
       | 
       | Oh and the courts will already kill any small startup.
        
         | dijksterhuis wrote:
         | From the act:
         | 
         | > to assess or predict the risk of a natural person committing
         | a criminal offence, _based solely on the profiling of a natural
         | person or on assessing their personality traits and
         | characteristics_;
         | 
         | more details in recital 42:
         | https://artificialintelligenceact.eu/recital/42/
        
       | hcfman wrote:
       | Laws that are open to interpretation, with drastic consequences
       | if they're interpreted against your favour, pose an unacceptable
       | risk to business investors and stifle innovation.
        
         | dns_snek wrote:
         | People said that about GDPR. Laws that don't leave any room for
         | interpretation are bound to have loopholes that pose
         | an unacceptable risk to the population.
        
       | hcfman wrote:
       | I would like to see a new law that puts any member of government
       | found obstructing justice in jail.
       | 
       | Except that the person responsible for the travesty of justice
       | that framed 9 innocent people in this Dutch series is currently
       | the president of the court of Maastricht.
       | 
       | https://npo.nl/start/serie/de-villamoord
       | 
       | Remember: the courts have the say as to who wins and loses
       | under these new vague laws. The ones running the courts have to
       | not be corrupt. But the case above shows that this is in fact
       | not the case.
        
       | waltercool wrote:
       | Oh nooo, that will stop everyone from using AI systems.
       | 
       | Do European politicians understand that laws like these are
       | usually dead letters? There is no way a law like that can be
       | enforced, except against large companies.
       | 
       | Also, this kind of law would make Europeans stay on the losing
       | side of the AI competition, as China and almost every US
       | corporation doesn't care about that.
        
         | kandesbunzler wrote:
         | European politicians don't understand anything. We are led by
         | utter morons. And redditors here complain about Trump; I'd much
         | rather have him than these mouthbreathers we have.
        
           | mattnewton wrote:
           | not to derail us into politics, but with Trump's team I'd
           | take that trade offer for almost any EU nation's
           | representatives in a heartbeat, if for no other reason than
           | they would respect the process and take years to break what
           | the US is now breaking in days.
        
         | jampekka wrote:
         | The usage covering "unacceptable risk" is stuff that's either
         | illegal or heavily regulated already in human operation too.
         | 
         | > Also, this kind of law would make Europeans stay on the
         | losing side of the AI competition, as China and almost every
         | US corporation doesn't care about that.
         | 
         | Not sure that's a game I want to win.
        
       | theptip wrote:
       | Seems like a mostly reasonable list of things to not let AI do
       | without better safety evals.
       | 
       | > AI that tries to infer people's emotions at work or school
       | 
       | I wonder how broadly this will be construed. For example, if an
       | agent uses CoT and it needs emotional state as part of that,
       | can it be used in a work or school setting at all?
        
       ___________________________________________________________________
       (page generated 2025-02-03 23:00 UTC)