[HN Gopher] EU Artificial Intelligence Act
       ___________________________________________________________________
        
       EU Artificial Intelligence Act
        
       Author : Trouble_007
       Score  : 111 points
       Date   : 2023-05-16 19:27 UTC (3 hours ago)
        
 (HTM) web link (artificialintelligenceact.eu)
 (TXT) w3m dump (artificialintelligenceact.eu)
        
       | mdp2021 wrote:
       | The Annexes ( https://artificialintelligenceact.eu/annexes/ )
        | contain a definition of "High-Risk AI Systems" at Annex III.
       | 
       | --
       | 
       | Incidentally, for the many who claimed on these pages that we
       | would not "have a definition of AI" (actually we have several),
       | well, this legislative text provides one:
       | 
       |  _software with the ability, for a given set of human-defined
       | objectives, to generate outputs such as content, predictions,
       | recommendations, or decisions which influence the environment
       | with which the system interacts, employing techniques including
       | (a) Machine learning approaches, including supervised,
       | unsupervised and reinforcement learning, using a wide variety of
       | methods including deep learning; (b) Logic- and knowledge-based
       | approaches, including knowledge representation, inductive (logic)
       | programming, knowledge bases, inference and deductive engines,
       | (symbolic) reasoning and expert systems; (c) Statistical
       | approaches, Bayesian estimation, search and optimization methods_
        
         | amadeuspagel wrote:
         | Is there any recommendation engine which would not count as AI
         | according to this definition?
        
           | mdp2021 wrote:
           | > _Is there_
           | 
            | Yes (oddly enough): arithmetic-based.
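A minimal sketch of what such an arithmetic-based engine might look like: ranking items by their plain average rating, with no learned parameters. (Whether even averaging already counts as a "statistical approach" under clause (c) is exactly the kind of ambiguity the replies poke at.)

```python
def recommend(ratings, top_n=3):
    """Rank items by their plain average rating.

    `ratings` maps item -> list of numeric scores. This is fixed
    arithmetic with no learned parameters, so it arguably falls
    outside the Act's listed techniques (ML, logic/knowledge-based
    approaches, statistical estimation).
    """
    averages = {item: sum(s) / len(s) for item, s in ratings.items()}
    # Highest average first; ties keep insertion order.
    return sorted(averages, key=averages.get, reverse=True)[:top_n]

ratings = {"a": [5, 4], "b": [3, 3], "c": [5, 5]}
print(recommend(ratings, top_n=2))  # ['c', 'a']
```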
        
           | johnchristopher wrote:
           | The HN definition torture engine would agree, depending of
           | course on which knobs you turn.
        
         | duxup wrote:
         | I'm an AI engineer and I didn't even know it...
         | 
         | At least based on that mess of words I am.
        
         | gradys wrote:
         | "Symbolic reasoning" used to make "decisions which influence
         | the environment with which the system interacts" could describe
         | almost all real world computer systems, could it not?
        
           | [deleted]
        
           | nopenotthat wrote:
           | Yup, it's so laughably broad that almost any software that a
           | user interacts with could come under that definition.
        
         | nopenotthat wrote:
         | [dead]
        
       | FloatArtifact wrote:
        | I'd rather see agencies certify AI to standards than regulate
        | the development of AI models.
        
       | jdiez17 wrote:
        | For anyone who is jumping to the comments to complain about how
        | more rules from the EU are going to make innovation difficult, I
        | highly recommend reading the summary presentation:
        | https://www.ceps.eu/wp-content/uploads/2021/04/AI-Presentati...
        | 
        | Basically, as I understand it, it divides AI systems (in the
        | broadest machine-learning sense) into risk categories:
        | unacceptable risk (prohibited), high risk, medium/other risk, and
        | low risk.
       | 
       | Applications in the high risk category include medical devices,
       | law enforcement, recruiting/employment and others. AI systems in
       | this category will be subject to the requirements mentioned by
       | most people here (oversight, clean and correct training data,
       | etc).
       | 
        | Medium risk applications seem to revolve around the risk of
        | tricking people, for example via chatbots, deepfakes, etc. In
        | this case providers are required to "notify" people that they
        | are interacting with an AI or that the content was AI-generated.
        | How this can be enforced in practice remains to be seen.
       | 
       | And the low risk category is basically everything else, from
       | marketing applications to ChatGPT (as I understand it).
       | Applications in this category would have no mandatory
       | obligations.
       | 
        | If you ask me, that's quite a sensible approach.
        
       | riazrizvi wrote:
       | "Unacceptable risk", "high risk", "force for good". Terms as
       | vague and broad as an interstellar gas cloud. It makes me wonder
       | if this is a strawman argument against regulation.
        
       | piotr-yuxuan wrote:
        | To add some more context: it's funded by Elon Musk [0] and
        | Vitalik Buterin [1] (source [2]).
       | 
       | [0] https://en.wikipedia.org/wiki/Elon_Musk
       | 
       | [1] https://en.wikipedia.org/wiki/Vitalik_Buterin
       | 
       | [2]
       | https://ec.europa.eu/transparencyregister/public/consultatio...
        
       | nonethewiser wrote:
        | There was a great article on this recently that cuts through the
        | EU's window dressing:
       | 
       | EU AI Act To Target US Open Source Software
       | 
       | https://technomancers.ai/eu-ai-act-to-target-us-open-source-...
       | 
        | TL;DR: it imposes ridiculous constraints on GitHub and open
        | source developers.
        
         | reedciccio wrote:
         | A much better analysis from a qualified researcher
         | https://openfuture.eu/blog/undermining-the-foundation-of-ope...
        
         | rover0 wrote:
          | That site has some questionable views, for example
         | https://technomancers.ai/pardon-elizabeth-holmes/
        
           | moffkalast wrote:
            | Well, the draft proposal it references seems official; I
            | haven't gone through it yet to see whether it checks out:
            | https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep
            | /COM...
        
           | nonethewiser wrote:
            | This seems like an ad hominem. Why not engage with the
            | content of the article before hand-waving it away?
        
             | thr-nrg wrote:
             | [flagged]
        
           | Myrmornis wrote:
           | I hope all of us have some questionable views!
        
           | miohtama wrote:
           | Also the post itself has some factual errors.
        
         | mrtksn wrote:
          | I don't know; my impression is that this was written by
          | someone who doesn't understand how law works and who is making
          | hyperbolic assumptions through cherry-picking. I recall a
          | similar panic and outrage when GDPR was introduced, with
          | claims that it would bankrupt startups and small projects etc.
         | 
          | The EU stuff is usually aimed at large corporations who pose
          | systemic risks. Think about how TikTok makes the US freak out
          | about Chinese spying and manipulation; for the EU it's the
          | same thing, but including Facebook and all the other large
          | social media too (because American corporations are as foreign
          | in Europe as TikTok is in the USA). The US considers banning
          | TikTok; the EU's approach is to regulate how data is processed
          | and used in order to keep the market open and mitigate risks.
         | 
         | So as a rule of thumb, Europol doesn't knock on your door when
         | you train a model on data that doesn't meet the requirements.
         | This is probably designed to make countries introduce laws
         | which will hold Google/OpenAI etc. accountable for their
         | services so they can't just shrug and say "ouppsy, it's not our
         | fault the AI did it".
         | 
         | I'm sorry to pop the alarmist narrative, but this is not the
         | legislation which is going to "get you".
        
           | nforgerit wrote:
            | We've been there with GDPR. It didn't create the level
            | playing field they officially wanted to achieve. BigTech has
            | law departments whilst local tech startups have to deal with
            | legal battles initiated by third-class legal firms crawling
            | the web.
           | 
           | They just end up making their local tech entrepreneurs flee
           | to US/UK.
        
             | dariosalvi78 wrote:
             | FYI, the UK has the same legislation as GDPR
        
             | mrtksn wrote:
             | > They just end up making their local tech entrepreneurs
             | flee to US/UK.
             | 
             | Really? Which startups flee due to GDPR?
             | 
              | In my experience as a user, I now have the option to
              | download my data and to know what's collected about me. I
              | like it.
        
               | nforgerit wrote:
                | The most prominent recent example was BioNTech, who
                | decided to start a "Strategic Partnership" with the UK
                | government for their research.
               | 
               | [0] https://investors.biontech.de/news-releases/news-
               | release-det...
        
               | mrtksn wrote:
               | And this is connected to GDPR how exactly?
               | 
                | Companies do these things all the time. I guess it's fun
                | to imagine that opening offices in other countries is
                | fleeing, and that they are fleeing to the UK because
                | they are having trouble processing personal data when
                | developing drugs, but I don't see why that would be the
                | case.
               | 
               | Fun fact: UK data protection laws are about the same as
               | EU.
        
               | anonylizard wrote:
               | Oh they don't flee, they just don't bother to start new
               | ones in the EU.
               | 
                | Which of the new AI labs is set up in the EU?
               | 
                | Stability? Midjourney? Anthropic? Cohere? Every single
                | one is either in the US or the UK. LLM companies deal
                | with huge legal uncertainties, especially around data
                | and privacy. Hence investors are reluctant to fund any
                | in the EU, and potential founders are deterred. This is
                | all a legacy of the GDPR and the 'regulatory superpower'
                | mindset that the EU deluded itself into.
               | 
               | The only new wave AI company from the EU is DeepL, which
               | is going to face really really intense competition from
               | LLMs.
        
               | mrtksn wrote:
                | Europe missing out on tech has a much longer history
                | than the EU itself.
               | 
               | here is a quick watch:
               | https://www.youtube.com/watch?v=5ZdmS-EAbHo
               | 
                | I have no idea why you would claim that Europe lost out
                | on tech with the introduction of GDPR. By tech I mean
                | the SV industry; there are many other technologies out
                | there.
                | 
                | Anyway, why is all the "tech" in SV? Do they have GDPR
                | in the other states?
        
             | jacooper wrote:
              | I'm sure European startups like Plausible, Matomo, and
              | Nextcloud, which exist because of the GDPR or have gotten
              | a huge boost from it, agree with you.
        
               | eipie10 wrote:
               | None of those are truly innovative. They are basically
               | reinventing the wheel.
        
           | rad_gruchalski wrote:
           | > Europol doesn't knock on your door when you train a model
           | on data that doesn't meet the requirements. This is probably
           | designed to make countries introduce laws which will hold
           | Google/OpenAI etc. accountable for their services so they
           | can't just shrug and say "ouppsy, it's not our fault the AI
           | did it".
           | 
           | Your use of "probably" is quite telling. Probably, as in,
           | maybe not, maybe yes, who knows. As the act is phrased today,
           | anyone who publishes a certain type of model, or a derivative
            | of such, is subject to certain legal obligations. Do you
           | want to risk that Europol or any other task force knocks on
           | your door, or hope every time that you slip under the radar?
           | 
           | Those acts, together with the CRA, are so vague that a lot of
           | people will operate in a grey area. So maybe nobody knocks on
           | your door for a year. Or two. Or five. But when they knock,
            | good luck defending yourself from legal action based on laws
            | written by people who understood so little that they left
            | many vague points open to interpretation, depending on who
            | doesn't like you and to what extent.
        
       | holistio wrote:
        | The act is dated 21 April 2021, so it's more than two years old.
       | 
        | 1.) Being part of the team working on this has to be among the
        | most exciting legal jobs in Brussels.
       | 
        | 2.) I did not have time to read the entire act, and I'm not even
        | sure I'd understand it, but I'd be curious how much of it is
        | still relevant given the leaps in both the tech and, especially,
        | its popularity over the last two years.
        
       | shon wrote:
       | Interesting, from the site:
       | 
       | "applications, such as a CV-scanning tool that ranks job
       | applicants, are subject to specific legal requirement"
       | 
       | This is a continuation of EU logic first seen in GDPR around what
       | that law calls "automated decision making".
       | 
        | All I can say is that GDPR hasn't had a good effect, partly
        | because it's not well written from a technical perspective.
       | 
        | GDPR demands explainable and auditable automation.
        | Non-deterministic AI systems make this difficult or impossible
        | with current tech. So to be "compliant", vendors dumb down their
        | software to use explainable methods, and often inferior hiring
        | decisions are made because users have to operate on untenable
        | amounts of data using basic sorts. So the Talent Acquisition
        | team ends up structuring the hiring process around
        | "disqualifiers" such as resume gaps, education requirements,
        | pre-interview qualification tests, etc.
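A hypothetical illustration of the "disqualifier" style of screening described above: every rule is deterministic and its reasons are explicit, which is what makes it auditable, and also what makes it crude. All field names and thresholds here are invented.

```python
def passes_screen(candidate):
    """Transparent, auditable disqualifier rules (hypothetical fields).

    Each rule is deterministic and explainable, so the decision can
    always be traced back to a named reason.
    """
    rules = [
        ("employment gap over 12 months", candidate["max_gap_months"] <= 12),
        ("missing required degree", candidate["has_degree"]),
        ("failed qualification test", candidate["test_score"] >= 70),
    ]
    # Auditable by construction: failures come with explicit reasons.
    failed = [name for name, ok in rules if not ok]
    return len(failed) == 0, failed

ok, reasons = passes_screen(
    {"max_gap_months": 18, "has_degree": True, "test_score": 85}
)
print(ok, reasons)  # False ['employment gap over 12 months']
```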
       | 
       | It reminds me of an old recruiting joke:
       | 
       | "Recruiter: You said you only wanted to interview the 5 best
       | applicants but we are getting so many applicants we don't know
       | where to start.
       | 
       | Hiring Manager: OK, first, I only hire lucky people. Print out
       | all of the resumes and throw away every other one."
       | 
       | Interestingly, if this process is done randomly without reviewing
       | the resume, it's considered legal.
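The joke's coin-flip cut, sketched literally; the point is that no resume content is ever consulted, so nothing about the candidates themselves can influence the cut.

```python
import random

def lucky_halving(resumes, rng=None):
    """Discard half the pile at random, without reading any resume."""
    rng = rng or random.Random()
    # Only the pile size matters; resume contents are never inspected.
    return rng.sample(resumes, k=len(resumes) // 2)

pile = [f"resume_{i}" for i in range(10)]
survivors = lucky_halving(pile, random.Random(42))
print(len(survivors))  # 5
```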
        
         | makeitdouble wrote:
          | It comes down to how you prove your black box is better than a
          | biased but auditable (and fixable) process.
          | 
          | Even after doing a few dozen audits of the AI's runs and
          | getting better results, how can you assume those results will
          | be consistent across the thousands of resumes that will be
          | blindly scanned?
          | 
          | Usually statistics could be applied, except that since it's a
          | black box, can we assume its behavior will be consistent? (A
          | dumb example: if the AI is somewhat influenced by dates,
          | decisions will drastically change as time goes by.)
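One sketch of the kind of probe such an audit could use: perturb a field that should be irrelevant and check that the score barely moves. The toy "model" and field names here are invented purely for illustration; a real audit would probe the actual black box.

```python
def consistency_probe(score_fn, resume, field, variants, tolerance=0.05):
    """Check that changing an irrelevant field barely moves the score.

    `score_fn` is the black-box model under audit; `field` is a detail
    that should not matter (e.g. the applicant's name or a date).
    """
    base = score_fn(resume)
    deltas = []
    for v in variants:
        probe = dict(resume, **{field: v})  # copy with one field changed
        deltas.append(abs(score_fn(probe) - base))
    worst = max(deltas)
    return worst <= tolerance, worst

# Toy "model" that accidentally keys on the name's length:
def toy_model(resume):
    return 0.5 + 0.01 * len(resume["name"])

ok, worst = consistency_probe(
    toy_model, {"name": "Bob", "skills": 7}, "name", ["John", "Bartholomew"]
)
print(ok, round(worst, 2))  # False 0.08 -- the probe catches the bias
```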
        
       | IAmGraydon wrote:
       | I wonder if all those calling ChatGPT and the like "AI" when it's
       | nothing of the sort regret doing so now. AI is a scary word for
       | certain groups, while machine learning (which is what this is)
       | isn't. Now you have a bunch of Luddites with pitchforks looking
       | for a witch to burn.
       | 
       | What this act will do is severely stunt the European economy
       | compared to the rest of the world, which will be racing ahead (as
       | long as countries like the US don't pass similar laws). By the
       | time Europe realizes its mistake, it will be too late to catch
       | up.
        
         | makeitdouble wrote:
         | The act is about automated processes, "AI" or not.
        
           | IAmGraydon wrote:
           | Do you think it's coincidence that it came about during the
           | middle of a massive hype wave with widespread talk of how AI
           | will figuratively take over the world? Automated processes
           | have existed for nearly 100 years.
        
         | tpoacher wrote:
         | So what's AI then?
        
           | IAmGraydon wrote:
           | A machine that can think, not a machine that spits out
           | strings of text that fool humans into believing it can think.
           | The map is not the territory.
        
       | OhNoNotAgain_99 wrote:
       | [dead]
        
       | Zpalmtree wrote:
       | Does the EU ever get tired of proposing terrible innovation
       | crushing laws?
        
         | OhNoNotAgain_99 wrote:
         | [dead]
        
       | cookieperson wrote:
        | Looks like it's being treated as a dual-use weapon. Pretty sane
        | to me.
        
         | dogma1138 wrote:
         | About as (in)sane as how PGP was treated as a weapon...
        
       | cjg_ wrote:
        | Note that this is not an official EU website; it's run by the
        | non-profit organization https://futureoflife.org/
        
       | abujazar wrote:
       | At first glance this looks like official information, but in fact
       | it's a campaign site from https://futureoflife.org and should be
       | clearly marked as such.
        
       | duringmath wrote:
       | Why is everyone in a hurry to regulate AI?
       | 
       | I can't think of one example where someone was harmed by an LLM.
       | 
        | Besides, "AI" is largely a marketing term; most software has
        | "AI" elements, and that has been the case for a while now. This
        | thing has "unintended consequences" written all over it.
        
         | bitL wrote:
          | Power grab and preservation. This will be quickly approved to
          | preserve existing players and the power status quo (lawyers,
          | managers and doctors don't want to be automated away).
        
           | felipemnoa wrote:
          | If an AI comes along that is able to do their jobs better
          | than they can, then they will not have a say in it, no matter
          | how many regulations the government puts up.
          | 
          | It's like trying to regulate cars to save the horseshoe
          | industry.
           | 
           | And if doctors can be automated away so can software
           | developers. I guess in the long term we are all obsolete.
        
         | bitshiftfaced wrote:
          | Isn't European tech regulation basically a billion-dollar
          | income source for them now?
        
         | pembrook wrote:
          | Because there's no better marketing for a probabilistic (dumb)
          | language model than the idea that it's so powerful it needs
          | to be regulated.
         | 
         | Just as Steve Jobs used to say when the media was constantly
         | stoking fears about the power of personal computers in the
         | 1980s--"you can just throw it out the window."
         | 
          | Yet, he simultaneously took advantage of that media hysteria
          | when signing off on what is considered the best commercial of
          | all time: "1984 won't be like 1984 because Apple."
         | 
          | This is essentially what OpenAI is doing right now in
          | Washington (i.e. stoking the fear and also selling the
          | solution).
         | 
         | The more things change, the more they stay the same.
        
         | CamperBob2 wrote:
         | They're regulators. That's what they do. That's _all_ they do.
        
           | seydor wrote:
            | No, it wasn't always like that. The EU has gone into
            | overdrive lately, but not because EU citizens asked for it.
            | This is mostly Brussels people regulating popular subjects
            | because ... they want to be associated with things that are
            | popular. There is really little else to explain what's been
            | happening in the EU over the past 5 years. As a citizen I
            | am concerned by this "tyranny from Brussels", because my
            | opinion never seems to have mattered, nor were EU citizens
            | informed before the fact.
        
           | [deleted]
        
       | VWWHFSfQ wrote:
       | Article 10 requires that
       | 
       | > all training data be "relevant, representative, free of errors
       | and complete."
       | 
       | This is especially interesting to me with regard to something
       | like ChatGPT. As we know, ChatGPT occasionally gives factually
        | incorrect information. Does this mean that, in its current form,
        | it would be illegal in the EU? We know that Google is currently
        | blocking access to Bard in the EU. Will ChatGPT be forced to
        | follow suit?
       | 
        | ChatGPT is great and I love it. It would be a shame if I'm not
        | even allowed to use it _at my own risk_ just because it might be
        | wrong about some things. This may be a simplification, but it
        | sounds like the EU is letting Perfect be the enemy of Good.
        
         | RobotToaster wrote:
          | This will be interesting for Copilot, given all the buggy,
          | half-finished projects on GitHub that would have been included
          | in its training data.
        
         | jdiez17 wrote:
         | Article 10 does not apply to low risk AI systems like ChatGPT.
        
         | nashashmi wrote:
         | Error is different from misinformation.
        
         | jstx1 wrote:
         | The quoted sentence is about training data, not about the
         | output of the model, they're different things.
        
           | m463 wrote:
            | I wonder what "representative" means in relation to human
            | behavior?
            | 
            | Does it mean "must collect ALL data"?
        
             | Satam wrote:
             | I have a feeling they might be looking for "equality" with
             | this formulation. However, if it is representative of the
             | real world, it will often not be in line with the norms
             | prescribed by the notion of equality.
        
             | jstx1 wrote:
             | Of course not, there's no meaning of representative that
             | requires this.
        
               | macksd wrote:
               | I am wondering what qualifies as "complete", though. Any
               | reasonable definition I can come up with is redundant
               | with "representative" and "free of errors".
        
         | simion314 wrote:
         | From I read earlier (I did not waste time on this article
         | again) EU rules in the propisal that is not definite are about
         | critical stuff.
         | 
          | I agree that it would be idiotic to let some greedy bastards
          | sell some MedicalGPT to us, or PoliceGPT, or SurveillanceGPT.
         | 
          | Imagine that MedicalGPT gives you a different treatment each
          | time you ask, since it is not deterministic, or that if you
          | change the patient's name from Bob to John it gives you some
          | wild results because the test data had tons of John Smiths in
          | it, and nobody can explain the AI's reasoning.
         | 
          | So IMO for critical systems we need good rules for safety
          | reasons; for non-critical systems we need transparency, and
          | if you sell an AI product you should also take responsibility
          | if it performs worse than you advertise. You can't SELL me a
          | GPT for schools with a disclaimer like "it might be wrong
          | sometimes and teach the students wrong stuff, or it might
          | sometimes be NSFW". IMO, fuck these ToS where the giants sell
          | us stuff and take no responsibility for the quality of the
          | product.
        
           | NavinF wrote:
           | > different treatment each time you ask since is not
           | deterministic
           | 
           | https://ai.stackexchange.com/questions/32477/what-is-the-
           | tem...
           | 
           | It's unfortunate that EU regulators seem to be making the
           | same mistakes as you because they have a similar
           | understanding of language models.
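For context on the linked temperature question: the non-determinism the parent worries about is a decoding choice, not an intrinsic property of the model. At temperature 0 (greedy decoding) the sampling step is deterministic. A minimal sketch with toy logits, not a real model:

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Pick an index from logits via temperature-scaled softmax sampling."""
    if temperature == 0:
        # Greedy decoding: always the highest-logit token, so output
        # is deterministic for a given prompt.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Lower temperature concentrates probability on the top token.
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, 0))    # always index 0 (deterministic)
print(sample_token(logits, 1.0))  # varies from run to run
```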
        
       | netfortius wrote:
       | US congress may be trying to do something, also:
       | https://finance.yahoo.com/news/congress-took-on-ai-regulatio...
        
       | neom wrote:
        | I was trying to understand more about the AIA today after it
        | was mentioned a few times in the oversight committee hearing. I
        | found this talk, and it's pretty good; I thought it was going
        | to be lame content marketing, but the guest is a real lawyer
        | who seems to have a real understanding of AI and what is going
        | on:
       | 
       | https://www.youtube.com/watch?v=yoIC5EPPfn4
       | 
        | (feel like all my HN posts are always revealing the
        | embarrassing amount of youtube I watch)
        
       ___________________________________________________________________
       (page generated 2023-05-16 23:00 UTC)