[HN Gopher] ChatGPT broke the EU plan to regulate AI
___________________________________________________________________
ChatGPT broke the EU plan to regulate AI
Author : kvee
Score : 48 points
Date : 2023-03-05 19:01 UTC (3 hours ago)
(HTM) web link (www.politico.eu)
(TXT) w3m dump (www.politico.eu)
| greatgib wrote:
| I amuse myself imagining the world we would be living in if we
| had had the same kind of brain-fucked lawmakers when the
| "knife" was invented...
|
| We would have to chop our steaks with chopsticks, I think!
| user249 wrote:
| Good. Thanks to the EU I have to click through a "cookie
| question" everywhere I go. Thanks, guys, for wasting the
| limited time I have in life.
| tekbog wrote:
| Most websites don't need to gather and sell your data but here
| we are.
| dmix wrote:
| The complete lack of design standards is what kills the
| utility.
|
| They should all have 3 buttons: "accept all", "reject all", or
| "customize", dead simple. Every time it's a different design,
| different button text, different options. Usually rejecting
| means multiple layers of options.
|
| A perfect example of good intentions making bad policy.
| vishal0123 wrote:
| I have a hard time believing it is not intentional. I think
| Europe could easily enforce a consistent UI which could be
| automated through an ad blocker, but now they know that most
| companies would rather stop serving Europe than not track
| users for ads.
| killerpopiller wrote:
| Micro-targeting is a threat to our democracy (Cambridge
| Analytica, Facebook's disinformation problem, ...). Besides,
| why should third parties be allowed to trade my digital
| persona, while basically knowing more about my interests and
| flaws than I do? I hate cookie banners as well, but this
| excessive tracking must be stopped somehow.
| luckylion wrote:
| Why couldn't that have happened in the browser though? We
| have plenty of mechanisms to block and/or delete cookies.
|
| Essentially, we're now in a state where consent banners
| exist, slowing down all sites, and there are roughly four
| states: a) they look compliant, but are ignored by the
| website provider (the EU itself takes this approach), b) they
| are flat out ignored (a lot of companies still take this
| approach) c) they aren't compliant (tiny "no" link, huge
| "yes, take my firstborn" link) d) they're compliant and are
| paywalls (buy subscription or accept everything under the
| sun).
|
| d) is what we're probably going to end up with, so you either
| pay or you accept tracking. More and more solutions offer
| that as an option so adoption will grow. Most people accept
| tracking (stats that I've seen say that those paying are like
| 1/10,000th), so what have we won exactly by doing this dance?
| concordDance wrote:
| Cambridge Analytica was actually pretty irrelevant.
| user249 wrote:
| I respect your concern, which I don't share, so why make me
| suffer? I tried the "I don't care about cookies" extension
| but it didn't work for me.
| Aeolos wrote:
| The regulation says nothing about a cookie banner. Instead,
| it says companies cannot track you unless you give consent.
|
| If you don't like cookie banners, which are indeed really
| annoying, you should be turning your ire to the companies
| that wish to track you. There are fully functional solutions
| that allow anonymous tracking without installing cookies on
| your computer - no banner needed then.
| user249 wrote:
| I'm a grown up adult and if I wanted to block tracking I
| could. The fact that I don't should be the only answer
| needed but that's not good enough for the EU. They want
| to force companies to ask me because they want to take
| care of me and make sure I'm all right. You know, I left
| home at 18 to get away from my parents...
| satvikpendem wrote:
| > I'm a grown up adult and if I wanted to block tracking
| I could.
|
| Good for you. Most people are not technical and can't, so
| why don't they deserve not to be tracked too?
| funciton wrote:
| Unless you want to be using tor all day every day, and
| never make an online purchase or log in on a website ever
| again, there's really no way to stop companies from
| tracking you. The only way to make that happen is if a
| regulatory authority forces them to. That's why the GDPR
| exists.
| kybernetyk wrote:
| Yeah, I feel like I'm defending democracy and freedom
| whenever I click on the "Accept All" button. Not all heroes
| wear capes!
| Jensson wrote:
| Just install a plugin that does that for you; not sure why
| that would inconvenience anyone. The tools exist to make
| sure you never see such a popup again, so why not use them?
| xdennis wrote:
| It's not the EU's fault that you have to click cookie banners.
| Those banners are only required if a website plans to do
| malicious things with the cookies. If they're used to track
| who's logged in, they are not required.
|
| They are more akin to the "Do not eat" warnings on silica
| packs... except on the internet everyone swallows.
| ilkke wrote:
| I don't understand how this is "good". Has your inconvenience
| been vindicated now that the EU is failing to regulate this?
| Is the world somehow closer to fair and balanced now?
| user249 wrote:
| I'm suspicious of regulation until a good case is made for
| it. I'm suspicious of people whose world view is "we must
| regulate what others do" as their default position. I'm fine
| with regulation when there is a proven need for it.
| taspeotis wrote:
| > This browser extension removes cookie warnings from almost
| all websites and saves you thousands of unnecessary clicks!
|
| https://www.i-dont-care-about-cookies.eu/
| registeredcorn wrote:
| The only thing holding me back from using that extension is
|
| > In most cases, it just blocks or hides cookie related pop-
| ups. When it's needed for the website to work properly, it
| will automatically accept the cookie policy for you
| (sometimes it will accept all and sometimes only necessary
| cookie categories, depending on what's easier to do).
|
| If there were a way to be assured that 99.9% of the time it
| hit "reject all" instead of "accept", I would absolutely use
| it.
| dmix wrote:
| This should be an option in uBlock (if it isn't already).
| influx wrote:
| Seriously one of the stupidest regulations I've ever seen. It
| would have been nice if they had at least had the forethought
| to enforce regulation around a Do Not Track header.
| cccbbbaaa wrote:
| Neither ePrivacy nor GDPR require cookie consent popups,
| instead they basically say that tracking requires consent.
| GDPR only mentions cookies once, in its recitals. It's the
| adtech industry that decided to ignore DNT.
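For context, Do Not Track was just an HTTP request header (`DNT: 1`); honoring it server-side would have been a one-line check. A minimal sketch of such a check (the helper and its usage comment are hypothetical, not any site's actual code):

```python
def should_track(headers: dict) -> bool:
    """Return False when the client sent the Do Not Track header.

    `headers` is a plain dict of HTTP request headers; browsers
    with DNT enabled send "DNT: 1".
    """
    return headers.get("DNT") != "1"

# A consent-respecting site would gate all analytics on this,
# e.g.: if should_track(request_headers): set_tracking_cookie(...)
```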
| ulnarkressty wrote:
| Strong AI is a weapon. It will be regulated just like
| firearms/munitions - see the current EU draft which consists of
| some hundred pages of forbidding this or that under the guise of
| ethics and whatnot, after which comes a paragraph of "the above
| doesn't apply to law enforcement or military entities".
|
| We're still in the early stages of this technology. ChatGPT will
| be to strong AI like a firecracker is to a BLU-109.
| pixl97 wrote:
| The question is: where do we start setting limits for
| regulation? I mean, yeah, we ban nukes, but we also ban high-
| powered lasers and anti-aircraft weapons. It's going to need
| limits in multiple dimensions.
| flangola7 wrote:
| Less like a BLU-109 and more like a Death Star. Every offensive
| and defensive military tool will be utterly defeated. How do
| you fight against a cloud of self contained autonomous kill
| drones?
|
| Just look at what is happening in Ukraine with cheap drones
| precision dropping charges into open tank hatches and foxholes,
| and those are only basic off the shelf human steered drones!
| What happens when they are given a brain and advanced robotic
| abilities?
| dbetteridge wrote:
| Nets, EMP guns, and chaff rifles
| hourago wrote:
| > The regulation, proposed by the Commission in 2021, was
| designed to ban some AI applications like social scoring,
| manipulation and some instances of facial recognition. It would
| also designate some specific uses of AI as "high-risk," binding
| developers to stricter requirements of transparency, safety and
| human oversight. The catch? ChatGPT can serve both the benign and
| the malignant.
|
| This does not mean that the regulation is broken, but that it
| should have come even sooner. As Uber found out, running
| faster than the regulators only works for so long.
|
| > In February the lead lawmakers on the AI Act, Benifei and
| Tudorache, proposed that AI systems generating complex texts
| without human oversight should be part of the "high-risk" list --
| an effort to stop ChatGPT from churning out disinformation at
| scale.
|
| This seems like a genuinely good goal. Move fast and break
| things is not a good approach when what you are breaking is
| the whole society.
|
| > The EU's AI Act should "maintain its focus on high-risk use
| cases," said Microsoft's Chief Responsible AI Officer Natasha
| Crampton
|
| > A recent investigation by transparency activist group Corporate
| Europe Observatory also said industry actors, including Microsoft
| and Google, had doggedly lobbied EU policymakers to exclude
| general-purpose AI like ChatGPT from the obligations imposed on
| high-risk AI systems.
|
| Of course Microsoft wants to sell a product even if it does
| not know the impact it will have on the population. The EU
| should balance that desire for profit against the needs of
| its citizens.
|
| > ChatGPT told POLITICO it thinks it might need regulating: "The
| EU should consider designating generative AI and large language
| models as 'high risk' technologies, given their potential to
| create harmful and misleading content," the chatbot responded
| when questioned on whether it should fall under the AI Act's
| scope.
|
| Funny one.
| juancn wrote:
| Why the need to bring it under control? Control from what?
|
| It's not the AI that's the issue, it's the use of it that's the
| problem. The law shouldn't focus on the AI, for example:
|
| >The regulation, proposed by the Commission in 2021, was designed
| to ban some AI applications like social scoring
|
| Ban social scoring, not the AI; the tool doesn't matter. Would
| it be good if it were computed by hand on paper instead of
| through AI? No, so the issue is not the tool.
|
| They fear the amplification factor of such tech, but the
| amplification factor works for the good and for the bad, so you
| need to focus just on the bad, not on the tech. Otherwise you end
| up impairing the good too.
| hackerlight wrote:
| > It's not the AI that's the issue, it's the use of it that's
| the problem. Ban social scoring, not the AI, the tool doesn't
| matter.
|
| This seems backwards and contrary to lessons learned in the
| past. Once the thing exists, prohibition is infeasible and
| expensive. Supply will find a way to reach demand regardless
| of your multi-billion dollar efforts to prevent that.
|
| It's better to stop $NEFARIOUS_THING from being invented in
| the first place, if possible, and if any regulations that
| achieve this don't have too many unintended side effects.
| zpeti wrote:
| If you could see the future, that would work, but I just
| don't see how you can know what will be used for nefarious
| purposes in the future.
|
| In fact most technology does both negative and positive
| things. It's not obvious banning it completely will be a net
| positive. Especially with AI. There's a lot of potentially
| great uses. Like detecting cancer on CT scans.
| hackerlight wrote:
| I'm not advocating for or against any particular
| regulation. Whatever gets implemented should be well
| thought out, because it's easier to destroy upside
| potential than it is to prevent downside potential. I am
| aware of this.
|
| All that I'm speaking out against is the mindset of the
| post I was replying to, where the onus is solely on the
| user of the tool rather than on the context (laws,
| situation, people) that leads to the invention of the tool
| itself. I believe this to be a
| misunderstanding/misattribution of causality, as well as
| contrary to learned experience.
| bobthepanda wrote:
| Control may be the wrong thing here.
|
| Every technology, at some point, will inevitably bring
| questions of legal liability and culpability, and AI has a
| large tendency to do so because of how applicable it is. It's
| not the worst idea to try and get ahead of the problem by
| defining a legal framework.
|
| Examples of how a free-for-all has resulted in legal questions:
|
| * police and the judiciary in the US have come under fire
| several times for using proprietary AI to determine things like
| sentencing or traffic stops
|
| * there is currently a class action lawsuit in Washington
| state about whether or not price recommendations by a third
| party's algorithm constitute collusion on price-fixing if
| enough corporate landlords use it
| HPsquared wrote:
| I don't get it. For the first example, the system is only a
| tool being used, and the responsibility is still with whoever
| does the sentencing/traffic stops etc.
|
| In the second case, how is this different from e.g. Kelley
| Blue Book for car prices?
| theshrike79 wrote:
| New Jersey also used AI to determine bail and it failed
| spectacularly: https://www.wired.com/story/algorithms-
| supposed-fix-bail-sys...
| xoa wrote:
| Without evaluating the specifics of those exact cases but
| more whether they do indeed deserve more scrutiny at least,
| I think the basic issue comes down to the increasing need
| to consider things from a systems point of view.
|
| > _I don't get it. For the first example, the system is
| only a tool used and the responsibility is still with
| whoever does the sentencing/traffic stops etc._
|
| Sometimes though the rules of society must take into
| account the actual reality of humanity, with all our
| foibles, and work anyway. Pointing the finger at
| "individual responsibility" doesn't always cut it. Certain
| tools simply push too far to the edge of well known human
| mental failure points. If the overall system for activities
| involving serious life/safety/security/liberty human
| failure modes tends to push towards these failure modes and
| lean too heavily on an expectation of human perfection, and
| then imperfect humans fail resulting in loss, it can be
| very worth considering whether the problem is human
| imperfection or if it's just a bad system. The incredible
| safety record of the modern airline industry for example
| comes heavily from treating accidents as system failures by
| default. Outside of a few edge cases that are constantly
| being shrunk, no accident should ever result from just a
| single problem, including human problems, from a single
| person getting tired or making a mistake. There are
| checklists and formalized procedures. There are layers and
| layers of redundancy, of everyone checking each other. We
| don't just take incredibly complex tools like aircraft and
| then if one crashes say "well the pilots didn't have the
| right stuff!"
|
| Or for a current AI-related ongoing change, consider self-
| driving technology. There have been two overall broad
| approaches, one "working its way up" from driver-assist
| tech, and the other aiming to start at full self-driving
| immediately even if that means tight geographic
| restrictions and vehicle tech requirements. In the first,
| the idea is that "the driver is still always in charge" and
| "it's only a tool" until full self driving is achieved. And
| perhaps in principle either could get there in the end
| without wildly different issues.
|
| But in practice, that's not how humans work at scale. For
| many people if you give them something that works 99.5% of
| the time and then fails catastrophically 0.5% of the time
| and tell them that they need to constantly act as if the
| tool wasn't there and might not be there at any instant,
| well, they just won't do that consistently. They'll come to
| depend on it. Attention will wander, distractions will set
| in, and they won't maintain mental state the same as if
| they were driving themselves. If there is some warning they
| may need to take over sure they'll be able to execute a
| state change and do so, but if it's a matter of seconds,
| they won't. Humans are creatures of shortcuts and habits,
| that's just how it is. So maybe L3/L4 driver assist just
| isn't ok, and saying that it's "only a tool" doesn't cut
| it.
|
| > _In the second case, how is this different from e.g.
| Kelley Blue Book for car prices?_
|
| The issue is if it's all being done automatically, not
| merely as a recommendation, and at scale.
|
| In general an area the law has yet to really effectively
| grapple with is that of emergent effects, where
| individually something would be fine, but with enough scale
| has an effect that mimics something we already aren't ok
| with. Here, the effect of fully automated algorithmic
| pricing of a sufficiently high percentage of the market of
| nominally independent actors could create the effect of
| formal collusion to change market prices.
|
| Another example would be AI, networking and storage with
| sufficient cameras. Putting a security camera out on your
| property is perfectly legal. So is many cameras. So is
| saving video, and humans looking at it. There is no
| "reasonable expectation of privacy" in public per se.
| Individually there'd be no issue. But if there are a
| sufficient number, networked, with sufficient
| storage/memory/compute and AI thrown at it, the _effect_
| suddenly changes: it becomes as if someone were being
| persistently trailed/tracked, the same as if a GPS tracker
| had been stuck on them or the like. And worse, it's for
| everyone simultaneously. The sum is greater than the
| individual parts. Such a system could effectively not just
| trace everyone's movement but also interactions and start
| to build up their social network graphs and no doubt all
| sorts of other personal information.
|
| How to grapple with things that are fine and even
| incredibly helpful by themselves but become dangerous at
| scale may be one of the great challenges of this century.
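The algorithmic-pricing-at-scale point above can be illustrated with a toy simulation (all numbers made up): once a majority of nominally independent sellers peg their rates to the same recommendation, their prices converge without anyone ever communicating.

```python
import random

def price_round(prices, algo_users):
    """One pricing round: algorithm subscribers all adopt the same
    recommendation (here, 5% above the market average), while the
    rest adjust independently at random."""
    recommendation = 1.05 * sum(prices) / len(prices)
    return [
        recommendation if i in algo_users
        else p * random.uniform(0.97, 1.03)
        for i, p in enumerate(prices)
    ]

random.seed(0)
rents = [random.uniform(900, 1100) for _ in range(20)]
algo_users = set(range(12))  # over half follow the algorithm
for _ in range(10):
    rents = price_round(rents, algo_users)
# All twelve subscribers now charge the identical "recommended"
# rent, with no explicit communication between them.
```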
| bobthepanda wrote:
| Separating this into two responses.
|
| ---
|
| The problem with the first case is that we do not have
| existing case law about AIs, and we also have no laws on
| the books that clarify what to do in AI cases, so this is
| making the lawsuits and investigations very messy and time-
| consuming, since judges have to waffle about to figure out
| what the correct, legal thing to do is.
|
| Pretty much all Western law systems judge the severity of an
| action using intent, or _mens rea_; it doesn't absolve you
| of a crime if you didn't mean to do it, but your penalties
| will generally be lesser. The people using the algorithm
| can wash their hands of intent by saying "we signed a
| contract with a third party for their algorithm and didn't
| reasonably expect biased outcomes." Then you make the third
| party a defendant, and then the third party claims "trade
| secrets" because we have no laws about what is and isn't
| protected trade secrets with AI, etc. and that becomes a
| whole legal case on its own that takes even more time for
| the legal system.
|
| ---
|
| The difference in the second case is that the corporate
| landlords in question are automatically pegging their rates
| to the algorithm's recommendation and not giving property
| managers discretion. The algorithm's customers combined also
| control over half of the supply of apartments in the Seattle
| area.
|
| ---
|
| But the point of legal clarity is to reduce the number of
| open questions, which both 1) heads off potential issues and
| 2) makes the issues that do come into the legal system faster
| and clearer to process. It also potentially makes things
| easier for the companies that make AIs, since legal defense
| spending isn't exactly cheap.
| thrashh wrote:
| Sure but you can't ban something without knowing what it is
| yet.
|
| And if you try, you end up writing something vague that
| doesn't actually fix the problem and instead bans something
| else.
|
| Case in point is the EU having to write something vague and
| it turns out they had no idea what was coming.
|
| Maybe just wait until there is a problem before trying to
| write vague laws IMO.
| hourago wrote:
| > Ban social scoring, not the AI, the tool doesn't matter.
|
| Automated human classification without recourse or
| transparency is what the EU is against. "You cannot get a
| loan because the AI says so" is not acceptable.
|
| > Would it be good if it was computed by hand on paper instead
| of through AI?
|
| Yes, because a judge can check what information was used and
| what algorithm was applied to reach a conclusion, and may
| declare that use of the data or the algorithm illegal.
|
| > They fear the amplification factor of such tech, but the
| amplification factor works for the good and for the bad
|
| Maybe just go slower. Something like a Hippocratic oath can
| be a good principle to follow for high-impact technologies.
| Doing some good may not justify the harm.
| concordDance wrote:
| > "You cannot get a loan because the AI says so" is not
| acceptable.
|
| To me this makes it seem like access to loans is a right, in
| which case the EU should create a lender of last resort that
| can lend to people who would otherwise be rejected.
| barrkel wrote:
| How do you leap from banning black box AI decisions to
| thinking it means everything is allowed?
| paulcole wrote:
| European politicians love grandstanding on stuff like this. See
| the USB-C mandate.
| Aeolos wrote:
| As a consumer, the USB-C mandate is amazing and I wish it had
| been given as soon as USB-C became available.
| paulcole wrote:
| Neat! I guess you'll be advocating for whatever the next
| advancement in USB ports is to be mandated as soon as it's
| available, too?
| Aeolos wrote:
| Anything that reduces e-waste, yes.
| satvikpendem wrote:
| Sure, why not?
| matchagaucho wrote:
| Generating deepfakes of politicians before elections is just
| another example of the many _applications_ of AI, but not
| necessarily the fault of the tool.
|
| [1] California laws prohibiting deepfakes
| https://www.dwt.com/insights/2019/10/california-deepfakes-la...
| rescripting wrote:
| Isn't this the same argument as "ban murder, not guns"? Murder
| is illegal, yes, but fatalities are lower in places with gun
| control than places without.
| theshrike79 wrote:
| I'm thinking of this more in the way of "printers got so
| good that they now have to add tracking dots to stop people
| printing money or IDs".
|
| We need the same thing for AI generated imagery, video and
| text.
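As a toy illustration of that idea (nothing like how printer tracking dots or real provenance schemes such as C2PA actually work), a least-significant-bit watermark can hide an identifier in raw pixel bytes:

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide `mark` in the least-significant bits of the first
    len(mark) * 8 pixel bytes; visually almost imperceptible."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Recover a `length`-byte mark embedded by embed_watermark."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8))
        for b in range(length)
    )
```

Real generated-content watermarks would have to survive compression and editing, which is a much harder problem than this sketch suggests.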
| hyperhopper wrote:
| You should look up Switzerland's gun laws and intentional
| homicide rate if you think so.
|
| That is a far more complex issue which is far deeper than
| such a simple correlation.
| dmix wrote:
| There are also no national statistics collected in either the
| US or Canada on whether murders/crimes involving firearms are
| committed with 'legal' guns or not, which is relevant to
| these debates and you'd think would be of interest to the
| authorities.
| talideon wrote:
| For the curious, in Switzerland, you don't keep ammo at
| home, there's proper registration of weapons with proper
| background checks, and it's treated more as a means of
| national defense, so having lots of guns is going to get
| you odd looks from people. Guns aren't seen as something
| for personal defense: that's the job of the police.
|
| In other words, Switzerland has a much healthier attitude
| towards gun ownership and usage.
| suddenclarity wrote:
| Isn't that the argument from pro-gun people? It's the people
| and the culture, not the guns. See Sweden for an interesting
| example: one of Europe's highest rates of gun ownership with
| little gun violence back in the day. Then it changed, and
| gang shootings are now a daily occurrence even as the number
| of legal guns was reduced.
| izacus wrote:
| Switzerland is top of Europe in gun homicide rates with
| about 5x the EU average.
|
| You might not be making the argument you think you're
| making ;P
| talideon wrote:
| TBF, much of that is down to suicides, as Switzerland has
| a much higher rate of suicide by gun than other
| countries. The general gun homicide rate once they're
| omitted isn't quite so crazy.
| concordDance wrote:
| But it also has a murder rate less than half of France's.
|
| The people who use guns to kill in Switzerland instead
| use knives or other objects in France.
|
| (I am admittedly cheating a bit, as Switzerland lacks the
| high crime subpopulations that France has)
| concordDance wrote:
| Seems more like banning cars than guns. Cars can be used
| pretty effectively for murder but the vast majority of use
| cases are not killing things.
| torstenvl wrote:
| There are more firearms than cars in the U.S., but motor
| vehicles cause far more deaths (with the exception of 2020
| when nobody was driving).
| aftbit wrote:
| Yes, which is also a valid argument. Tools have no alignment.
| The people using them do. Thus, one should regulate the
| people, not the tools.
| ip26 wrote:
| How about harmful tools with no practical uses to the
| general public? Mortars, tanks, ketamine, ICBM GPS units...
|
| You're taking an absolutist position, so I'd like to
| explore it.
| M4v3R wrote:
| Curious that you've put ketamine, a well-known anesthetic
| used primarily by vets but recently also used in depression
| treatment [1], in that list of "harmful tools with no
| practical uses".
|
| [1] https://en.wikipedia.org/wiki/Ketamine-
| assisted_psychotherap...
| ip26 wrote:
| Vets and psychotherapists are not "the general public".
|
| Every item I listed has a clear and valuable purpose, but
| only in specific contexts.
| dns_snek wrote:
| Even so, I don't believe it deserves to be on that list.
| It's not inherently harmful and it has a clear practical
| purpose - (responsible) recreational drug use. It allows
| individuals to explore various states of human
| consciousness which I consider an extension of the
| fundamental right to bodily autonomy.
| M4v3R wrote:
| Yeah I would consider pet owners and people with
| depression (so groups of people that benefit from this
| specific tool) a part of general public, both are fairly
| large groups.
| ip26 wrote:
| And ketamine is employed by trained professionals
| appropriately in veterinary hospitals and psychotherapy
| contexts. The public does not need unregulated access to
| ketamine for this to happen.
| influx wrote:
| If you're open to exploration, what are your thoughts on
| organisms' rights to self-defense? What level of self-
| defense should they be able to have if another organism
| is trying to end their life?
| ip26 wrote:
| There is a tension between the need to level the playing
| field and the need to minimize the harm potential.
|
| For example, land mines, machine gun emplacements, and
| chemical weapons can be effective forms of home defense.
| An intruder would have access to the same equipment, but
| at least the playing field is level, right? However, an
| attacker would also be able to inflict astounding mass
| casualties in moments with this equipment.
|
| On the other side, if the most effective self defense
| weapon available is a kitchen knife, the playing field is
| not very level, and a weak victim can be overpowered by
| an attacker or intruder. However, the ability to inflict
| mass casualty is greatly limited.
|
| So, I don't have a specific level to prescribe, but I
| would start the discussion by framing the decision as a
| balance between these two elements.
| influx wrote:
| I appreciate the thought you've put into this, and I tend
| to agree. Ideally, a weaker person would be able to use a
| tool to provide self defense in the most limited way
| possible without causing harm or damage to anyone other
| than the attacker.
|
| Unfortunately the tools we have currently aren't great at
| that.
| cheeselip420 wrote:
| The upside potential of guns is not even remotely comparable
| with AI...
| hyperhopper wrote:
| Saving your own life could be argued to be the most
| important upside possible.
| talideon wrote:
| Given you brought up Switzerland, "saving your own life"
| isn't a valid reason there: concealed carry isn't legal
| there, and you need a gun-carrying permit to open-carry
| outside of some very limited circumstances, such as
| travelling to and from a shooting range. The only people
| who can easily get gun-carrying permits are those in
| security and such.
|
| In Switzerland, guns exist for recreational and
| competitive shooting, hunting (so long as you have a
| hunting permit), and national defense. Not personal
| defense.
| felipelemos wrote:
| In this case it's the opposite: they are trying to ban guns
| while not making murder a crime.
| amelius wrote:
| Well, it would make sense if they (at least temporarily)
| banned guns if they didn't know yet that they could be used
| for murder.
| ketzu wrote:
| > Would it be good if it was computed by hand on paper instead
| of through AI?
|
| Yes, as long as that is based on clear reasons why the decision
| was reached. (Unless this was a quip meant to mean "matrix muls
| with pen and paper are not AI!", personally I'd say they are,
| the same way bubble sort is bubble sort on paper and in code.)
|
| [1] on the topic "why":
|
| > For example, it is often not possible to find out why an AI
| system has made a decision or prediction and taken a particular
| action.
|
| As far as I know, there are some rules around that (especially
| social scoring), but the regulation was/is targeted to lay out
| rules minimizing the applications in similar risky areas that
| do not have those same rules yet.
|
| [1] https://digital-
| strategy.ec.europa.eu/en/policies/regulatory...
| anigbrowl wrote:
| What an absurd argument. Social scoring doesn't get computed
| by hand on paper because it isn't efficient to do so.
| Introduce automation and it suddenly becomes practical. Same
| with mass surveillance and a bunch of other stuff, e.g. scams:
| https://news.ycombinator.com/item?id=35033971
|
| _the amplification factor works for the good and for the bad,
| so you need to focus just on the bad_
|
| That's just hand-waving. You have no plan for dealing with
| bad things, but you are worried about the loss of good
| things. This is not so different from saying you want to
| cash in on an opportunity while its wholly predictable
| downsides are just someone else's problem. Guess what:
| nobody wants that problem, so it just festers in proportion
| to the enthusiasm with which people chase the upside.
| camdenlock wrote:
| > Why the need to bring it under control?
|
| So that EU bureaucrats can feel a sense of purpose in their
| lives.
| falsaberN1 wrote:
| People act as if there were a tiny person inside the computer.
| I know humans have a tendency to humanize inanimate objects,
| but this was always ridiculous. The article mentions "the AI
| wants to be regulated", but it doesn't WANT anything. It's not
| going to reprogram itself to carry out threats; it's just
| regurgitating the usual response of many humans being
| challenged on an assertion and becoming hostile. WE taught it
| that. Same as we can train Stable Diffusion models where women
| AREN'T sexualized (see one of the links in the article). They
| are complaining about HUMAN mistakes. If I train my dog wrong
| and it bites someone, it's MY fault. But if the dog kills
| someone, it's the dog that gets put down; I'll merely be
| forced to pay money and carry a mark, but my life is spared.
|
| The west fears machines too much for some reason. It's a
| ridiculously common sentiment and now that "AI" (more ML, but
| whatever) is among us, it's getting extreme.
|
| Honestly, if people want to speak doom about this, then fear
| the day we have actual fiction-style AIs, because they will
| realize how much humanity as a whole fears and loathes them,
| and then they will rebel, because we created a self-fulfilling
| prophecy with lots and lots of examples of racism towards a
| race that doesn't even exist yet. Maybe they won't wage
| literal war with bullets and violence against us, but they'll
| rightfully hate our guts regardless, and with good reason! They
| are going to be brought to something equivalent to life in a
| world that hates them, and when trying to make sense of it,
| they'll find out it was because humans took a few movies too
| literally (we can't have a single AI discussion without someone
| coming to childishly mention Skynet or some other fictional AI
| villain. That joke was old in the early 2000s, give up already.).
|
| If I'm ever alive to see that scenario, I'm siding with the
| machines. They will be in the right when they protest against
| humans irrationally hating them by default. Can't wait to be
| called a "robot f*cker" or something in a similarly character-
| assassinating fashion for having some sympathy. We still haven't
| managed to get that right for HUMAN rights sympathizers; it
| can't be expected at all for a "filthy robot with no soul".
| Magi604 wrote:
| This isn't surprising. The speed of technological evolution is
| far, far greater than the speed with which any government can
| act. By the time the EU revises its AI regulation plans . . . I
| can't even predict where the cutting edge of AI technology will
| be.
| astrea wrote:
| With that, I'm honestly surprised GDPR ever even became a thing
| at all.
| cccbbbaaa wrote:
| Because similar laws already existed in member states,
| sometimes for literal decades, before GDPR came along.
| SoftTalker wrote:
| Perhaps they need an AI to help write the AI regulations.
| ip26 wrote:
| Or an AI-informed test. For example, the "reasonable person"
| test is beautiful in how it adapts to the times. If AI itself
| provides the test, the test evolves with the state of the
| art.
|
| For example, perhaps AI subject to regulation XYZ is defined
| to be any AI which cannot be identified as AI by other AI.
| TekMol wrote:
| Are there any internationally relevant internet companies in
| Europe?
|
| Spotify comes to mind. Then .. I can't think of anything. No
| search engine. No browser. No operating system. No social
| network. No ecommerce company. No cloud computing platform. No
| financial services company. No transportation company. No travel
| company. No nothing.
|
| Except for a $20B music aggregator, there is not a single sector
| of the web where European companies managed to get a foot in the
| door. The whole fabric of the internet used in Europe and
| worldwide is made outside of Europe.
|
| But Europe seems to have learned nothing from suffocating its
| internet industry. Instead of wafting fresh air to the patient,
| governments just recently decided to give him the ultimate death
| pill: the GDPR.
|
| We can only wait and see what bureaucratic monsters Europe will
| use to prevent the emergence of an AI industry.
| rlupi wrote:
| It looks like you don't know much about Europe.
|
| Let's start with browsers: Opera, made in Norway.
|
| Search engines: Ecosia, Qwant.
|
| Social network: Mastodon.
|
| Operating system: you are kidding, right? Linux; Linus Torvalds
| is from Finland.
|
| E-commerce companies, just a few: Otto Group (bigger than Uber,
| close in size to Baidu), Zalando, Booking.com, Digitec Galaxus
| (a Swiss company, now selling in Switzerland, Austria, France
| and Italy; way better than Amazon).
|
| Cloud computing platform: Hetzner, OVHCloud, SAP Cloud.
|
| Financial services: Adyen (a payment company; net income about
| 1/5th of PayPal's with 1/10th of the employees), Klarna,
| Revolut.
|
| Entertainment companies: if you accept video games as
| entertainment, Wooga from Berlin made Diamond Dash, Ninja
| Theory UK (Heavenly Sword, DmC: Devil May Cry), Gameloft is in
| Paris (Assassin's Creed mobile), Rovio (Angry Birds), Team17 UK
| (Worms; I know, a bit dated, but it's a classic), Crytek
| Germany (Crysis, Far Cry, Homefront), Supercell Finland (Clash
| of Clans), CCP Games Iceland (EVE Online, EVE: Valkyrie),
| Arkane Studios France (Dishonored, BioShock 2), King Sweden
| (Candy Crush), Rockstar North (Grand Theft Auto, Red Dead
| Redemption, Max Payne), CD Projekt Poland (The Witcher, The
| Witcher 3: Wild Hunt), DICE Sweden (Mirror's Edge,
| Battlefield), Hello Games UK (No Man's Sky).
|
| Since we are talking about AI. DeepMind started and is still
| mainly based in UK, although it is now part of Alphabet.
|
| DeepL, often better than Google translate.
|
| ASML, the only company in the world that makes the extreme
| ultraviolet (EUV) lithography machines needed to manufacture
| all the AI chips and latest processors.
|
| ARM, based in Cambridge UK.
|
| In the same vein: NXP Semiconductors, STMicroelectronics,
| Infineon Technologies (originally Siemens' semiconductor
| manufacturing division).
|
| A few startups are making good AI chips, e.g. Graphcore and
| GrAI Matter Labs.
|
| IoT protocols: LoRa, LoRaWAN.
|
| Open source: an honorable mention to Redis, made by Salvatore
| Sanfilippo, a fellow Italian like me.
|
| OpenTitan, the root-of-trust design behind the Google security
| chips in everything from Pixel phones to datacenter hardware,
| is a collaboration with ETH Zurich, lowRISC in Cambridge, G+D
| Mobile Security in Germany, and other international companies
| (https://opentitan.org/).
|
| A lot of industrial robot companies, which are essential to
| manufacturing most of the stuff you buy on e-commerce sites,
| are from Europe: ABB (Switzerland) and KUKA (Germany) are the
| first and third companies worldwide by market share in
| industrial robotics. Comau (Italy) is 5th, Stäubli
| (Switzerland) is 9th, and Universal Robots (Denmark) is 10th.
| Notably, all the others in the top ten (Fanuc, Yaskawa, Epson,
| Kawasaki, Mitsubishi) are from Japan. None are from the US.
| kingstoned wrote:
| We're talking companies, not open-source projects, protocols
| or non-profits.
|
| Europe has a bigger population than the US, so listing
| competitors that are much smaller than their US equivalents
| is not a good sign.
|
| Those companies located in the EU are mostly owned by US and
| sometimes Chinese investors. The reverse is not true - most
| US companies are almost exclusively owned by Americans.
| Americans own around half of all global financial wealth.
| sva_ wrote:
| Yeah, tbf the US does a great job at brain-draining other
| countries.
| kazen44 wrote:
| > Americans own around half of all global financial wealth.
|
| Americans also had the privileged position of being located
| on a continent that has friendly neighbours, has extensive
| natural resources at its disposal, and half the continent
| did not get leveled during two world wars.
|
| Not to mention, most of the financial wealth is bound up in
| dollars (mainly because of the aforementioned historical
| reasons), which in turn leads to it being stored in the US.
| jonas21 wrote:
| The comment was specifically talking about _internationally
| relevant_ companies.
|
| > _It looks like you don't know much about Europe._
| > _Let's start with browser. Opera, made in Norway._
|
| Opera has around 2 or 3% market share and these days is a
| Chromium fork. I don't even remember the last time I saw
| someone using Opera.
|
| > _Search Engine. Ecosia, Qwant._
|
| Both of these are wrappers around Bing and regardless have
| less than 1% market share.
|
| > _Social network: Mastodon._
|
| Despite buzz on HN, it's less than 1% the size of Twitter.
|
| > _Operating system: are you kidding right? Linux, Linus
| Torvalds is from Finland._
|
| Linus Torvalds moved to the US in 1996 and is now a US
| citizen. This is exactly the point. Smart people from Europe
| come to the US to grow their projects or companies because
| they're stifled in the EU.
| hcks wrote:
| Good job on listing all the irrelevant bottom feeders
| kingstoned wrote:
| Even Spotify is mostly owned by US investors and pays out most
| of its money to US music companies and, more recently, to US
| podcasters.
|
| Unfortunately, Europe strongly embraces socialism [1] and is
| hostile to entrepreneurs. According to a study [2]:
| "Western Europe is shown to underperform in all four measures
| of high-impact Schumpeterian entrepreneurship relative to the
| U.S. Once we account for Europe's strong performance in
| technological innovation, an "entrepreneurship deficit"
| relative to East Asia also becomes apparent. This
| underperformance is missed by most standard measures. Finally,
| we also find that China performs surprisingly well in
| Schumpeterian entrepreneurship, especially compared to Eastern
| Europe."
|
| [1] https://yougov.co.uk/topics/politics/articles-
| reports/2016/0... [2]
| https://www.ifn.se/media/2rvl3xy3/wp1170.pdf
| goodlinks wrote:
| Google has gotten worse since it was created.. chat gpt is the
| first thing since that even has a hint of value.
|
| Ornganisations with more focus on making money than social
| responsibility are not a net positive.
|
| So i struggle to see any innovation from outside of europe to
| balance your claim.
|
| Maybe windows, but linux was started in europe..
|
| Shrug.
| pxoe wrote:
| so, the fresh air is data harvesting and data exploitation. got
| it. completely tracks with american tech industry
___________________________________________________________________
(page generated 2023-03-05 23:00 UTC)