[HN Gopher] Europe seeks to limit use of AI in society
       ___________________________________________________________________
        
       Europe seeks to limit use of AI in society
        
       Author : anticristi
       Score  : 142 points
       Date   : 2021-04-14 16:40 UTC (6 hours ago)
        
 (HTM) web link (www.bbc.com)
 (TXT) w3m dump (www.bbc.com)
        
       | rndude wrote:
       | The title should read "Europe seeks to limit use of AI because
        | they can't build any AI system themselves"
       | 
       | The state of AI in Europe is a sad joke and we already think
       | about regulation.
        
         | capableweb wrote:
         | > The state of AI in Europe is a sad joke and we already think
         | about regulation.
         | 
         | Yeah, fuck people who want to think about and limit the
         | consequences of what they are working on.
         | 
          | Like, who would ever want to draft up laws around nuclear
          | weapons before researching and developing nuclear weapons? That
          | removes all the fun!
        
       | CFA178B wrote:
        | Europe can hardly implement a CRUD application, but they like to
       | talk their heads off about the risks of AI - painful.
        
         | nkl58 wrote:
         | What a fruitful reply
        
       | keiferski wrote:
       | Bonus points if they name the overseeing organization _the Turing
       | Police._ _And_ if Switzerland goes its own way and allows
       | citizenship for AIs.
       | 
       | In all seriousness, I'm not sure if these legal restrictions will
       | actually be effective. They are too broad, vague, and will likely
       | just result in technological stagnation.
        
         | daenney wrote:
         | Laws aren't immutable and right now this is just a proposal.
         | We'll have to see what its final form ends up being.
         | 
         | > will likely just result in technological stagnation.
         | 
         | Nothing in this stops any kind of research nor does it ban its
         | use. It just limits how effortlessly you can invade people's
         | privacy and discriminate against them. It may very well help
          | research in underinvested areas of AI and its ethical
         | consequences.
        
           | Aerroon wrote:
           | > _Laws aren't immutable_
           | 
           | You could've fooled me. About the only thing that seems to
           | change laws in places like the EU is when the courts decide
           | to strike something down. Everything else seems to just go
           | ahead the way the EU politicians envisioned. Consequences be
           | damned.
        
         | jariel wrote:
         | " will actually be effective"
         | 
         | Effective in doing what?
         | 
         | Everyone is caught up in ridiculous AI mythology, and
         | misunderstanding the nature of the tech.
         | 
         | AI is just one approach to solving a problem, and will
         | invariably make up just a small part of more complex systems
          | involving mostly classical approaches.
         | 
         | Not only is AI not hugely special, nothing we do or use is
         | mostly 'AI' to begin with.
         | 
         | From the article:
         | 
         | "AI systems used for indiscriminate surveillance applied in a
         | generalised manner"
         | 
         | So does this mean as long as we're not using Deep Learning, we
         | _can_ indiscriminately surveil?
         | 
         | And what if the 'surveillance system' doesn't use AI, but the
         | cameras themselves have AI embedded within to adjust focus?
         | Does that count?
         | 
         | What if the system doesn't use AI, but the supporting services
         | do?
         | 
         | It's basically ridiculous.
         | 
          | If the government wants to regulate 'mass surveillance' - that
          | sounds like a good thing, so do that.
         | 
         | If they want to ensure privacy in certain domains - great - but
         | it has nothing to do with 'AI'.
         | 
         | Edit:
         | 
          | Furthermore:
         | 
         | "Mr Leufer added that the proposals should "be expanded to
         | include all public sector AI systems, regardless of their
         | assigned risk level".
         | 
         | "This is because people typically do not have a choice about
         | whether or not to interact with an AI system in the public
         | sector.""
         | 
          | This is laughably bad, because again, there is no such thing
         | as an 'AI System'.
         | 
          | A broad ban on AI in the public sector would almost
         | guarantee European stagnation in every sector, for no good
         | reason at all.
         | 
         | Will they ban Google Search in public service? Google
          | Assistant? Google navigation? Those use AI.
         | 
         | Will they ban AI signal processing for anything related to
         | government?
         | 
         | They'll have to ban Tesla as well, there's a ton of AI in every
         | unit.
         | 
         | Will there be a single automobile in 10 years that won't have
         | AI components? The EU is going to ban all of them from use in
         | public service?
         | 
          | Even today, AI is almost universal in everyday systems, and
          | that is only going to increase.
         | 
         | In 5 years, you literally won't be able to use any tech without
         | it touching some form of AI.
         | 
          | Mr. Leufer has no understanding of what he is talking about.
        
         | thebackup wrote:
          | I don't believe in BDUF (Big Design Up Front), even for
          | regulations; better to get a toothless first version out in the
          | wild and then tweak it once it's in place.
        
         | mc32 wrote:
         | But, what good is technology for the sake of technology if it's
         | detrimental to people's well-being? Technology to conquer ills,
         | yes. Technology which alienates people, no.
         | 
          | It's kind of like the Goths vs the Romans or the Inuit vs
          | Europeans. Is the value of progress more than the value of
         | self-worth?
        
           | keiferski wrote:
           | That's a different question. Even if you think certain
           | technologies are overall negative, you may be forced to adopt
           | them in order to remain competitive with other nation states.
           | Nuclear weapons are probably the classic example here. AI
           | seems similar to me.
        
             | [deleted]
        
             | quotemstr wrote:
             | Nuclear weapons are a huge net positive for humanity. Have
             | we had another big industrial war since WWII? No. Why not?
             | Because the combatants understand that nowadays total war
             | means total annihilation. Now we resolve our disputes in
             | other ways. Nuclear weapons gave us an age of peace.
        
               | thoughtstheseus wrote:
                | There's a survivorship bias here. If nuclear weapons had
                | totally annihilated society, we could not be having this
                | conversation.
        
               | kube-system wrote:
               | Their point is that anyone who can develop a nuclear
               | weapon also is aware of this. It might be a survivorship
               | bias, but it is also self-fulfilling.
               | 
                | The biggest threat, as I see it, is a truly
               | irrational actor. Luckily they're hard enough to build
               | that this prerequisite has so far filtered out anyone
               | truly irrational.
        
           | shadowgovt wrote:
           | I'm more concerned about these two aspects of it from the
           | reporting:
           | 
           | > Experts said the rules were vague and contained loopholes.
           | 
           | > The use of AI in the military is exempt, as are systems
           | used by authorities in order to safeguard public security.
           | 
           | Sounds like it's keyed to build stagnation in public tools,
           | but state-actor tools can go right on ahead becoming more
           | sophisticated (and harder to understand or predict).
        
             | overscore wrote:
             | There are almost always these exemptions for military/law
             | enforcement use cases in EU Directives and Regulations,
             | because while the constituent countries in the EU have
              | military and law enforcement co-operation, they would veto
             | new legislation that impacts their independence in those
             | areas.
        
           | visarga wrote:
           | > Is the value of progress more than the value of self-worth?
           | 
            | Why is self-worth put in antithesis to progress? Was past
            | progress a cause of loss of self-worth?
        
             | mc32 wrote:
             | Progress brings good: vaccines, decreased infant mortality,
             | etc. But we also get to live in cities, disconnected from
              | some realities, like food and self-sufficiency. In
             | civilization you are counting on other people doing things
             | for you. Farming, housing, transportation, care, education,
             | etc. Yes it's fancy and advanced and most of us will choose
              | modern life, but not everyone has gone that route (Goths in
              | Roman times, Inuit in present times). Some people see value
             | in being connected to nature, not necessarily in some
             | artificial romantic way, but more visceral ways and
             | forgoing modern progressive life.
        
           | quotemstr wrote:
           | How do you _know_ that these technologies are detrimental to
            | people's well-being?
           | 
           | Some activists _claim_ that things like facial recognition,
           | ad targeting, and personalized risk scoring are detrimental,
            | but are these activists correct? I don't think so! All these
           | technologies give us new capabilities and allow us to more
           | precisely understand and shape the world.
           | 
           | Every single time humanity has gained new abilities --- from
           | the Acheulean stone ax to the modern deep neural network ---
           | humans in general have benefited and prospered, because any
           | increase in our ability to understand and manipulate the
           | world is a boon.
           | 
           | There is no such thing as a net-negative technology.
        
             | ncallaway wrote:
             | > There is no such thing as a net-negative technology
             | 
              | Explain to me the benefits of a Gatling gun, other than
             | being a more effective tool for killing humans. Is all of
             | humanity really better off for all those that have been
             | killed by this invention?
             | 
             | That's a _lot_ of deaths that start out as a massive
              | negative balance against it. Tell me the overall improvement
              | to society that the Gatling gun brought us that was
             | "worth" those deaths.
        
               | livueta wrote:
               | Lethality of weaponry has a significant impact on how
               | battles are fought, where increasing lethality generally
               | means fewer participants and, counterintuitively, fewer
               | deaths: https://acoup.blog/2021/02/26/fireside-friday-
               | february-26-20... is a decent discussion.
               | 
               | There's obviously some lag (the bloodiness of WWI) but
               | overall yes, in a weird way, the Gatling gun and other
               | weapons like it are part of why you're a lot less likely
               | to die as a draftee today than in the Napoleonic era.
        
               | watwut wrote:
                | WWII was even bloodier than WWI.
               | 
                | The obvious difference between a current draftee and a
                | Napoleonic one is that Napoleon set out to conquer other
                | countries. The peacetime draftee is going to have lower
                | mortality.
               | 
               | Napoleon was literally the aggressor.
        
               | livueta wrote:
               | WWII's civilian/military casualty ratio was way higher
                | than WWI's (~2:1 vs. very roughly 1:1 or lower), which
                | complicates how the lethality hypothesis plays out in
                | casualty figures. When more of the dead are civilians and
               | civilians predominantly die from famine and disease,
               | higher overall death counts in a more-global conflict
               | don't necessarily mean that conflict killed more soldiers
               | per-capita, though some nations definitely suffered
               | higher per-capita military losses due to factors beyond
               | increasing weapon lethality - mostly thinking of Russia
               | there. For instance, _furiously crunches Wikipedia
               | numbers_ in the UK it looks like WWI had a higher
               | proportion of military deaths to population (~2% vs.
                | ~0.8%) even though it also suffered significantly more
                | civilian deaths in WWII, though not enough to outweigh the
               | decrease in military losses as a function of total
               | population.
               | 
               | There might be an argument that increasing weapon
               | lethality can decrease the number of battlefield
               | combatant deaths but also could increase the likelihood
               | of mass civilian atrocity. That said, high-lethality
               | weapons definitely aren't necessary for mass civilian
               | atrocities either.
               | 
               | And sure, I intended a wartime/wartime comparison
               | regarding draftees: even if we go back to WWII as the
               | last real great-power hot war, a French Napoleonic
               | draftee (looking at France as Britain apparently didn't
               | actually conscript in the Napoleonic wars) is
               | significantly more likely to die in battle than a French
               | WWII (or even WWI!) draftee: https://en.wikipedia.org/wik
               | i/Napoleonic_Wars_casualties#cit...
        
               | ratsforhorses wrote:
               | The mere threat of its awesome killing power made an
               | adversary think twice. In one of the most bizarre
               | episodes Ms. Keller recounts, on July 17, 1863, during
               | the draft riots, the New York Times (which supported
               | conscription) mounted three Gatling guns on the roof of
               | its headquarters, with the editor in chief at the
               | trigger, and successfully cowed an angry mob without
               | firing a single shot.
        
               | erik_seaberg wrote:
               | Armies are smaller because drafting ten million half-
               | trained teenagers to carry rifles is no longer the best
               | way to win a war.
        
             | WitCanStain wrote:
             | > All these technologies give us new capabilities and allow
             | us to more precisely understand and shape the world.
             | 
             | Allow who to shape the world, exactly? Because it's not me,
             | and it's probably not you. Technology gives power to those
             | who control it, and control over face-recognition tech,
             | personalized risk-scoring and ad tech is in the best cases
             | behind several layers of bureaucratic abstraction. Our
                | world is being shaped by megacorporations and governments,
                | not by those whose lives these technologies have the
                | greatest potential to harm.
        
               | visarga wrote:
               | Our lives are being shaped by powerful organizations so
               | we should shun progress because it helps them too! Let's
               | all burn our phones and dismantle the internet, it's the
               | root of all evil, I tell you.
               | 
               | After we destroy AI we should make sure nobody does any
               | data analysis by hand or other means. Just to be sure.
               | Because there are people who would justify the exact same
               | decisions even without AI. They just use data to do what
               | they want. So let's destroy data too, and math so nobody
               | can do anything biased or wrong.
        
             | crispyambulance wrote:
             | > There is no such thing as a net-negative technology.
             | 
             | OK, but no one is actually arguing that. The problem starts
              | when the technology gets abused. We need safeguards against
              | abuse of AI in much the same way as we need them for nuclear
              | weapons and energy and, more recently, social media (e.g.
              | GDPR protections).
        
         | londons_explore wrote:
         | > will likely just result in technological stagnation.
         | 
         | I believe that is the goal of the legislation. By stagnating
         | the field of AI within the EU one can encourage any negative
         | effects to happen in other countries, so they can suffer and
         | discover the potential downsides.
        
         | 8fGTBjZxBcHq wrote:
         | My perspective is that our technological advancement has well
         | outpaced our ability to adapt to the changes or bring our legal
         | and social tools effectively to bear on them.
         | 
         | A decade or two of stagnation would be frustrating for those in
          | the field but probably overall a good thing. Plus I don't think
          | this would affect research at all, so it wouldn't even be that.
        
           | avz wrote:
           | > A decade or two of stagnation would be frustrating for
           | those in the field but probably overall a good thing.
           | 
           | Is a decade or two of a head start given to high-tech
           | totalitarian regimes like China overall a good thing?
           | 
           | > Plus I don't think this would affect research at all so not
           | even.
           | 
            | Limiting use of AI reduces the interest of the public and young
           | researchers and engineers, contributes to brain drain and
           | limits availability of large datasets that are an important
           | asset for AI development.
        
             | YeGoblynQueenne wrote:
             | >> Limiting use of AI reduces interest of the public and
             | young researchers and engineers, contributes to brain drain
             | and limits availability of large datasets that are an
             | important asset for AI development.
             | 
             | I disagree. As a senior-year PhD student, I am relieved
             | that the EU is taking a stance on this matter and hope that
             | others in the West will follow suit (it's probably too late
             | for China). I am relieved because I personally have grave
             | concerns about the uses of AI in society and have thought
             | for some time that some kind of formal and official
             | framework is needed. AI researchers haven't yet managed to
             | establish such a framework, so legislators have stepped in.
             | The framework still seems pretty "green" and like it will
             | take a lot of development and improvement, but that a first
             | step was made is important.
             | 
             | So in fact you might say that having a legal framework in
             | place makes AI research _more_ attractive, because the
             | student is not left to wonder about the ethics of her
             | research on her own.
             | 
              | As to the availability of large datasets: how do you see
             | that this would be affected by the legislation being
             | considered?
             | 
             | I should also point out that the reliance on large datasets
             | is a bug, not a feature, of the currently dominant AI
             | techniques and that an alternative is sorely needed. If
             | large datasets became less easily available, that would
             | give a good incentive to researchers to go do something
              | new, rather than throw a ton of data at an old benchmark to
             | improve it by 0.2%.
        
             | [deleted]
        
           | Aerroon wrote:
           | > _A decade or two of stagnation would be frustrating for
           | those in the field but probably overall a good thing._
           | 
           | Won't they just leave? They could just go somewhere where
           | this isn't banned.
        
           | RankingMember wrote:
           | Agreed 100%. I appreciate the hacker mindset, but when things
           | approach society-altering scale, "just because you can
           | doesn't mean you should" should be the mantra, not "move fast
           | and break things".
        
             | quotemstr wrote:
             | And who decides when that "should" condition is met? Over
             | the past few years, I've seen far too many activists react
             | to algorithms that notice _true_ but _politically
             | inconvenient_ things by trying to shut down the algorithms,
              | to pull the wool over our eyes, to continue the illusion that
             | things are other than what they are. Why should we keep
             | doing that?
             | 
             | I have zero faith in the ability of activists or states to
             | decide when it's safe to deploy some new technology. Only
             | the people can decide that.
        
               | aftbit wrote:
               | Not 100% sure what the parent is talking about, but my
               | first thought is the predictive policing algorithms used
               | in some jurisdictions to set bail and make parole
               | decisions. My hazy understanding of the controversy is
               | that these algorithms have "correctly" deduced that
               | people of color are more likely to reoffend, thus they
               | set bail higher or refuse release on parole
               | disproportionately. At one fairly low level this
               | algorithm has noticed something "true but politically
               | inconvenient", but at a higher level, it is completely
               | blind to the larger societal context and the structural
               | racism that contributes to the racial makeup of convicted
               | criminals. I'd argue that calling this simply "true" is
               | neglecting a lot of important discussion.
               | 
               | Of course, perhaps the parent is referring to something
               | else. I'd also like to see some examples.
        
               | [deleted]
        
               | visarga wrote:
               | > completely blind to the larger societal context and the
               | structural racism that contributes to the racial makeup
               | of convicted criminals
               | 
               | What happens if you and others have conflicting views on
               | the "larger societal context"? Who wins, obviously you
               | because you are right?
               | 
                | AI has become a political football now and everyone with
                | an issue finds an AI angle. In all this game, few are
                | actually interested in AI itself.
        
               | lambda_obrien wrote:
               | > true but politically inconvenient
               | 
               | Wanna give some examples or are you just dog-whistling?
        
               | 8fGTBjZxBcHq wrote:
                | Decisions like these need to be made slowly and
               | and over time.
               | 
               | Tension between small-c conservatism that resists change
               | and innovators who push for it before the results can be
               | known is very important!
               | 
               | No one person or group needs to or will decide.
               | Definitely not states. "Activists" both in favor of and
                | opposed to changes will be part of it. For the last few
                | decades in tech the conservative impulse has been mostly
                | missing (at least in terms of the application of
                | technology to our society, lol) and look where we are: a
                | techno-dystopian, greed-powered corporate surveillance
                | state.
               | 
                | We're not going to vote on it. The argument happening in
                | this comments section _is_ the process, for better or
                | worse.
        
               | JoshTriplett wrote:
               | We also don't have to make the same decision for all use
               | of AI.
               | 
               | For example, we should be much more cautious about using
               | AI to decide "who should get pulled over for a traffic
               | stop" or "how long a sentence should someone get after a
               | conviction". Many government uses of AI are deeply
               | concerning and absolutely should move more slowly. And
               | government uses of AI should absolutely be a society-
               | level decision.
               | 
               | For uses of AI that select between people (e.g. hiring
               | mechanisms), even outside of government applications, we
               | already have regulations in that area, regarding
               | discrimination. We don't need anything new there, we just
               | need to make it explicitly clear that using an opaque AI
               | does not absolve you from non-discrimination regulations.
               | 
               | To pick a random example, if you used AI to determine
               | "which service phonecalls should we answer quicker", and
               | the _net effect_ of that AI results in systematically
                | longer/shorter hold times that correlate with a
               | protected class, that's absolutely a problem that should
               | be handled by existing non-discrimination regulations,
               | just as if you had an in-person queue and systematically
               | waved members of one group to the front.
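                | 
                | That "net effect" test is easy to state as code. A minimal
                | sketch in Python (the call log, the group labels, and the
                | threshold are all invented for illustration; the 0.8 cutoff
                | echoes the "four-fifths rule" from US employment law):
                | 
                |     from statistics import mean
                | 
                |     # Hypothetical call logs: (group, hold_seconds)
                |     calls = [("A", 30), ("A", 45), ("A", 40),
                |              ("B", 90), ("B", 120), ("B", 80)]
                | 
                |     by_group = {}
                |     for group, secs in calls:
                |         by_group.setdefault(group, []).append(secs)
                | 
                |     means = {g: mean(v) for g, v in by_group.items()}
                |     # Flag the system if the best-served group waits far
                |     # less than the worst-served group.
                |     ratio = min(means.values()) / max(means.values())
                |     if ratio < 0.8:
                |         print("possible disparate impact:", means)
                | 
                | Note that the check looks only at outcomes, not at the
                | model's internals - which is exactly how existing non-
                | discrimination rules already frame the question.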
               | 
               | We don't need to be nearly as cautious about AIs doing
               | more innocuous things, where consequences and stakes are
               | much lower, and where a protected class isn't involved.
               | And in particular, non-government uses of AI shouldn't
               | necessarily be society-level decisions. If you don't like
               | how one product or service uses AI, you can use a
               | different one. You don't have that choice when it comes
               | to hiring mechanisms, or interactions with government
               | services or officials.
               | 
               | Reading the article, it _sounds_ like many of the
               | proposals under consideration are consistent with that:
               | they 're looking closely at potentially problematic uses
               | of AI, not restricting usage of AI in general.
        
               | ncallaway wrote:
               | > to algorithms that notice true but politically
               | inconvenient
               | 
               | I don't know that I agree that there exists an algorithm
               | that can determine what is factually true, so I'm not
               | sure I agree that an algorithm can "notice a true thing".
               | 
               | Do you have an example of when an algorithm noticed
               | something that was objectively true but was shut down?
               | Can you explain how the algorithm took notice of the fact
               | that was objectively true (in such a way that all parties
               | agree with the truth of the fact)?
               | 
               | I can't think of a single example of an algorithm
               | determining or taking notice of an objective fact that
               | was rejected in this way. But there are lots of
               | controversies I'm not aware of, so it could have slipped
               | by me.
        
               | visarga wrote:
                | For example, gender stereotyping for jobs or personal
                | traits: it is politically incorrect but nevertheless
                | reflects the corpus of training data. (He is smart. She
               | is beautiful. He is a doctor. She is a homemaker.)
        
               | YeGoblynQueenne wrote:
               | I think you're assuming that if it's in the data, it's
               | "factually true" as the OP puts it. It doesn't work that
               | way. There is such a thing as sampling error, for
               | example.
        
             | darepublic wrote:
             | Reminds me of this clip:
             | https://www.youtube.com/watch?v=_oNgyUAEv0Q&t=52s
             | 
             | That being said.. I don't trust the gov to decide what
             | should be on the internet. I have to resist all attempts to
             | do so, while acknowledging some gov suppression is probably
              | beneficial. It's a duality, and despite being on one side of
              | it you can acknowledge the importance of the other side.
        
           | tablespoon wrote:
           | > My perspective is that our technological advancement has
           | well outpaced our ability to adapt to the changes or bring
           | our legal and social tools effectively to bear on them.
           | 
           | > A decade or two of stagnation would be frustrating for
           | those in the field but probably overall a good thing. Plus I
           | don't think this would affect research at all so not even.
           | 
           | I agree. And frankly, technological "progress" _for its own
           | sake_ reeks of technopoly [1]. Technology should serve
           | society, not the other way around.
           | 
           | [1] A term coined in a very good book by Neil Postman:
           | https://en.wikipedia.org/wiki/Technopoly:
           | 
           | > Postman defines technopoly as a "totalitarian technocracy",
           | which demands the "submission of all forms of cultural life
           | to the sovereignty of technique and technology".
        
           | dbtc wrote:
           | I agree but I'm not so sure society has that much control.
           | Maybe technology itself (or/and the economy) is in the
           | driver's seat now. Pun intended!
        
           | tiborsaas wrote:
           | In an arms race it's not the best strategy to take a step
           | back and look at angles. In AI there's an arms race going on
           | and one likely outcome of this might be increased brain drain
           | and even less likelihood of great companies rising in the EU.
        
             | 8fGTBjZxBcHq wrote:
             | I mean I don't think the damn NSA or whatever the EU has
             | for that is going to stop doing whatever they're already
             | planning to do with AI.
             | 
             | And I absolutely could not care less about "great companies
             | rising" either now or hypothetically in the future.
             | 
             | Some applications of AI tech are very clearly MORALLY WRONG
             | and cause harm. Currently that harm is limited because the
             | reach of these tools is limited and that is the only thing
             | holding them back from doing worse.
             | 
             | If companies need that dynamic to rise and be great then
             | they can just not as far as I'm concerned.
        
         | birdsbirdsbirds wrote:
          | The limits are a convenient way to escape the challenge. If you
          | opt out, nobody can ask why European companies don't have
          | state-of-the-art AI technology.
         | 
         | If Europe cannot offer more than EUR6.7 billion to create an
         | alternative infrastructure to AWS, GCP and Azure then they
         | better prepare an excuse for why they haven't managed to create
         | AI.
         | 
         | [1] https://ec.europa.eu/digital-single-market/en/european-
         | cloud...
        
         | flohofwoe wrote:
         | It's a good thing that it is brought into the light and
         | discussed though, because most people probably don't realize
         | how much power "automated decision making" already has over
         | their lives, and it will only get worse if tech giants and
         | oppressive governments have their way.
        
         | seanmcdirmid wrote:
         | > And if Switzerland goes its own way and allows citizenship
         | for AIs.
         | 
         | I'm pretty sure not being a member of the EU, Switzerland will
         | go its own way.
        
         | 908B64B197 wrote:
          | Am I the only one who finds it odd how the British government
          | brags about Alan Turing after what they did to him?
          | 
          | The man saved women, men, and children of all races and
          | orientations from a horrible end. I wish the British
          | government had extended the favor to Turing himself.
        
           | mbroncano wrote:
            | Or for that matter Oscar Wilde and many, many others.
        
           | visarga wrote:
           | Nobody's above being cancelled, that's what happens when
           | righteous people set to work. They know best and are willing
           | to educate society by example.
        
           | watwut wrote:
           | That is literally what happens with historical figures
           | everywhere. Americans now celebrate Martin Luther King for
           | example.
        
       | eivarv wrote:
        | These issues should be solved by making problematic AI use a
        | breach of individual rights (e.g. privacy), rather than by
        | regulating the technology itself (with govt. exceptions) - which
        | increases the power imbalance between the state and the citizen.
        
       | prof-dr-ir wrote:
       | Does anybody have a link to the actual draft this article is
       | based on?
       | 
       | In my limited experience the proposals by the EU commission are
       | often readable and interesting. I might not agree with them, but
       | I do appreciate that the thought process is made public years
       | before ideas become laws. (As the article states, that is also
       | very much the expectation here.)
        
         | noshaker wrote:
         | Here is the draft
         | https://drive.google.com/file/d/1ZaBPsfor_aHKNeeyXxk9uJfTru7...
        
           | anticristi wrote:
           | Thanks! 81 pages and no executive summary?!?! I'm interested
           | to read it, but this is likely to have the same effect as
           | sleeping pills.
        
             | f137 wrote:
             | Well, just run it thru an AI summarizer )
        
             | noshaker wrote:
              | You can check out this TechCrunch article where I got the
              | draft from. It is a bit more detailed than the BBC article.
             | https://techcrunch.com/2021/04/14/eu-plan-for-risk-based-
             | ai-...
        
       | imapeopleperson wrote:
       | No one actually wants a society being ruled by computers, except
       | maybe the people running those computers.
       | 
       | This is at least a step in the right direction.
        
         | pelorat wrote:
         | Speak for yourself. I take AI over EU politicians any day of
         | the week.
        
           | tgv wrote:
           | You can't be serious. EU leadership has always been a bit
           | weak, with a lot of compromises, and not a hard stance on
            | foreign policy, but to prefer an unknown oracle? Given the
            | current state of AI, it would be overfitted on a small
            | relevant corpus padded with arbitrary other material. How can
            | you expect that to produce better government?
        
             | systemvoltage wrote:
             | Perhaps OP meant "I take AI computer scientists over EU
             | politicians"?
        
         | avz wrote:
         | > No one actually wants a society being ruled by computers,
         | except maybe the people running those computers.
         | 
         | Humans in charge are known for injustice, kickbacks, favor
         | trading, selective enforcement and other forms of corruption
         | and abuse. Properly engineered and regularly reviewed open
         | source systems with balance checks might just get us closer to
         | a rules-based system that provides a level playing field for
         | everyone. Given all the known biases of current AI systems, we
         | are certainly far from ready for it, but the prospect of
         | transforming large parts of government into an open source
         | "social operating system" that automatically and fairly offers
         | basic services according to clearly coded and broadly enforced
         | rules looks like a desirable goal in the (very) long term.
         | 
         | Many laws can be expressed as computer code. Where they cannot
         | is often due to deliberate vagueness built in to leave scope
         | for future interpretation as new cases arise. This suggests
         | that we could express laws in computer code that raises a
         | HumanInputRequiredException in the cases currently handled with
         | deliberate vagueness. The resulting reduction in vagueness
         | would remove a huge amount of discretion that currently
         | facilitates corruption and abuse of power while ensuring
         | ultimate human control and human-directed evolution of the law.
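          | 
          | A minimal sketch of that idea in Python (the noise ordinance
          | and its thresholds are made up for illustration; the exception
          | name is the one suggested above):
          | 
          |     class HumanInputRequiredException(Exception):
          |         """Raised where the statute is deliberately vague."""
          | 
          |     def noise_ordinance_fine(decibels, hour):
          |         # Clear-cut cases are decided mechanically.
          |         if decibels <= 70:
          |             return 0.0
          |         if hour >= 22 or hour < 7:
          |             return 200.0
          |         # "Unreasonable daytime disturbance" is left vague on
          |         # purpose: escalate to a human decision-maker.
          |         raise HumanInputRequiredException(
          |             "daytime disturbance: human judgment required")
          | 
          | Most cases resolve deterministically; the deliberately vague
          | remainder surfaces as an explicit exception rather than as
          | hidden discretion.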
        
           | avz wrote:
           | I want to add a historical remark. Very early forms of human
           | government, such as ancient kingdoms in various parts of the
           | world, had one or a few prominent members of society hold
           | full discretion and decision-making power. Later, we codified
           | rules and decided that even monarchs are not above the law.
           | Endowing the written laws with additional power by making
           | them executable seems like a natural next step.
        
       | quotemstr wrote:
       | Luddites.
       | 
       | If you look at the course of history, every attempt to slow the
       | adoption of new technology has been a disaster for human welfare.
       | This policy move is just like trying to ban the telephone or the
       | printing press.
        
         | tppiotrowski wrote:
         | I see this more along the lines of preventing nuclear
         | proliferation, chemical and biological warfare.
        
           | quotemstr wrote:
           | This is more like banning the production of chlorine gas
           | because someone might put it in an artillery shell --- never
           | mind all the other useful things you can do with chlorine
           | gas. If you want to regulate externalities, regulate
           | externalities: don't ban technology itself.
        
             | KineticLensman wrote:
             | > don't ban technology itself
             | 
              | The EU isn't planning to ban underlying AI technologies,
              | but instead to control their use in applications such as
              | creditworthiness assessment.
        
               | throwawaysea wrote:
               | ...which is basically banning people from measuring risk
               | for themselves and deciding how to react to that risk
               | themselves. How is this not authoritarian?
        
               | La1n wrote:
               | >How is this not authoritarian
               | 
                | Authoritarian can mean so many things; could you define
                | it further? Because the way I read your comment I could
                | call seatbelt laws authoritarian, and I guess you didn't
                | mean that.
        
               | ThomPete wrote:
          | AI needs training data. It's not just a technology play.
        
         | Jiejeing wrote:
         | It is humorous that you bring up Luddites, as they were one of
         | the first 19th century movements (among many) that sought to
         | fight the mechanization and industrialization of their labor,
         | and were repressed by military might because god forbid someone
         | sabotage a machine.
         | 
         | Many "innovations" were bought at the beginning not for real
         | use, but in order to threaten the working class with this new
         | tool, so that they do not ask for more job stability, higher
         | wages, or reduced working hours (e.g. grain harvesters,
         | industrial looms, etc).
         | 
         | And the new technology when used often resulted in a net
         | negative in terms of life expectancy, environmental pollution,
         | danger or physical load, which was then disregarded as a
         | necessary sacrifice that the poor need to make in the
         | inexorable march of progress.
        
         | KineticLensman wrote:
         | > every attempt to slow the adoption of new technology has been
         | a disaster for human welfare
         | 
         | Counter-example: nuclear non-proliferation treaties.
        
           | ativzzz wrote:
           | Maybe if we gave every country nuclear weapons we could stop
           | having proxy wars through poorer/developing countries.
        
       | RcouF1uZ4gsC wrote:
       | >those designed or used in a manner that manipulates human
       | behaviour, opinions or decisions ...causing a person to behave,
       | form an opinion or take a decision to their detriment
       | 
       | Would this mean that A/B testing of news article headlines would
       | be banned if it was powered by software?
        
       | thepangolino wrote:
        | Is this a remake of the PGP debacle? If the GDPR and the so-
        | called EU cookie law are to be taken seriously, then yes, it is.
        
       | sneak wrote:
       | > _The use of AI in the military is exempt, as are systems used
       | by authorities in order to safeguard public security._
       | 
       | I'm fairly sure this excludes nearly 100% of the most risky uses
       | of AI that exist today.
        
       | timme wrote:
       | By the geniuses that brought us ePrivacy Directive and GDPR.
       | Might be cheaper to just tell tech companies they're not welcome
       | in the EU.
        
       | vngzs wrote:
       | > those designed or used in a manner that manipulates human
       | behaviour, opinions or decisions ...causing a person to behave,
       | form an opinion or take a decision to their detriment
       | 
       | This appears to obviously apply to the Facebook wall. You can
       | find a high-profile example of this in [0], but [1] explains how
       | this manipulation, which optimizes "engagement", is built deep
       | into Facebook's design. I think the case that it causes users to
       | form opinions and take decisions to their detriment is obvious,
       | so these new laws should apply. Am I wrong?
       | 
       | [0]:
       | https://www.theguardian.com/technology/2014/jun/29/facebook-...
       | 
       | [1]: https://qz.com/1039910/how-facebooks-news-feed-algorithm-
       | sel...
        
         | enchiridion wrote:
         | Seems like books fit that description.
        
       | sturza wrote:
       | China brings hardware. US brings software. EU brings the
       | regulation.
        
         | YeGoblynQueenne wrote:
         | Or, China brings the matches, US brings the gasoline, EU brings
         | the good sense to keep them apart.
        
       | refraincomment wrote:
       | Europe != Obscure department of the European Commission issuing
       | unsolicited opinions
        
         | anticristi wrote:
          | I beg to differ. "AI" is already rejecting CVs and opaquely
          | keeping people unemployed. People do care about how to use this
          | tool responsibly.
        
           | La1n wrote:
           | I think OP was referring specifically to the HN title.
        
       | bitL wrote:
       | Here we go, it didn't take long... Anyone still thinking ML won't
       | be regulated? ML licenses to purchase GPUs next?
        
       | joe_the_user wrote:
        | The listed examples seem reasonably good.
        | 
        |   * systems which establish priority in the dispatching of
        |     emergency services
        |   * systems determining access to or assigning people to
        |     educational institutes
        |   * recruitment algorithms
        |   * those that evaluate credit worthiness
        |   * those for making individual risk assessments
        |   * crime-predicting algorithms
       | 
       | While I'd also like to see autonomous military devices banned,
        | banning AI that makes opaque life-changing decisions about
       | individuals seems reasonable. We already say that these shouldn't
       | discriminate and we've seen ways AI can allow discrimination
       | through the back door.
        
         | tomp wrote:
         | > we've seen ways AI can allow discrimination through the back
         | door.
         | 
         | Do you have any concrete examples of these, in particular where
         | the use of statistics or AI enables _more_ discrimination than
         | using human decisions?
        
           | _1 wrote:
           | https://weaponsofmathdestructionbook.com/
        
           | YeGoblynQueenne wrote:
           | Why does it have to be "more"?
        
             | tomp wrote:
             | Parent mentioned that AI is used to sneak in discrimination
             | _through the back door_ , implying that discrimination
             | wouldn't be there (or there would be less) without AI.
        
               | YeGoblynQueenne wrote:
               | The OP mentioned "AI that makes opaque life-changing
               | decisions". In that context, "through the back door" was
               | more likely meant in the sense of "without anyone
               | noticing".
               | 
               | It doesn't really matter if there is "less"
               | discrimination without AI. While AI is not there, there
               | is no discrimination from AI. If there is some after
               | introducing AI, then it's a problem with AI.
        
               | nix0n wrote:
               | Here's an example: mortgages (in the USA) used to be
               | approved or denied by humans, but there were certain
               | neighborhoods where only white people were allowed.
               | 
               | Now, there's a law against that.
               | 
               | In the future, there will be an AI system to approve or
                | deny mortgages, based on historical training data.
               | Since that data includes the redlining era, the AI will
               | learn to make racist decisions.
               | 
               | Most people do not understand how it is possible for a
               | computer to be racist. (Other than against all humans
               | like in Terminator 2.) This is why it's "through the back
               | door", because it's not obvious how it's possible or
               | where it's coming from.
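                | 
                | A toy version of that mechanism in Python (hypothetical
                | zip codes and records; the "model" is just a per-
                | neighborhood approval rate): race never appears as an
                | input, yet the historical denials are reproduced.
                | 
                |     # Hypothetical records: (zip_code, approved)
                |     history = [("10001", 1), ("10001", 1), ("10001", 0),
                |                ("10002", 0), ("10002", 0), ("10002", 1)]
                | 
                |     rates = {}
                |     for zip_code, approved in history:
                |         rates.setdefault(zip_code, []).append(approved)
                | 
                |     def approve(zip_code):
                |         past = rates[zip_code]
                |         # Inherits whatever bias produced the past data.
                |         return sum(past) / len(past) >= 0.5
                | 
                |     print(approve("10001"), approve("10002"))  # True False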
        
           | someguy321 wrote:
           | One I can think of off the top of my head (statistics, not
           | AI, although AI would also allow it) is that the actuarial
           | calculations for home/car insurance quotes rely on risk data
           | by zip code, education level, income, and any and all other
           | socioeconomic variables not including protected class, but
           | which often correlate/group by protected class, and which are
           | also reliable indicators of risk.
           | 
           | Depending on who you talk to these algorithms either are or
           | are not discriminating against protected classes "through the
           | back door".
        
             | tomp wrote:
             | Sure but my point is that, while you could argue that
              | decisions about some topics could be discriminatory _by
              | definition_, that has nothing to do with AI (and saying
             | that AI is at fault is pure anti-AI FUD).
        
             | jariel wrote:
             | Yes - and you've just proven the folly of this entire
             | exercise -> those algorithms have nothing to do with AI!
             | 
             | If the government believes that credit risk systems cannot
             | use 'race' as a factor, then they ought to reaffirm that.
             | 
             | They shouldn't be restricting broad use of a technology.
        
           | tablespoon wrote:
           | >>> we've seen ways AI can allow discrimination through the
           | back door.
           | 
           | >> we've seen ways AI can allow discrimination through the
           | back door.
           | 
           | > Do you have any concrete examples of these, in particular
           | where the use of statistics or AI enables more discrimination
           | than using human decisions?
           | 
           | Here's a concrete example:
           | 
           | https://towardsdatascience.com/racist-data-human-bias-is-
           | inf...
           | 
           | > An AI program called COMPAS has been used by a Wisconsin
           | court to predict the likelihood that convicts will reoffend.
           | An investigative piece by ProPublica last year found that
           | this risk assessment system was biased against black
           | prisoners, incorrectly flagging them as being more likely to
           | reoffend than white prisoners (45% to 24% respectively).
           | These predictions have led to defendants being handed longer
           | sentences, as in the case of Wisconsin v. Loomis.
           | 
           | Also, I'd dispute your injection of the "more racist than
           | humans" framing (which is also moving the goalposts a bit).
           | The problem with racist algorithms isn't necessarily their
           | "degree of racism" but the fact that they mask very real
           | racism behind a veneer of false computerized "objectivity."
        
           | alexvoda wrote:
           | It's not that it enables more, it's that the AI is harder to
           | fight and easier to excuse.
           | 
           | Just like when a clerk tells you "The computer won't let me."
        
         | ArkanExplorer wrote:
         | Presumably politicians fear an independent AI that would make
         | technically correct but politically incorrect decisions in all
         | of those categories.
        
         | neilparikh wrote:
         | I think the tradeoff is that at least the AI discrimination is
          | systematized, and there's one place you can manipulate to reduce
         | that discrimination, while with pre-AI human discrimination,
         | it's not at one place, so it's harder to eliminate.
         | 
         | As an example, it's the difference between being rejected by a
         | central agent for a loan, versus going to your local branch,
         | and being rejected by a random employee at the local branch.
         | It's obviously much easier to change the central agent than it
         | is to change every distributed employee.
         | 
         | Now, whether this is actually the case in practice, and whether
         | this is a good or bad thing is open to interpretation.
        
           | iR5ugfXGAE wrote:
           | *It's obviously much easier to change the random employee
           | than it is to change the central agent.
        
           | joe_the_user wrote:
           | " _I think the tradeoff is that at least the AI
           | discrimination is systemized_ "
           | 
           | What does systematized mean in this context? The specific
           | problem is that modern deep learning systems are
           | _unsystematic_ - they heuristically determine a result-
           | procedure based on some goodness measure and this result-
           | procedure is a black box.
           | 
           | You already have criteria-based algorithms for things like
           | loans - the individual employees aren't making arbitrary
           | decisions or just pen-and-paper calculations. You have a
            | central algorithm now in a given bank, one that can be looked
            | at and understood. The question is whether to go _from that_
            | to an opaque, "trained" algorithm whose criteria can't be
            | analyzed directly.
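            | 
            | The contrast, sketched in Python (the rule, the weights, and
            | the thresholds are invented for illustration): the criteria-
            | based scorer can be audited line by line, while the trained
            | one's reasons live in opaque learned weights.
            | 
            |     # Criteria-based: every factor and threshold is legible.
            |     def rule_based_approval(income, debt):
            |         return income > 30_000 and debt / income < 0.4
            | 
            |     # Trained: behaviour is fixed by fitted weights; asking
            |     # "why?" means interpreting numbers, not reading policy.
            |     w = [0.8, -1.3]  # produced by training, not by policy
            |     def learned_approval(income, debt):
            |         score = w[0] * income / 1e5 + w[1] * debt / 1e5
            |         return score > 0.1
            | 
            | A two-weight linear model is still barely readable; a deep
            | network with millions of weights is not readable at all,
            | which is the black-box problem described above.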
        
           | extropy wrote:
            | As far as I can tell the law does not prohibit algorithm-
            | assisted decision making. So as long as there is a human
            | rendering the final decision, we are good. Which seems to be a
            | reasonable balance IMO.
        
       | tppiotrowski wrote:
       | I thought this article would be about software in heavy machinery
       | like self driving cars but it's more aimed at applications of AI
       | that are incompatible with human rights: social scores,
       | surveillance, crime-prediction, etc.
        
         | dalbasal wrote:
          | This is the problem: _they_ don't really understand what
         | they're trying to regulate. A lot of it is (a) data-privacy
         | issues or (b) using data to make automated decisions. The "AI"
         | part is superfluous, as far as I can see.
         | 
         | Lawmakers appear as caught up in labeling stuff "AI" as
         | investors. It's going to make them less effective by letting
         | them avoid actually defining what they're trying to prevent.
         | 
         | Consider:
         | 
         | " _those (AIs) designed or used in a manner that manipulates
         | human behaviour, opinions or decisions ...causing a person to
         | behave, form an opinion or take a decision to their detriment_
         | "
         | 
         | It's clearly about advertising and social media. If you want
         | regulation to be effective, specifics are good. Platitudes
         | don't make good regulations.
        
       | zitterbewegung wrote:
        | This just feels like a play for large tech companies to be
        | regulated out of the European market.
        
       | throwawaysea wrote:
       | A restriction on computation and processing of information seems
       | like a restriction on speech, expression, and thought. The list
       | named in this article is just bizarre. For example it mentions
       | that the following would be covered by the proposed policy:
       | 
       | > those designed or used in a manner that manipulates human
       | behaviour, opinions or decisions ...causing a person to behave,
       | form an opinion or take a decision to their detriment
       | 
       | Can't all of marketing, politics, and activism be constructed to
       | fall under this broad statement? It feels to me like this
       | unfairly allows only certain means to achieving the same ends,
       | which ends up favoring certain segments of society at the expense
       | of others. As an example, what makes shaping political opinions
       | using AI inappropriate but shaping it via disruptive protesting
        | appropriate? A person with few responsibilities and enough time
        | to spend protesting is allowed to influence society, and someone
        | who wants to do the same through a different means that makes
        | more sense for them isn't permitted to do so? Similarly, credit
       | worthiness and crime risk assessment are plainly _logical_ ways
       | for individuals, corporations, and governments to contain risks,
       | incentivize the correct behavior, and make smart decisions for
       | themselves. Getting rid of credit scoring is equivalent to income
       | redistribution, since less risky individuals will be forced to
       | subsidize others.
       | 
       | I don't think blanket regulation like this is the answer. The
       | answer lies in ensuring healthy markets with sufficient
       | competition (enforcing anti-trust law), in relying on federalism
       | so that local governments can decide which technologies they want
       | to use or not use, and in privacy controls for users to retain
       | control of their data. Not in restricting math.
        
       | goatcode wrote:
       | >The use of AI in the military is exempt, as are systems used by
       | authorities in order to safeguard public security.
       | 
       | Of course.
        
         | modeless wrote:
         | The problem with government regulation of AI is that the
         | biggest threat is government use of AI. Rules for thee but not
         | for me!
        
           | goatcode wrote:
           | Any governing body seems to leave a lot of room for "except
           | for pigs" when they write down their rules. The bigger the
           | body, the more potential pigs there are.
        
             | brobdingnagians wrote:
             | Your use of "pigs" reminds me of: "All animals are equal,
             | but some animals are more equal than others"
        
               | gramakri wrote:
               | It's meant to remind you of that quote :) because the
                | rules are made by pigs in the book Animal Farm. Your
                | quote comes from the same book.
        
           | geraneum wrote:
            | Government (ab)use of AI is of course a serious threat.
            | But I'd say big corporations' abuse of AI is even worse.
           | 
           | Assuming we are talking about a democratic state, at least
           | there are some checks and balances on governments whereas
           | people cannot elect a FAANG CEO or go to a '.gov' website to
           | read a transcript of board meetings.
           | 
           | Edit: I am by no means advocating for government's use of AI
           | in any form.
        
             | trutannus wrote:
             | A business can't lock you in a prison cell. I'd argue that
             | the checks and balances at this point are little more than
             | a mirage. Ruling by fiat is becoming more common,
             | accountability less so. Government use of AI is far more
             | menacing to me than a business using it.
        
               | KineticLensman wrote:
               | > A business can't lock you in a prison cell
               | 
               | Perhaps not, but companies can deny you access to
               | fundamental electronic infrastructure, use of which is
               | increasingly essential in a cashless society where
               | services are online or non-existent. With no right of
               | appeal.
        
               | cheschire wrote:
               | This is needlessly pedantic, but private prisons are
               | absolutely a thing in America.
        
               | jaredsohn wrote:
               | The part that matters is what makes a person required to
               | be in prison (i.e. government law / judicial system) and
               | not who runs the prison.
        
               | trutannus wrote:
               | Of course, but the person who assigns you to that prison
               | is still acting on behalf of the state. It would be
               | correct to say that a business can keep you in a prison
               | though!
        
               | bhupy wrote:
               | Although private prisons exist (in strikingly small
               | numbers[1]), private corporations can't just decide to
               | throw you into a private prison on a whim; the government
               | decides that.
               | 
               | [1]
               | https://www.sentencingproject.org/publications/private-
               | priso...
        
               | La1n wrote:
               | https://en.m.wikipedia.org/wiki/Kids_for_cash_scandal
               | 
               | Private prison kickbacks for conviction are a thing.
        
               | bhupy wrote:
                | They're not _presently_ a thing; it's just a scandal
               | that happened at one time in history. It's also
                | corruption, and corruption exists in both the private
                | and public sectors. In the case of the Kids for Cash
               | scandal, there have since been lawsuits, overturned
               | adjudications, and commissions to ensure that it doesn't
               | happen again.
               | 
               | Solving corruption is orthogonal to the question of
               | whether private corporations can perform extra-judicial
               | imprisonment with impunity. That really just doesn't
               | happen at scale, because it can't.
        
               | La1n wrote:
               | >just a scandal that happened at one time in history.
               | 
                | I really hope that's true, but it might also be that it
                | only led to convictions once while it actually happens
                | more often.
        
               | bhupy wrote:
               | Oh, I absolutely do not doubt that it could happen again
               | more often, but at least we have Racketeering laws and a
               | system to essentially minimize the degree to which it
               | happens _with impunity_.
        
               | joe_the_user wrote:
               | _Ruling by fiat is becoming more common, accountability
               | less so._
               | 
               | Well, PG&E cut down a whole bunch of trees in my town
               | just recently, against the objections of the locals and
                | the city council. And Judge Alsop, who supervises their
                | bankruptcy, chided them for this slap-dash, crude effort
                | to show they were doing something (didn't stop them,
                | darn it).
               | 
               | So you see a bunch of actions that _look_ like the state
               | or industry acting by fiat. But it only looks that way.
               | The many institutions of this society are at loggerheads
               | with each other, the parties are in gridlock, etc. The
                | main thing is they've shut out the average person from
               | their debates - which is a bit different.
        
               | trutannus wrote:
               | > look like the state or industry acting by fiat
               | 
                | The propensity of leaders to rule by executive order or
               | cabinet bill is what I'm getting at. Those sorts of
               | actions are most certainly ruling by fiat.
        
               | crispyambulance wrote:
               | > Government use of AI is far more menacing to me than a
               | business using it.
               | 
               | I think both are equally menacing. The problem with
               | business usage is that we're "trusting" them to be good
               | stewards of that capability.
               | 
               | There's not much preventing a business from abusing such
               | power in a covert anti-competitive anti-consumer fashion,
               | or worse, selling access to that power to the highest
               | bidder (as a service!).
        
             | neilparikh wrote:
              | Note that these checks and balances don't apply to non-
              | citizens of the country, who are the people affected by
              | the use of AI in the military (one of the exemptions
              | listed above). If an EU member state abuses AI in the
              | military against a non-European, what direct recourse do
              | they have?
        
           | joe_the_user wrote:
           | Government does regulate itself, sometimes. The proposal
           | seems to be concerned with regulating the government, for
           | example limiting "crime prediction". Also, the private
           | institutions it's talking about are things like credit
           | bureaus and employers large enough to use AI for screening
           | employees.
        
             | goatcode wrote:
             | >for example limiting "crime prediction"
             | 
              | I saw that too, and ended up wondering if it'll be ignored
              | whenever crime prediction is seen to fall under the
              | "public safety" exception, from time to time (or
              | eventually, altogether). That's the problem with vague
              | things like "public safety" being tied to regulation, imo.
        
         | systemvoltage wrote:
          | The AI race is in an unstable equilibrium. Slight perturbations
          | of the initial conditions (a small early advantage) will have
          | consequential, exponential, and final implications years later.
        
       | ThomPete wrote:
        | The only outcome of this is that Europe will fall even further
       | behind the US and China. How sad.
        
         | clownpenis_fart wrote:
          | Since there have been and will be exactly zero useful AI
          | applications anytime soon other than bias laundering (aka
         | "systematic discrimination is ok when a computer does it"), I
         | think it's ok.
        
           | Engineering-MD wrote:
           | While I don't necessarily disagree, can you elaborate a bit
            | more? AI is likely to make some significant impacts,
            | especially in computer vision applications, among others.
        
           | ThomPete wrote:
            | And with that approach there won't be any in Europe anytime
            | soon :)
        
           | anticristi wrote:
            | I share your concern: train an AI to reject CVs with "Ahmed"
            | and accept CVs with "Smith", then blame the racial bias on
            | the difficulty of explaining DNNs.
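            | 
            | A toy illustration of that failure mode (fabricated data;
            | the "training" is a stand-in for fitting a real model):
            | 
            |     from collections import Counter
            | 
            |     # Biased historical labels - no explicit "reject Ahmed"
            |     # rule exists anywhere, yet the model will learn one.
            |     history = [
            |         ({"name": "Ahmed", "years": 5}, "reject"),
            |         ({"name": "Smith", "years": 5}, "accept"),
            |         ({"name": "Ahmed", "years": 8}, "reject"),
            |         ({"name": "Smith", "years": 2}, "accept"),
            |     ]
            | 
            |     def train(history):
            |         # Memorise the label most associated with each name;
            |         # a DNN does the same thing statistically.
            |         by_name = {}
            |         for cv, label in history:
            |             counts = by_name.setdefault(cv["name"], Counter())
            |             counts[label] += 1
            |         def predict(cv):
            |             return by_name[cv["name"]].most_common(1)[0][0]
            |         return predict
            | 
            |     model = train(history)
            |     print(model({"name": "Ahmed", "years": 10}))  # reject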
        
       | FredPret wrote:
       | Big organization seeks to dam water with sieve, says "this time
       | will be different"
        
       | roomey wrote:
        | It's been in Irish law for some time (which makes it applicable
        | to the majority of US tech giants). Unfortunately Ireland isn't
        | HQ for a lot of the big finance and insurance firms, which would
        | probably be more useful in this situation. Anyway, here is the
        | latest version of the law. TL;DR: if it impacts you, it should
        | require a human decision and be appealable. Also there is a
        | right to review code, but I'm not sure how that works:
       | 
       | the right of a data subject not to be subject to a decision based
       | solely on automated processing, including profiling, which
       | produces legal effects concerning him or her or similarly
       | significantly affects him or her shall, in addition to the
       | grounds identified in Article 22(2)(a) and (c), not apply where--
       | 
       | (a) the decision is authorised or required by or under an
       | enactment, and
       | 
       | (b) either--
       | 
       | (i) the effect of that decision is to grant a request of the data
       | subject, or
       | 
       | (ii) in all other cases (where subparagraph (i) is not
       | applicable), adequate steps have been taken by the controller to
       | safeguard the legitimate interests of the data subject which
       | steps shall include the making of arrangements to enable him or
       | her to--
       | 
       | (I) make representations to the controller in relation to the
       | decision,
       | 
       | (II) request human intervention in the decision-making process,
       | 
       | (III) request to appeal the decision.
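        | 
        | Read as pseudocode, that subsection nests roughly like this (a
        | loose boolean paraphrase for readability, not a legal
        | interpretation):
        | 
        |     def right_applies(solely_automated, significant_effect,
        |                       authorised, grants_request,
        |                       representations, intervention, appeal):
        |         # The right is only engaged for solely-automated
        |         # decisions with legal/similarly significant effects.
        |         if not (solely_automated and significant_effect):
        |             return False
        |         # Carve-out: (a) authorised by an enactment, AND (b)
        |         # either (i) the decision grants the data subject's
        |         # request, or (ii) safeguards (I)-(III) all in place.
        |         safeguards = representations and intervention and appeal
        |         if authorised and (grants_request or safeguards):
        |             return False
        |         return True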
        
       | intricatedetail wrote:
       | I'll vote for any party that will make tracking illegal. I am
        | sick and tired of being stalked by all those multibillion-dollar
        | corporations that don't even give anything back. They are a
        | cancer on society.
        
       | JoeAltmaier wrote:
       | We will have to face everything everybody thinks up. Legislation
       | is whistling in the dark (pissing in the wind; closing the barn
        | door...). It's too easy to create and deploy these things. Anybody
       | who finds a reason to do so, will.
       | 
        | It will take social changes of some kind to adapt to this new
       | reality. Not draconian laws.
        
       ___________________________________________________________________
       (page generated 2021-04-14 23:02 UTC)