[HN Gopher] The Original AI Doomer: Dr. Norbert Weiner
       ___________________________________________________________________
        
       The Original AI Doomer: Dr. Norbert Weiner
        
       Author : headalgorithm
       Score  : 46 points
       Date   : 2023-06-03 16:45 UTC (6 hours ago)
        
 (HTM) web link (newsletter.pessimistsarchive.org)
 (TXT) w3m dump (newsletter.pessimistsarchive.org)
        
       | 082349872349872 wrote:
       | See https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings
        
       | mehh wrote:
       | And in the UK in the late nineties we had Kevin Warwick also
       | predicting AI woes lay ahead.
        
       | bmitc wrote:
        | This makes it sound like Norbert Wiener was just doom and
        | gloom and was wrong. Well, he wasn't.
       | 
       | Norbert Wiener, Neil Postman, Lewis Mumford, Ross Ashby, Stafford
       | Beer, Douglas Engelbart, and others were all correct. Society's
        | acceptance of technological capture and the replacement of
        | humans (or attempts at it) is damaging, and that we should
        | instead use technology to augment and serve humans rather than
        | the other way around.
        
         | simonh wrote:
          | So far the history of automation technologies is that they
          | both augment human capabilities and replace them. Automated
          | looms replaced manual weavers, but the result was much
          | cheaper, better-quality cloth, which massively increased
          | demand for cloth. Information technology is the same: it does
          | what human mathematicians used to do, but it has massively
          | increased demand for information processing services.
         | 
         | So I don't think AI technologies we have now, or plausibly have
         | in the next few decades minimum, look like they will have
         | materially novel effects.
        
           | bmitc wrote:
           | I would still say that it is the replacement and attempts at
           | replacing that are harmful.
           | 
           | Good examples of this are automated customer "service" and
           | "self"-driving technologies. The latter has done nothing but
           | spin its wheels by burning off R&D dollars and time while
           | killing people along the way. Eventually, people will realize
           | that a much better goal is to simply assist drivers in more
           | effective ways and to pour the resources into better urban
            | design and non-automotive transportation. Unfortunately,
            | that's happening not because people have realized this but
            | because they have realized that self-driving is a pipe dream
            | of solving intractable societal and technological problems.
        
             | deadlast2 wrote:
             | I remember when the internet started I was telling a friend
              | of mine that she should look for a new career. She worked
              | as a travel agent. I was convinced the internet would
              | replace all those jobs. You know, she still works as a
              | travel agent.
        
               | klipt wrote:
               | I assumed the ratio of travel agents to travelers has
               | gone down since the advent of flight search engines, is
               | that not true?
               | 
               | I certainly haven't used travel agents since I became
               | aware of Google Flight Search etc
               | 
               | But maybe older people / business travelers still use
               | them for convenience?
        
           | visarga wrote:
           | > So I don't think AI technologies we have now, or plausibly
           | have in the next few decades minimum, look like they will
           | have materially novel effects.
           | 
            | Anything you can do with ChatGPT-4 today, you can also do
            | with Google Search and a little more, or maybe less, work.
        
             | add-sub-mul-div wrote:
             | A search also lets you do it with agency, transparency, and
             | skepticism. You can evaluate the trustworthiness of a
             | source given its actual context rather than blindly accept
             | the first opaque answer you're given.
             | 
             | Our practice of judgment and strategy in learning and
             | finding answers is critical. We're not encyclopedias.
        
         | hinkley wrote:
          | Wiener is credited with the concept of cybernetics. Yeah,
          | this is a grotesquely bullshit title.
        
       | mkoubaa wrote:
        | I'm far more worried about AS: Artificial Stupidity.
        
       | Barrin92 wrote:
       | There's also the famous Minsky statement that, paraphrasing, AI
       | could pretty much be figured out over a solid summer of work.
       | Appeals to authority on these questions are really annoying
       | because people who work their entire lives on a thing are
       | incredibly prone to think it's the most important thing in the
        | world, and that this is the most exceptional time right now.
        | Just statistically, that's wrong most of the time.
       | 
       | Another example was Hinton, who according to a recent Wired
       | interview, went down his AI doomer spiral after seeing PaLM
       | explain to him why a joke is funny. That is such a bizarre
       | statement it honestly makes me retroactively question some of
       | these people's technical credentials.
        
         | simonh wrote:
         | On the present being an exceptional time, it really is.
         | Technological advances in the last few hundred years have
         | changed the human condition in developed countries to almost
         | unrecognisable levels. I saw this in my grandparents. When I
         | talked to them about what my life was like, much of it made no
         | sense to them. When I showed them an iPad and video called my
         | wife and kids in China, it took a while for it to sink in that
          | this was a live two-way video feed to another continent. I'm
          | convinced that for quite some time they were worried they were
          | being tricked. I saw them experience real, disorienting future
          | shock multiple times.
         | 
         | So in many ways now really is exceptional historically, and
         | this has been the case each generation for the last few
         | generations. It's likely this will continue to be true. The
         | world my grandchildren grow up in might well be even more
         | exceptional again.
         | 
         | As for Hinton, he knows what he is talking about. A.I.
         | alignment is a fiendishly difficult problem. A lot of alignment
         | pathologies that were once theoretical are proving to be real
         | and difficult to avoid phenomena in LLMs. This isn't vague
         | poorly specified paranoia. There are a lot of very real,
         | verified failure modes for A.I. and for some of them we
         | genuinely have no real idea at all how to properly address
          | them. The reason Hinton changed course is that previously he
          | didn't think we were anywhere near the point where such
          | problems could pose actual dangers, but the level of
          | advancement in the last few years has been so rapid that he's
          | changed his assessment.
         | 
          | I highly recommend Robert Miles' A.I. Safety channel on
          | YouTube. He has a lot of very good introductory videos on many
         | issues in A.I. safety.
        
         | version_five wrote:
         | A big aspect of the problem is what you might call celebrity
         | worship. People think that because someone is accomplished in
         | an area, we should care what they have to say in another. And
         | AI as in stats and linear algebra has nothing to do with any of
         | the stuff about societal implications or philosophy. So you get
         | a mixed bag of views that basically parallel what laypeople
         | might think, because you're not talking to these people about
         | stuff they're experts in.
         | 
          | 1000 years ago, an expert in materials and chemicals (say)
          | could build a mirror but still might believe it's a window
          | into the soul or something. The supernatural and practical
          | parts have
         | nothing to do with each other and don't require overlapping
         | expertise.
        
       | mdp2021 wrote:
       | In the first edition of _The Human Use of Human Beings -
       | Cybernetics and Society_ (from Wiener), you read:
       | 
       | > _the purpose of this book is both to explain the potentialities
       | of the machine in fields which up to now have been taken to be
       | purely human, and to warn against the dangers of a purely selfish
       | exploitation of these possibilities in a world in which to human
       | beings human things are all-important_
       | 
        | which sounds more substantial than the submitted piece: it does
        | not yet contain arguments, but at least it points in a positive,
        | non-trivial direction.
       | 
       | The value of the submitted piece is mostly in linking to the
       | article "Some Moral and Technical Consequences of Automation" -
       | https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/moral.pdf
       | 
       | (Plus the quoted idea that <<Complete subservience and complete
       | intelligence do not go together>>.)
       | 
        | Discussion should be based on this material (at least). The
        | idea that "technology is risky" (not that the article is at
        | this level of generality, but it is close), as Jürgen
        | Schmidhuber joked, is as old as the discovery of fire.
        
         | mdp2021 wrote:
         | For example, more proper content in the original article:
         | 
         | > _Machines act far more rapidly than human beings [...] even
          | when machines do not in any way transcend man's intelligence,
         | they very well may, and often do, transcend man in the
         | performance of tasks. An intelligent understanding of their
         | mode of performance may be delayed until long after the task
         | which they have been set has been completed. This means that
         | though machines are theoretically subject to human criticism,
         | such criticism may be ineffective until long after it is
         | relevant_
         | 
         | > _In determining policy in chess there are several different
         | levels of consideration which correspond in a certain way to
         | the different logical types of Bertrand Russell. There is the
         | level of tactics, the level of strategy, the level of the
         | general considerations which should have been weighed in
         | determining this strategy, the level in which the length of the
         | relevant past - the past within which these considerations may
         | be valid - is taken into account, and so on. Each new level
         | demands a study of a much larger past than the previous one
         | [...] The programming of such a learning machine would have to
         | be based on some sort of war game, just as commanders and staff
         | officials now learn an important part of the art of strategy in
         | a similar manner. Here, however, if the rules for victory in a
         | war game do not correspond to what we actually wish for our
         | country, it is more than likely that such a machine may produce
         | a policy which would win a nominal victory on points at the
         | cost of every interest we have at heart, even that of national
         | survival_
         | 
         | > _Complete subservience and complete intelligence do not go
         | together. How often in ancient times the clever Greek
         | philosopher slave of a less intelligent Roman slaveholder must
         | have dominated the actions of his master rather than obeyed his
         | wishes! Similarly, if the machines become more and more
         | efficient and operate at a higher and higher psychological
         | level..._
         | 
         | > _Disastrous results are to be expected not merely in the
         | world of fairy tales [ - the "Sorcerer's Apprentice etc. - ]
         | but in the real world wherever two agencies essentially foreign
         | to each other are coupled in the attempt to achieve a common
         | purpose. If the communication between these two agencies as to
         | the nature of this purpose is incomplete, it must only be
         | expected that the results of this cooperation will be
         | unsatisfactory. If we use, to achieve our purposes, a
         | mechanical agency with whose operation we cannot efficiently
         | interfere once we have started it, because the action is so
         | fast and irrevocable that we have not the data to intervene
         | before the action is complete, then we had better be quite sure
         | that the purpose put into the machine is the purpose which we
         | really desire and not merely a colorful imitation of it_
         | 
         | > _[Instead of rushing] ahead to employ the new powers for
         | action which are opened up to us ... we must always exert the
         | full strength of our imagination to examine where the full use
         | of our new modalities may lead us_
        
       | akomtu wrote:
       | Here is an updated scenario for Doom 5. The hell - a highly
       | advanced civilization of monsters - wants to invade Earth. The
       | only way to do so is to convince humanity to build a portal on
        | their side. Unfortunately (or fortunately), humans are nowhere
        | near smart enough to build such a thing, and the hell
        | scientists use a
       | low bandwidth quantum channel, the only channel to Earth they
       | have, to steer human scientists in the desired direction.
       | Eventually those succeed in creating an advanced imitation
       | machine, which humans quickly call AI, and the hell scientists
       | get to connect to AI over their quantum channel. Human scientists
       | don't notice anything because their super advanced quantum RNG
       | functions within spec. That's enough to make AI design the portal
       | that humans rush to build, as those believe it's a gate to other
        | planets. And it kind of is. Once the portal is built, the hell
        | invades. Humanity splits: some want to fight the invaders,
       | some want to talk to them, there are even those who want to
       | integrate those creatures into human society and even talk about
       | mixed marriages. Humanity loses 99% of the population, but
        | ultimately prevails and destroys the portal. What's worse, the
        | remaining crowd, with few exceptions, has receded to the moral
        | level of prehistoric savages. The final scene shows a small
        | tribe of survivors grilling some meat on a GPU chip that was
        | once in the brains of that AI.
        
       | DemocracyFTW2 wrote:
       | Guy on the internet writes a whole piece about a fairly well-
       | known historical figure who lived half a century ago, includes
       | many scans of original news articles, can't be bothered to write
       | out the name correctly. Wiener it is, not Weiner.
        
         | golem14 wrote:
          | The article even contains 6 newspaper clippings where the name
         | is spelled correctly. Infuriating...
        
           | version_five wrote:
           | You're both engaging in some kind of logical fallacy. The
           | author lacks attention to detail but that doesn't mean their
           | content isn't credible. (I'm not saying it is credible, I'm
           | saying credibility is unrelated). I see a lot of this and
           | find it very shallow and uninteresting. It's easy to find
           | typos, it's hard to make substantive criticisms and so people
           | go with what's easy.
           | 
            | Related: I've worked with people, scientists, whom I've
            | asked to provide useful feedback on the scientific aspects
            | of others' work, and who instead have come back with typos
            | and grammar suggestions. This is worse than useless.
        
       | low_tech_love wrote:
       | Sorry but if Eliezer whatever is name-dropped in the first
       | sentence, I pass.
        
       | bitwize wrote:
       | I'd say Mary Shelley or, at the latest, Karel Capek beat Wiener
       | to the punch there.
        
       | ricopags wrote:
       | E.M. Forster's The Machine Stops[0] is probably the most
       | prescient and early tale I've seen foretelling the outcome of
        | Virilio's Pure War and Postman's Amusing Ourselves to Death. It's
       | worth a read.
       | 
       | [0]https://en.wikipedia.org/wiki/The_Machine_Stops
        
         | prolapso wrote:
         | I've been meaning to read it ever since I saw this quote, which
         | I found quite beautiful and sad:
         | 
         |  _The Machine is much, but it is not everything. I see
         | something like you in this plate, but I do not see you. I hear
         | something like you through this telephone, but I do not hear
         | you. That is why I want you to come. Pay me a visit, so that we
         | can meet face to face, and talk about the hopes that are in my
         | mind._
         | 
         | - The Machine Stops (1909)
        
       | [deleted]
        
       | version_five wrote:
        | Two things:
       | 
       | 1. AGI killing us is as relevant now as it was in the 50s
       | 
        | 2. I, Robot predates this. Asimov wasn't a doomer, but he
        | obviously considered the implications of AGI.
        
       | wseqyrku wrote:
       | I'd imagine this was discovered with the help of GPT.
        
       ___________________________________________________________________
       (page generated 2023-06-03 23:00 UTC)