[HN Gopher] Machines of loving grace: How AI could transform the...
       ___________________________________________________________________
        
       Machines of loving grace: How AI could transform the world for the
       better
        
       Author : jasondavies
       Score  : 58 points
       Date   : 2024-10-11 20:15 UTC (2 hours ago)
        
 (HTM) web link (darioamodei.com)
 (TXT) w3m dump (darioamodei.com)
        
       | thrance wrote:
        | This is basically the tech CEO's version of the Book of
        | Revelation: "AI will soon come and make everything right with
        | the world; help us and you will be rewarded with a Millennium
        | of bliss in its presence".
       | 
       | I won't comment on the plausibility of what is being said, but
       | regardless, one should beware this type of reasoning. Any action
       | can be justified, if it means bringing about an infinite good.
       | 
       | Relevant read: https://en.wikipedia.org/wiki/Singularitarianism
        
         | HocusLocus wrote:
         | It won't bring about Infinite Good. It'll bring about infinite
         | contentment by diddling the pleasure center in our brains.
          | Because, you know, eventually everything is awarded to and
          | built by the lowest bidder.
        
       | Muromec wrote:
       | Miquella the kind, pure and radiant, he wields love to shrive
       | clean the hearts of men. There is nothing more terrifying.
        
         | throwaway918299 wrote:
         | I beat Consort Radahn before the nerfs.
        
           | talldayo wrote:
           | But did you beat the original Radahn pre-nerf?
        
             | throwaway918299 wrote:
             | The day-1 version with broken hitboxes? Yeah
             | 
             | Consort was harder haha
        
       | KaiserPro wrote:
       | One of the sad things about tech is that nobody really looks at
       | history.
       | 
       | The same kinds of essays were written about trains, planes and
       | nuclear power.
       | 
        | Before Lindbergh went off the deep end, he was convinced that
        | "airmen" were gentlemen who could sort out the world's ills.
       | 
       | The essay contains a lot of coulds, but doesn't touch on the base
       | problem: human nature.
       | 
        | AI will be used to make things cheaper. That means lots of
        | job losses. Most of us are up for the chop if/when competent
        | AI agents become possible.
       | 
        | Loads of service jobs too, along with a load of manual jobs,
        | when suitable large models are successfully applied to
        | robotics (see ECCV for some idea of the progress in machine
        | perception).
       | 
       | But those profits will not be shared. Human productivity has
       | exploded in the last 120 years, yet we are working longer hours
       | for less pay.
       | 
        | Well, AI is going to make that worse. It'll cause huge unrest
        | (see the Luddite riots, Peterloo, the birth of unionism in
        | the US, plus many more).
       | 
       | This brings us to the next thing that AI will be applied to:
       | Murdering people.
       | 
        | Anduril is already marrying basic machine perception with
        | cheap drones and explosives. It's not going to take long to
        | get to personalised explosive drones.
       | 
       | AI isn't the problem, we are.
       | 
        | The sooner we realise that it's not a technical problem to be
        | solved but a human one, the better chance we stand.
       | 
       | But looking at the emotionally stunted, empathy vacuums that
       | control either policy or purse strings, I think it'll take a
       | catastrophe to change course.
        
         | realce wrote:
         | 3 links above this one is
         | https://www.washingtonpost.com/nation/2024/10/08/exoskeleton...
        
         | kranke155 wrote:
         | We are entering a dystopia and people are still writing these
         | wonderful essays about how AI will help us.
         | 
         | Microtargeted psychometrics (Cambridge Analytica, AggregateIQ)
         | have already made politics in the West an unending barrage of
         | information warfare. Now we'll have millions of autonomous
         | agents. At some point soon in the future, our entire feed will
         | be AI content or upvoted by AI or AI manipulating the
         | algorithm.
         | 
         | It's like you said - this essay reads like peak AI. We will
         | never have as much hope and optimism about the next 20 years as
         | we seem to have now.
         | 
          | Reminds me of some graffiti I saw in London, while the
          | city's cost of living was exploding and making the place
          | unaffordable to anyone but a few:
         | 
         | "We live in a Utopia. It's just not ours."
        
         | jimkleiber wrote:
         | > The essay contains a lot of coulds, but doesn't touch on the
         | base problem: human nature.
         | 
         | > AI isn't the problem, we are.
         | 
         | I think when we frame it as human _nature_, then yes, _we_ look
         | like the problem.
         | 
         | But what if we frame it as human _culture_? Then _we_ aren't
         | the problem, but rather our _behaviors/beliefs/knowledge/etc_
         | are.
         | 
         | If we focus on the former, we might just be essentially
         | screwed. If we focus on the latter, we might be able to change
         | things that seem like nature but might be more nurture.
         | 
         | Maybe that's a better framing: the base problem is human
         | nurture?
        
           | laurex wrote:
           | I think this is an important distinction. Yes, humans have
           | some inbuilt weaknesses and proclivities, but humans are not
           | _required_ to live in or develop systems in which those
           | weaknesses and proclivities are constantly exploited for the
           | benefit /power of a few others. Throughout human history,
           | there have been practices of contemplation, recognition of
           | interdependence, and ways of increasing our capacity for
            | compassion and thoughtful response. We are currently in a
           | biological runaway state with extraction, but it's not the
           | only way humans have of behaving.
        
             | exe34 wrote:
             | > Throughout human history, there have been practices of
             | contemplation, recognition of interdependence, and ways of
              | increasing our capacity for compassion and thoughtful
             | response.
             | 
              | Has this ever been widespread in society? I think such
              | people have always been few and far between.
        
               | keyringlight wrote:
               | The example that comes to mind is post-WW2 Germany, but
               | that was apparently a hard slog to change the minds of
                | the German people. I really doubt any organization
                | could do something similar, presenting a viewpoint
                | opposed to the companies (and their resources) that
                | are behind and using AI.
        
           | achrono wrote:
           | Sure. But why do you think changing human nurture is any
           | easier than changing human nature? I suspect that as your set
           | of humans in consideration tends to include the set of _all_
           | humans, the gap between changeability of human nature vs
           | changeability of human nurture reduces to zero.
           | 
           | Perhaps you are implying that we sign up for a global (
           | _truly_ global, not global by the standards of Western
           | journalists) campaign of complete and irrevocable reform in
           | our behavior, beliefs and knowledge. At the very least, this
           | implies simply killing off a huge number of human beings who
           | for whatever reason stand in the way. This is not (just) a
           | hypothesis -- some versions of this have been tried and
           | tested. *
           | 
           | * https://en.wikipedia.org/wiki/Totalitarianism
        
           | tbrownaw wrote:
            | > I think when we frame it as human _nature_, then yes,
            | _we_ look like the problem.
            | 
            | > But what if we frame it as human _culture_? Then _we_
            | aren't the problem, but rather our
            | _behaviors/beliefs/knowledge/etc_ are.
            | 
            | > If we focus on the former, we might just be essentially
            | screwed. If we focus on the latter, we might be able to
            | change things that seem like nature but might be more
            | nurture.
            | 
            | > Maybe that's a better framing: the base problem is
            | human nurture?
           | 
           | This is about the same as saying that leaders can get better
           | outcomes by surrounding themselves with yes-men.
           | 
           | Just because asserting a different set of facts makes the
           | predicted outcomes more desirable, doesn't mean that those
           | alternate facts are better for making predictions with. What
           | matters is how congruent they are to reality.
        
         | swatcoder wrote:
         | > One of the sad things about tech is that nobody really looks
         | at history.
         | 
         | First, while I often write much of the same sentiment about
         | techno-optimism and history, you should remember that you're
         | literally in the den of Silicon Valley startup hackers. It's
         | not going to be an easily heard message here, because the site
         | specifically appeals to people who dream of inspiring exactly
         | these essays.
         | 
          | > The sooner we realise that it's not a technical problem
          | to be solved but a human one, the better chance we stand.
         | 
         | Second... you're falling victim to the same trap, but simply
         | preferring some kind of social or political technology instead
         | of a mechanical or digital one.
         | 
         | What history mostly affirms is that prosperity and ruin come
          | and go, and that nothing we engineer lasts for all that
          | long,
         | let alone forever. There's no point in dreading it, whatever
         | kind of technology you favor or fear.
         | 
          | The bigger concern is that some of the achievements of
         | modernity have made the human future _far_ more brittle than it
         | has been in what may be hundreds of thousands of years. Global
         | homogenization around elaborate technologies -- whether
         | mechanical, digital, social, political or otherwise -- sets us
         | up in a very  "all or nothing" existential space, where ruin,
         | when it eventually arrives, is just as global. Meanwhile, the
         | purge of diverse, locally practiced, traditional wisdom about
          | how to get by in un-modern environments robs the species of
          | its essential fallback strategy.
        
         | mythrwy wrote:
         | But will AI be eventually used to change human nature itself?
        
       | bugglebeetle wrote:
       | The more recent and consistent rule of technological development,
       | " For to those who have, more will be given, and they will have
       | an abundance; but from those who have nothing, even what they
       | have will be taken away."
        
         | kranke155 wrote:
         | blindingly true.
        
       | gyre007 wrote:
        | I think Dario is trying to raise a new round because OpenAI
        | has just done so and will continue to. Nevertheless, the
        | essay makes for some really great reading, and even if a
        | fraction of it comes true, it'll be wonderful.
        
         | lewhoo wrote:
          | So it's bs, but for money, and therefore totally fine? I
          | don't think it's ok even if only a fraction comes true,
          | because some people believe in those things and act on
          | those beliefs right now.
        
           | gyre007 wrote:
           | I didn't say it was bs. I was alluding to the timing of this
           | essay being published but, clearly, I didn't articulate it in
           | my message well. I also don't think everything he says is bs.
           | Some of it I find a bit naive -- but maybe that's ok -- some
           | other things seem a bit like sci-fi, but who are we to say
            | this is impossible? I'm optimistic, but I've also learnt
            | in life that things can improve, sometimes drastically,
            | given the right ingredients.
        
             | lewhoo wrote:
             | Well I don't know. A bit naive, a bit like sci-fi and aimed
             | at raising money fits my description of bs quite well.
        
       | add-sub-mul-div wrote:
       | Social media could have transformed the world for the better, and
       | we can be forgiven for not having foreseen how it would
          | eventually be used against us. It would be stupid to fall
          | for the same thing again.
        
         | bamboozled wrote:
          | I'm sure social media is what's broken politics. Look at
          | people's comments on a Trump YouTube video. You can't
          | believe what people believe.
          | 
          | I guess people fell for Hitler's garbage too, but
          | algorithms just make lies spread with a lot less effort on
          | the liar's part.
        
       | kranke155 wrote:
        | Are Americans really too scared of Marx to admit that AI
        | fundamentally proves his point?
       | 
       | Dario here says "yeah likely the economic system won't work
       | anymore" but he doesn't dare say what comes next: It's obvious
       | some kind of socialist system is inevitable, at least for basic
       | goods and housing. How can you deny that to a person in a post-
       | AGI world where almost no one can produce economic value that
       | beats the ever cheaper AI?
        
         | gyre007 wrote:
          | If, and it is an IF, this does turn out the way he is
          | imagining, the transition to AI will, from an economic
          | point of view, be disastrous for people. That's the
          | scariest part, I think.
        
           | kranke155 wrote:
           | Absolutely it will. And it will be a pure plain dystopia, as
           | clear as in the times of Dickens or Dostoyevsky.
           | 
           | We need to start being honest. We live in Dickensian times.
        
       | spiralpolitik wrote:
       | There are two possible end-states for AI once a threshold is
       | crossed:
       | 
       | The AIs take a look at the state of things and realize the KPIs
       | will improve considerably if homo sapiens are removed from the
       | picture. Cue "The Matrix" or "The Terminator" type future.
       | 
       | OR:
       | 
       | The AIs take a look and decide that keeping homo sapiens around
       | makes things much more fun and interesting. They take over
       | running things in a benevolent manner in collaboration with homo
       | sapiens. At that point we end up with 'The Culture'.
       | 
       | Either end-state is bad for the billionaire/investor/VC class.
       | 
        | In the first you'll be fed into the meat grinder just like
        | everyone else. In the second, the AIs will do a much better
        | job of resource allocation, will perform a decapitation
        | strike on that demographic to capture its resources, and
        | capitalism will be largely extinct from that point onwards.
        
       | HocusLocus wrote:
       | All Watched Over by Machines of Loving Grace ~Richard Brautigan
       | 
       | https://www.youtube.com/watch?v=6zlsCLukG9A
        
       | cs702 wrote:
        | I found the OP to be an earnest, well-written,
        | thought-provoking essay. Thank you for sharing it on HN, and
        | thank you also to Dario Amodei for writing it.
       | 
        | The essay does have one big blind spot, which becomes obvious
        | with a simple exercise: copy the OP's contents into your word
        | processor and replace the words "AI" with "AI controlled by
        | corporations and governments" everywhere in the document.
        | Many of the OP's predictions instantly come across as rather
        | naive and overoptimistic.
       | 
       | Throughout history, human organizations like corporations and
       | governments haven't always behaved nicely.
        
       ___________________________________________________________________
       (page generated 2024-10-11 23:00 UTC)