[HN Gopher] Shane Legg: Machine Super Intelligence (2008) [pdf]
       ___________________________________________________________________
        
       Shane Legg: Machine Super Intelligence (2008) [pdf]
        
       Author : Anon84
       Score  : 50 points
       Date   : 2024-05-09 16:19 UTC (6 hours ago)
        
 (HTM) web link (www.vetta.org)
 (TXT) w3m dump (www.vetta.org)
        
       | nmwnmw wrote:
       | Author is Shane Legg, cofounder of DeepMind.
        
       | bamboozled wrote:
       | Would be good if a super intelligence helped them fix their HTTPS
       | configuration.
        
         | blueboo wrote:
         | even AGI has its limits.
        
       | dvaun wrote:
       | https://archive.is/U2rz5
        
         | nico wrote:
         | This is just a screenshot of a pdf reader showing the first
         | page of the pdf
        
           | dvaun wrote:
           | Jeez, you're right. Didn't catch that
        
       | dsubburam wrote:
       | His 2008 Ph.D. dissertation. Could probably use a [2008] tag on
       | the title.
        
       | nico wrote:
        | Link without SSL (note the http at the beginning):
       | 
       | http://www.vetta.org/documents/Machine_Super_Intelligence.pd...
        
       | ramoz wrote:
       | From: Is building intelligent machines a good idea?
       | 
       | > If one accepts that the impact of truly intelligent machines is
       | likely to be profound, and that there is at least a small
       | probability of this happening in the foreseeable future, it is
       | only prudent to try to prepare for this in advance. If we wait
       | until it seems very likely that intelligent machines will soon
       | appear, it will be too late to thoroughly discuss and contemplate
       | the issues involved.
       | 
       | So, are we too late?
        
         | NoMoreNicksLeft wrote:
         | > So, are we too late?
         | 
          | If we're not too late, then surely we're waiting until the
          | last possible moment. There's a Fermi Paradox hanging over our
         | heads, and all we hear from the LLM crowd is "you're being
         | silly, there's nothing to worry about here".
        
           | landryraccoon wrote:
           | I don't get how AI can be a possible solution to the Fermi
           | Paradox.
           | 
           | If an AI is intelligent enough and capable enough to displace
           | a biological species, then the paradox remains. The question
           | just becomes, why hasn't the galaxy already been colonized by
           | robots instead of biological organisms?
        
             | happypumpkin wrote:
              | Maybe the species that create advanced AI use it to wipe
              | themselves out before the AI is fully autonomous and self-
              | sustaining? Presumably "help an ~average-intelligence but
              | crazy person make a bioweapon" comes before AI capable of
              | sustaining itself and colonizing the galaxy?
        
               | landryraccoon wrote:
               | The scenario you described is just suicide. An AI that is
               | acting on behalf of a controller and has no ability to
                | make autonomous decisions is just a tool. To me that's
                | conceptually no different from a race destroying itself
                | with nuclear weapons, just with nukes swapped for some
                | sort of drone or automated weapon. It wouldn't be AGI.
        
               | happypumpkin wrote:
                | I agree. I'm not saying it would be AGI, just that it
                | would make AI a solution to the Fermi Paradox.
        
             | user90131313 wrote:
              | Maybe some other galaxy was already colonized and died out,
              | or its light just hasn't reached us yet. Or we can't detect
              | it with our toys. The time we've spent looking for real
              | things in the universe is literally nothing compared to
              | billions of years.
        
             | dinosaurdynasty wrote:
             | https://grabbyaliens.com/ is a solution to this.
             | 
              | Basically: if they had already, we wouldn't be here, so we
              | must exist before the universe gets colonized. (They also
              | colonize fast, at ~0.3c, so you don't see them coming.)
        
             | wslh wrote:
              | Why hasn't AI discovered a time machine?
        
           | anothernewdude wrote:
              | Fermi isn't a paradox; the energy requirements for
              | interstellar travel are simply too high.
           | 
           | LLMs are pathetic.
        
             | NoMoreNicksLeft wrote:
             | Nuclear saltwater rockets seem pretty feasible to me. There
             | won't be any Star Trekking going on, but hitting the next
             | stars 4-5ly out doesn't seem completely out of the realm of
              | possibility. Our biology's a little screwed, but even on
              | Earth there are organisms with the right lifespan and
              | fertility to colonize whatever habitable worlds they
              | found.
             | 
             | > LLMs are pathetic.
             | 
              | Perhaps. But is there anyone here who believes that if we
              | do eventually come up with an artificial mind, LLMs won't
              | be at the very least a component of such an achievement?
              | Insufficient on their own, but likely necessary.
        
       | sjkoelle wrote:
        | Striking how closely I. J. Good's 1965 conception of machine
        | superintelligence is matched today:
        | http://incompleteideas.net/papers/Good65ultraintelligent.pdf
        
       ___________________________________________________________________
       (page generated 2024-05-09 23:00 UTC)