[HN Gopher] OpenAI Researcher Jason Wei: It's obvious that it wi...
       ___________________________________________________________________
        
       OpenAI Researcher Jason Wei: It's obvious that it will not be a
       "fast takeoff"
        
       Author : s-macke
       Score  : 17 points
       Date   : 2025-06-30 19:26 UTC (3 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | 4ndrewl wrote:
       | Jam tomorrow
        
       | neom wrote:
       | "Finally, maybe this is controversial but ultimately progress in
       | science is bottlenecked by real-world experiments."
       | 
        | I feel like this has been the overwhelming consensus around
        | these halls? I can't count the number of HN comments I've
        | nodded along to around the idea that IRL experiments will
        | become the bottleneck.
        
         | bglazer wrote:
         | This shows just how completely detached from reality this whole
         | "takeoff" narrative is. It's utterly baffling that someone
         | would consider it "controversial" that understanding the world
          | requires *observing the world*.
         | 
         | The hallmark example of this is life extension. There's a not
         | insignificant fraction of very powerful, very wealthy people
         | who think that their machine god is going to read all of reddit
         | and somehow cogitate its way to a cure for ageing. But how
         | would we know if it works? Seriously, how else do we know if
         | our AGI's life extension therapy is working besides just
         | fucking waiting and seeing if people still die? Each iteration
         | will take years (if not decades) just to test.
        
           | neom wrote:
            | Last year I went for a walk with a fairly well-known AI
            | researcher, and I was somewhat shocked that they didn't
            | understand the difference between thoughts, feelings, and
            | emotions. This is what I find interesting about all these
            | top people in AI.
           | 
           | I presume the teams at the frontier labs are
           | interdisciplinary (philosophy, psychology, biology,
           | technology) - however that may be a poor assumption.
        
       | janalsncm wrote:
       | A lot of this is pretty intuitive but I'm glad to hear it from a
        | prestigious researcher. It's a little annoying to hear people
        | quote the opinion of Hinton, the "godfather" of AI, as if
        | there's nothing more we need to know.
       | 
       | On a related note, I think there is a bit of nuance to
       | superintelligence. The following are all notable landmarks on the
       | climb to superintelligence:
       | 
        | 1. At least as good as any human at a single cognitive task.
        | 
        | 2. At least as good as any human at all cognitive tasks.
        | 
        | 3. Better than any human at a single cognitive task.
        | 
        | 4. Better than any individual human at all cognitive tasks.
        | 
        | 5. Better than any group of humans at all cognitive tasks.
       | 
        | We are not at point 4 yet. But even after that point, a group
        | of humans may still outperform the AI.
       | 
        | Why this matters: if part of the "group" is performing
        | empirical experiments to conduct scientific research, an AI on
        | its own won't outperform that group unless it can also perform
        | those experiments or find some way to avoid needing them. This
        | is another way of restating the original Twitter post.
        
       ___________________________________________________________________
       (page generated 2025-06-30 23:01 UTC)