[HN Gopher] Why lifeless AI is not intelligent
       ___________________________________________________________________
        
       Why lifeless AI is not intelligent
        
       Author : HasanYousef
       Score  : 19 points
       Date   : 2021-11-21 14:41 UTC (8 hours ago)
        
 (HTM) web link (bdtechtalks.com)
 (TXT) w3m dump (bdtechtalks.com)
        
       | canjobear wrote:
       | > The AI community usually turns to the brain to get inspiration
       | for algorithms and new directions of research.
       | 
       | ??? This isn't true. The author doesn't seem to have an
       | understanding of modern AI.
        
         | bendee983 wrote:
         | Where does the inspiration for ANN/CNN/RNN come from?
        
           | canjobear wrote:
           | Neural networks originated from coarse-grained analogies to a
           | 1940s understanding of neurons. That's about where the
           | neuroscience connection ended. People have tried to make
           | connections since then, but it's almost always post-hoc.
        
             | bendee983 wrote:
              | If you listen to recent talks by Hinton (capsule
              | networks), LeCun (self-supervised learning), and Bengio
              | (System 2 deep
             | learning), as well as others, you'll find plenty of
             | references to neuroscience, psychology, cognitive science,
             | etc. There are always implementation differences, but the
             | inspiration from brains is always there. The point of the
             | book (which might be wrong, btw) is that the brain itself
             | is an agent of the gene, which has evolved out of the need
              | for better survival mechanisms. The book therefore
              | suggests that anything modeled after the brain is, by
              | extension, an agent of the main source of human
              | intelligence (because it serves the goals of humans)
              | and not intelligent by itself.
        
           | robbedpeter wrote:
            | Hierarchical modeling. Spiking neural nets. "Fire
            | together, wire together." Convolution. Boltzmann
            | machines. Autoencoding. LSTM gating. Attention,
            | transformers, GANs, etc.
           | 
            | GOFAI might not pull inspiration from the brain, but
            | connectionist-style AI, which represents the vast
            | majority of AI being produced and operated, almost
            | exclusively uses brains for inspiration.
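         
        As a toy illustration of the "fire together, wire together"
        rule listed above, here is a minimal Hebbian-update sketch in
        Python with NumPy; the layer sizes and learning rate are
        arbitrary choices for the example, not from any particular
        system:
         
            import numpy as np
         
            rng = np.random.default_rng(0)
         
            # Activity of a tiny pair of layers (sizes arbitrary).
            pre = rng.random(4)   # activations of 4 input neurons
            post = rng.random(3)  # activations of 3 output neurons
         
            W = np.zeros((3, 4))  # synaptic weights, post x pre
            lr = 0.1              # learning rate (arbitrary)
         
            # Hebbian rule: the weight change is proportional to the
            # product of pre- and post-synaptic activity, so neurons
            # that fire together get wired together.
            W += lr * np.outer(post, pre)
         
        Modern deep nets replace this local rule with backpropagation,
        but the brain-derived framing survives in the architectures
        listed above.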
        
       | xenocyon wrote:
       | Is this simply an argument in favor of genetic algorithms or is
       | there more to it than that?
        
       | hmry wrote:
       | Sure, if you define "intelligence" as "solving problems in a
       | variety of environments to accomplish your own goals and self-
       | replicate", then no, modern AI is not intelligent. You have just
       | redefined intelligence so that only living beings can be
       | intelligent.
        
         | Jensson wrote:
          | A computer virus that evolves, spreads, and is too elusive
          | for humans to eliminate would fit that scenario. I think
          | the point is that as long as humans define what the AI
          | should do, it will never be intelligent; it will only
          | become intelligent when we lose control of it.
         | 
          | I think that was his point. I'm not sure I agree with it,
          | but at least it isn't trivially wrong.
        
           | rowanG077 wrote:
            | Isn't that just moving the goalposts to a higher level of
            | abstraction? You could envision an AI whose only
            | instruction is "spread yourself". In that quest it could
            | create
           | a vastly more impressive society of AI agents with their own
           | culture.
           | 
           | Humans have instructions too, which are rooted in evolution
           | and biology. It's not at all clear to me how an AI that
            | follows an instruction must, by definition, be considered
            | unintelligent. That would imply humans are unintelligent.
        
       | pilooch wrote:
        | My take on intelligence over the past 20 years has been that
        | it is high-quality, efficient search of immense state spaces.
       | 
       | "Solving intelligence" as a famous corporation motto, might just
       | be improving state-space search.
       | 
        | Humans are incredible at state-space search; it's obvious as
        | soon as you consider the potential data points of any problem
        | we face every day, from washing dishes to designing
        | algorithms.
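         
        As a concrete, if toy, picture of "efficient search" of a
        state space, here is a Python sketch contrasting brute-force
        enumeration with greedy hill climbing; the bit-string
        objective is invented for the example:
         
            import itertools
         
            # Toy state space: bit strings of length 20, about a
            # million states. Real problems are astronomically larger.
            N = 20
         
            def score(state):
                # Arbitrary objective: reward bits at even positions.
                return sum(b for i, b in enumerate(state) if i % 2 == 0)
         
            # Brute force: enumerate all 2**N states.
            best = max(itertools.product([0, 1], repeat=N), key=score)
         
            # Greedy hill climbing: flip one bit at a time, keeping
            # improvements. Visits on the order of N**2 states.
            state = [0] * N
            improved = True
            while improved:
                improved = False
                for i in range(N):
                    flipped = state.copy()
                    flipped[i] ^= 1
                    if score(flipped) > score(state):
                        state = flipped
                        improved = True
         
            assert score(state) == score(best)
         
        The two searches reach the same optimum here, but only because
        this toy objective has no local maxima; the point is that the
        efficient searcher examines a vanishing fraction of the space.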
        
         | Retric wrote:
          | You can abstract basically anything to state-space search
          | plus optimization. It's too broad a classification to be of
          | much use.
        
           | twofornone wrote:
            | But that's literally what ML training does. Just like
            | humans, neural nets learn heuristics to take advantage of
            | the
           | fact that of all the possible mappings of inputs to outputs,
           | there are actually vanishingly few output states that are
           | valid. Arguably all learning is indeed a reduction of state
           | space, be it by human or machine.
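         
        Read concretely, the claim is that training never enumerates
        the space of parameter settings; gradient descent follows a
        local signal toward the vanishingly small region that fits the
        data. A minimal sketch in Python, with synthetic data,
        dimensions, and step size invented for the example:
         
            import numpy as np
         
            rng = np.random.default_rng(0)
         
            # Tiny task: recover w_true from noisy observations. The
            # parameter space is all of R^5; only a tiny neighborhood
            # of w_true maps inputs to valid outputs.
            w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
            X = rng.normal(size=(100, 5))
            y = X @ w_true + 0.01 * rng.normal(size=100)
         
            w = np.zeros(5)  # start far from the valid region
            lr = 0.05        # step size (arbitrary)
            for _ in range(500):
                grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
                w -= lr * grad  # local step, no enumeration
         
            print(np.round(w, 2))  # ends up close to w_true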
        
       | sgt101 wrote:
       | Junk - not worth reading
        
       | smitty1e wrote:
        | I am struck by a comparison between AI's limitations as
        | captured in the article and the theological concept of
        | angels.
        | 
        | Such a qualitative comparison may offend some HN readers, but
        | it's a useful means to communicate the idea of heavy
        | "brainpower" that has constraints.
        
       ___________________________________________________________________
       (page generated 2021-11-21 23:02 UTC)