[HN Gopher] The meeting of the minds that launched AI
       ___________________________________________________________________
        
       The meeting of the minds that launched AI
        
       Author : fremden
       Score  : 90 points
       Date   : 2023-09-11 16:40 UTC (5 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | johndhi wrote:
        | My father and his friends were academic computer scientists
        | working on AI back in the '60s. I don't know that there's a
        | straightforward path between what they were doing and the
        | popular LLMs of today, but I do applaud more stories about
        | what old-school comp-sci researchers were up to.
        
         | fnordpiglet wrote:
          | LLMs of today display amazing abductive abilities
          | (inferring a plausible explanation from incomplete
          | evidence, as opposed to deducing guaranteed conclusions or
          | generalizing from examples) but are limited in inductive
          | and deductive abilities, as well as in the optimization
          | techniques of classical AI and algorithms. These abductive
          | abilities are unique and exciting because we've typically
          | done really poorly with ambiguous, complex semantic spaces
          | like this. However, I think the excitement has obscured the
          | fact that it's just one piece of a larger machine. Why do
          | we care that LLMs are mediocre chess players when we have
          | machine models using more traditional techniques that are
          | the best chess players on earth? Why do we care that they
          | fail at deductive reasoning tests? At mathematical
          | calculations? Those are really well understood areas of
          | computing. Somehow people have fixated on the things we've
          | already done that this new technique fails at, but ignore
          | the abilities LLMs and other generative models demonstrate
          | that we've never achieved before. At the same time, the
          | other camp sees generative AI only as the silver bullet
          | tool to end all other tools. Neither is correct.
        
           | og_kalu wrote:
           | >but are limited in inductive and deductive abilities
           | 
           | LLMs are great at induction.
           | 
           | In a broad sense, they are also very good at deduction.
           | 
           | "I define a new word, the podition. A podition is any object
           | that can fit on a podium. Is a computer a podition ? Why ?"
           | 
           | A correct answer is deductive.
           | 
            | LLMs eat these kinds of questions for breakfast. Even the
            | OG 2020 GPT-3 could manage them.
            | 
            | You really have to stretch deduction to heights most
            | people struggle with before they falter in a major way.
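            | 
            | A minimal sketch of posing that test yourself (this
            | assumes the pre-1.0 openai Python client and an
            | OPENAI_API_KEY in the environment; the model choice is
            | just illustrative):
            | 
            |     import openai  # pip install "openai<1.0"
            | 
            |     # The made-up-word deduction test from above,
            |     # verbatim.
            |     PROMPT = (
            |         "I define a new word, the podition. A podition "
            |         "is any object that can fit on a podium. Is a "
            |         "computer a podition? Why?"
            |     )
            | 
            |     # The client reads OPENAI_API_KEY from the
            |     # environment by default.
            |     resp = openai.ChatCompletion.create(
            |         model="gpt-3.5-turbo",  # illustrative choice
            |         messages=[{"role": "user", "content": PROMPT}],
            |     )
            |     print(resp.choices[0].message.content)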
        
           | dr_dshiv wrote:
           | How are LLMs bad at induction? I thought they were great at
            | induction. This paper doesn't go into measurements of it,
            | but it lays out the nature of reasoning well.
           | 
           | https://aclanthology.org/2023.findings-acl.67.pdf#page15
        
             | einpoklum wrote:
              | They are great at saying things that sound like the
              | next line of the conversation. That's a certain kind of
              | induction, for sure, but probably not the kind you're
              | after.
        
         | empath-nirvana wrote:
          | There's some value in planting a flag in the ground. Even
          | if most of the people there were in the symbolic camp, a
          | lot of their critiques of neural networks as they existed
          | then were well-founded, and were only proved obviously
          | _wrong_ after many, many rounds of Moore's law.
        
           | marcosdumay wrote:
            | The criticism from the beginning was of a fundamental
            | theoretical nature, and it died in the '90s when people
            | proved and demonstrated that neural networks were
            | powerful enough to run any kind of computation.
            | 
            | In fact, I don't recall people ever criticizing neural
            | networks for being too small to be useful. There was a
            | lot of disagreement between wide- and deep-network
            | proponents, which deep won by demonstration, but "how
            | large a network do we need to handle X" was always more
            | of a question than a "see, we'll never get there". (Even
            | more so because "we will never get there" is obviously
            | false, since the thing has practically no limit on
            | scaling.)
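            | 
            | (The proofs being alluded to are presumably the universal
            | approximation results, e.g. Cybenko 1989: for any
            | continuous f on a compact set K and any eps > 0, some
            | single-hidden-layer network with a sigmoidal sigma
            | satisfies
            | 
            |     \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} \alpha_i
            |         \, \sigma(w_i^\top x + b_i) \Big| < \epsilon
            | 
            | for some finite N and weights \alpha_i, w_i, b_i. The
            | "any kind of computation" phrasing fits the separate
            | Turing-completeness results for recurrent nets, e.g.
            | Siegelmann and Sontag in the early '90s.)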
        
             | [deleted]
        
       | shon wrote:
       | 67 years later: https://aiconference.com
        
       | simonw wrote:
       | My favourite detail about that 1956 meeting is this extract from
       | the conference proposal:
       | 
       | > An attempt will be made to find how to make machines use
       | language, form abstractions and concepts, solve kinds of problems
       | now reserved for humans, and improve themselves. We think that a
       | significant advance can be made in one or more of these problems
       | if a carefully selected group of scientists work on it together
       | for a summer.
       | 
       | I think this may be one of the most over-ambitious software
       | estimates of all time.
       | 
       | The whole proposal is on
       | https://en.wikipedia.org/wiki/Dartmouth_workshop
        
       | kaycebasques wrote:
       | I just learned about this conference a couple weeks ago while
       | watching the Computer History Museum video on AI:
       | https://youtu.be/NGZx5GAUPys?si=aVDZAmpR2ziKq4x9
       | 
       | (Video is from 2014)
        
       | TradingPlaces wrote:
       | Summer camp for mathematicians
        
       | aborsy wrote:
        | Other than Minsky, I don't think the others (who were
        | nevertheless accomplished scientists in their respective
        | fields) are considered to have made significant contributions
        | to modern machine learning or AI. McCarthy's work around this
        | topic culminated in LISP, leading to Emacs, a text editor!
       | 
       | From that period, Rosenblatt's work was instrumental to modern
       | AI.
        
         | abecedarius wrote:
         | Solomonoff's
         | https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...
         | is about as basic to the theory of intelligent agents as
         | anything gets.
         | 
         | (He's in the pic and I'd guess this article was by a relative.)
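          | 
          | (Roughly, the theory's universal prior weights every
          | possible explanation of the data by program length on a
          | universal machine U:
          | 
          |     M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
          | 
          | summing over minimal programs p whose output starts with
          | the observed string x; prediction then conditions on M.
          | The catch is that M is only lower-semicomputable, not
          | computable.)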
        
           | astrange wrote:
            | If I were an intelligent agent, I would prefer to be
            | based on a theory that was computable without time
            | travel, which this one isn't.
        
             | [deleted]
        
             | taneq wrote:
              | Ah, but time travel (or rather, prediction, but I'm
              | being whimsical here) is the essence of intelligence.
              | Working off your current state and inputs, your mind
              | peers forward in time to imagine the ghost of the
              | future, and echoes of this future ripple back to drive
              | your actions.
        
         | fipar wrote:
         | Emacs is so much more than a text editor! But I need to stay on
         | topic...
         | 
          | I believe your assessment of LISP's (and therefore of
          | McCarthy's) impact on AI is unfair. Just a few days ago
          | https://github.com/norvig/paip-lisp was discussed on this
          | site, for example.
        
         | daveguy wrote:
         | Claiming that the creator of LISP did not have a significant
         | impact on AI is not a defensible position.
        
           | JamilD wrote:
            | People forget how long Lisp had an impact on AI, even
            | outside GOFAI techniques; LeCun's early neural networks
            | were written in Lisp:
           | https://leon.bottou.org/publications/pdf/sn-1988.pdf
        
             | jahewson wrote:
             | I don't know - there's real impact and then there's
             | inconsequential path dependency. This feels like the
             | latter. The networks turned out to be valuable but LISP did
             | not.
        
           | aborsy wrote:
            | The story goes that John McCarthy was applying for an
            | assistant professorship at MIT. MIT told him: but we
            | already have Norbert Wiener, a renowned mathematician at
            | the time, who had published Cybernetics some years
            | earlier, in which he talks about agents interacting with
            | the environment and feedback control, a sort of precursor
            | to modern computation-based AI. McCarthy changed the name
            | from cybernetics to AI, and focused on symbolic systems
            | and logic. The approach was generally not successful.
            | 
            | Some people consider that the logic-based approach to AI
            | pioneered at this conference contributed to what we now
            | call an AI winter. People like John Pierce of Bell Labs,
            | a very influential figure in government, defunded
            | research in computation-based AI such as speech
            | recognition (he wrote articles saying, basically, that
            | researchers pursuing these techniques were charlatans).
            | 
            | There is no major algorithm or idea in undergrad machine
            | learning textbooks named after these people; there are
            | other people from that era who do have that distinction.
        
             | dr_dshiv wrote:
              | Makes sense. I heard that some of Wiener's anti-war
              | sentiment (specifically, his opposition to military
              | work during peacetime) may have contributed...
              | cybernetics really collapsed hard as a discipline, even
              | though I find it very helpful from a systems design
              | perspective. AI has always bothered me as a term
              | because, from a design perspective, the goal should be
              | creating intelligent systems, not necessarily entirely
              | artificial ones.
             | 
             | >There is no major algorithm or idea in undergrad machine
             | learning textbooks named after these people.
             | 
             | Maybe the pandemonium idea from Selfridge?
        
       ___________________________________________________________________
       (page generated 2023-09-11 22:00 UTC)