[HN Gopher] GPT-2 as step toward general intelligence (2019)
       ___________________________________________________________________
        
       GPT-2 as step toward general intelligence (2019)
        
       Author : savanaly
       Score  : 61 points
       Date   : 2023-03-26 18:57 UTC (4 hours ago)
        
 (HTM) web link (slatestarcodex.com)
 (TXT) w3m dump (slatestarcodex.com)
        
       | thomastjeffery wrote:
       | Personification is so easily applied, and so incredibly
       | misleading.
       | 
        | It's fascinating how much information we manage to encode into
        | text: far more than just the language we intentionally wrote.
       | 
        | Unfortunately, by personifying the model, we create an
        | expectation that it will eventually start applying specific text
        | patterns _on purpose_, instead of simply carrying on with its
        | core behavior: implicitly shaping its continuations along the
        | patterns that humans have written into text.
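        | 
        | A minimal sketch of that core behavior, using the Hugging Face
        | transformers library (an illustration, not anything from the
        | article, and the prompt is arbitrary): whatever intent we read
        | into the output, the model only ever scores and samples
        | continuations of the text it is given.
        | 
        |     from transformers import GPT2LMHeadModel, GPT2Tokenizer
        | 
        |     tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        |     model = GPT2LMHeadModel.from_pretrained("gpt2")
        | 
        |     prompt = "The scientists were shocked to discover"
        |     inputs = tokenizer(prompt, return_tensors="pt")
        |     # One job: score likely next tokens and extend the text.
        |     output = model.generate(**inputs, max_new_tokens=40,
        |                             do_sample=True, top_k=50,
        |                             pad_token_id=tokenizer.eos_token_id)
        |     text = tokenizer.decode(output[0], skip_special_tokens=True)
        |     print(text)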
        
       | graycat wrote:
       | AGI -- artificial general intelligence?
       | 
       | With the efforts currently getting the most attention in the tech
       | news, are we on the way to AGI?
       | 
        | Gee, I worked in _artificial intelligence_ the last time around.
        | Wrote code, published papers, gave talks. My view at the time and
        | since is the same -- that work held no promise of progress toward
        | AGI.
       | 
        | From what I've seen of the current efforts, whatever utility has
        | been achieved, the output appears to be based on _borrowing,
        | distilling, abstracting_ from what has already been done and
        | published.
       | 
       | Soooo, we could consider questions with no published answers or
       | at least answers rarely published and now not easy to find. Here
       | are three such:
       | 
        | (1) I'll return to a plane geometry puzzle I encountered as a
        | college freshman: by classic Euclidean construction, construct a
        | triangle ABC with a point D on AB and a point E on BC so that the
        | lengths AD = DE = EC.
       | 
       | (2) In the Kuhn-Tucker conditions of nonlinear optimization, are
       | the Kuhn-Tucker and Zangwill constraint qualifications
       | independent?
       | 
        | (3) Do the wave functions of quantum mechanics form a Hilbert
        | space? Physics texts commonly claim "Yes", but with the usual
        | pure math definition of a Hilbert space as a "complete inner
        | product space" the answer is "No". In what is published, the
        | physics texts mostly ignore the pure math definition and the
        | pure math texts ignore the quantum mechanics wave function
        | examples -- so a clean answer is not easy to find in the usual
        | published material (a sketch of the issue follows below).
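        | 
        | One way to see the gap in (3), assuming the standard one-
        | dimensional setting: a Hilbert space must be a complete inner
        | product space, but the "eigenfunctions" used freely in physics
        | texts are not even square integrable,
        | 
        |     \[
        |       \psi_k(x) = e^{ikx}, \qquad
        |       \|\psi_k\|^2 = \int_{-\infty}^{\infty} |e^{ikx}|^2 \, dx
        |                    = \int_{-\infty}^{\infty} 1 \, dx = \infty ,
        |     \]
        | 
        | so the collection of wave functions as actually used (plane
        | waves, delta-normalized position states, and so on) is not an
        | inner product space under the L^2 inner product, let alone a
        | complete one. The usual rigorous patch is a rigged Hilbert space
        | \Phi \subset L^2(\mathbb{R}) \subset \Phi'.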
       | 
        | More generally, consider a good pure mathematician about to
        | publish some good, new results: before publishing, ask a current
        | AI for those results.
       | 
       | Maybe for an easier source of questions, just pick some of the
       | more difficult exercises from some graduate texts in pure math.
       | Correct solutions have not commonly been published, and some of
       | the exercises require some understanding of the math in the text
       | and some ingenuity.
       | 
        | Here's another chance: once, when I was teaching computer science
        | at Georgetown University, as a final exam question I gave the
        | code for quicksort into which I had inserted an error -- the
        | question was to find and correct the error. So far that error and
        | its correction may never have been published.
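        | 
        | Not the actual exam code, of course, but a sketch of the kind of
        | exercise in Python, with a hypothetical planted error described
        | in a comment rather than left in the code:
        | 
        |     # A correct in-place quicksort (Lomuto partition). A typical
        |     # planted error for such an exam would be recursing on
        |     # (lo, i) instead of (lo, i - 1), which keeps the pivot in
        |     # the recursive call and can recurse forever.
        |     def quicksort(a, lo=0, hi=None):
        |         if hi is None:
        |             hi = len(a) - 1
        |         if lo >= hi:
        |             return
        |         pivot = a[hi]
        |         i = lo
        |         for j in range(lo, hi):
        |             if a[j] <= pivot:
        |                 a[i], a[j] = a[j], a[i]
        |                 i += 1
        |         a[i], a[hi] = a[hi], a[i]
        |         quicksort(a, lo, i - 1)  # a classic off-by-one spot
        |         quicksort(a, i + 1, hi)
        | 
        |     data = [5, 3, 8, 1, 9, 2]
        |     quicksort(data)
        |     print(data)  # [1, 2, 3, 5, 8, 9]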
        
         | lordnacho wrote:
         | Wait a minute, do people have to be able to solve those
         | problems to be considered intelligent in the AGI sense? My
         | guess is there's about 8B people who wouldn't pass that bar.
         | 
          | Also, aren't there loads of lower-level math questions that
          | are just as unique? A quadratic equation with large random
          | numbers would be easily solved by a high schooler yet not be
          | in the dataset verbatim. Or perhaps a proof of some geometry
          | result that is a corollary of a well-known theorem. E.g., I
          | came across one earlier: it's well known that a chord
          | subtending an angle at the circle subtends double that angle
          | at the centre. Now prove that if you see two angles where one
          | is double the other and they open towards the same line
          | segment, you can draw a circle where the smaller angle sits on
          | the circle, the double is at the centre, and the line segment
          | is a chord of the circle.
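          | 
          | For reference, the well-known part is the inscribed angle
          | theorem: for a circle with centre O and points A, B, P on it,
          | with P on the major arc of the chord AB,
          | 
          |     \[ \angle AOB = 2\,\angle APB . \]
          | 
          | The exercise above asks for a converse of this.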
         | 
          | Anyway, what exactly is the bar for intelligence? There are
          | lots of people who can't do one task or another, but we don't
          | think of them as unintelligent.
        
         | sacrosancty wrote:
         | [dead]
        
       | johnfn wrote:
       | I remember reading this back in 2019. It was the first article
       | that really made me pay attention to GPT. The bits about making
       | acronyms and poor counting as second-order behaviors really
       | jumped out at me at the time.
       | 
        | It's remarkable how well the article has aged - all the bits
        | about "wow, look, it can kinda sorta try to summarize an article
        | if you prompt it the right way" obviously became insanely more
        | relevant with GPT-3 and GPT-4. Same with the bits about
        | translation and how it seemed like it could sorta write a poem.
       | 
       | Still a good read, and scary that it was written only 4 years
       | ago.
        
         | dmonitor wrote:
          | Back in 2020 there was a GPT-based text adventure game called
          | AI Dungeon that got really popular. It'd be cool to check out
          | what that experience is like with the current iteration of
          | the technology.
        
           | abj wrote:
            | I'm working on a current iteration of an AI dungeon text-
            | based experience with AI illustrations and narration. If
            | you're interested, you can take a look:
            | https://twitch.tv/ai_voicequest
        
           | braymundo wrote:
           | It still exists at https://play.aidungeon.io
        
       | turtleyacht wrote:
       | And GPT-3 (2020): https://slatestarcodex.com/2020/06/10/the-
       | obligatory-gpt-3-p...
        
         | quadcore wrote:
         | What was 3.5 about?
        
           | turtleyacht wrote:
            | For the above, I tried a Google search with
            | site:slatestarcodex.com gpt-4
            | 
            | But only found the GPT-3 post as the author's follow-up.
        
             | jglamine wrote:
             | Slatestarcodex shut down before GPT-4 came out. His new
             | blog is astralcodexten.substack.com and has lots of posts
              | on GPT-4.
        
               | turtleyacht wrote:
                | Ohh, thank you for that! Here is an updated search:
               | site:astralcodexten.substack.com gpt-4
               | 
               | I see an article as recent as 6 days ago:
               | 
               | https://astralcodexten.substack.com/p/half-an-hour-
               | before-da...
        
       ___________________________________________________________________
       (page generated 2023-03-26 23:01 UTC)