[HN Gopher] The AI Illusion - State-of-the-Art Chatbots Aren't W...
       ___________________________________________________________________
        
       The AI Illusion - State-of-the-Art Chatbots Aren't What They Seem
        
       Author : agnosticmantis
       Score  : 13 points
       Date   : 2022-03-28 16:47 UTC (6 hours ago)
        
 (HTM) web link (mindmatters.ai)
 (TXT) w3m dump (mindmatters.ai)
        
       | blamestross wrote:
       | > "GPT-3 is not necessarily well-calibrated in its predictions on
       | novel inputs."
       | 
        | I don't think even humans are well calibrated on novel inputs.
        | We just put a LOT of effort via training into preventing
        | inputs from being novel.
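        | 
        | (Illustrative sketch, not from the article: "well-calibrated"
        | here means that among answers given with confidence ~p,
        | roughly a fraction p turn out correct. A toy check in Python,
        | with made-up numbers:)
        | 
        |     # Toy sketch of calibration: among answers given with
        |     # confidence ~p, about a fraction p should be correct.
        |     import numpy as np
        | 
        |     def calibration_gap(confs, correct, n_bins=10):
        |         confs = np.asarray(confs, dtype=float)
        |         correct = np.asarray(correct, dtype=float)
        |         edges = np.linspace(0.0, 1.0, n_bins + 1)
        |         gap = 0.0
        |         for lo, hi in zip(edges[:-1], edges[1:]):
        |             m = (confs > lo) & (confs <= hi)
        |             if m.any():
        |                 gap += m.mean() * abs(confs[m].mean()
        |                                       - correct[m].mean())
        |         return gap
        | 
        |     # 90% confident but right only 60% of the time -> big gap.
        |     print(calibration_gap([0.9] * 5, [1, 1, 1, 0, 0]))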
       | 
       | Honest to God "novel inputs" break people too.
       | 
        | Honestly, I think the AGI problem will end with us admitting
        | humans are a lot dumber than we like to believe, more than it
        | will end with AGI getting smarter.
        
       | kromem wrote:
       | "The Broca area isn't true intelligence as it can't comprehend
       | speech, only make it."
       | 
       | Seems like a weird and overly obvious statement, right?
       | 
        | It focuses on only one specialized area of the brain and
        | claims that, in isolation, what it brings to the table isn't
        | comparable to the general intelligence provided by the
        | intersection of that specialized area with multiple other
        | specialized areas and networks.
       | 
       | Wow, GPT-3 isn't actually a self-aware and self-determining
       | intelligence understanding what it is producing. There's only a
       | dozen articles and blog posts every week making that point.
       | 
        | But the whole Captain Obvious statement obscures the fact
        | that, from Phineas Gage onwards, neuroscience has increasingly
        | shown our own generalized intelligence to be the byproduct of
        | interconnected but localized specialization.
       | 
       | We have some very impressive AI specialization for a number of
       | things from vision processing to text generation.
       | 
       | And we are increasingly improving on ways to network that
       | specialization together in novel ways to great effect.
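        | 
        | (Purely as a sketch of what networking specializations
        | together can look like - the functions below are hypothetical,
        | not any particular product:)
        | 
        |     # Hypothetical composition of specialized models: a vision
        |     # captioner feeding a text generator; names are made up.
        |     def describe_scene(image, caption_model, language_model):
        |         caption = caption_model(image)     # specialized vision
        |         prompt = "What might happen next? " + caption
        |         return language_model(prompt)      # specialized text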
       | 
       | It's a sorry state of affairs when people get caught up in
       | declaring that the present position in a journey isn't its end
       | and somehow think that statement alone meaningfully represents
       | data about the journey.
       | 
        | Far more relevant than a single point in time is where we've
        | come from in that journey, how long each leg of it has taken,
        | and whether the pace is accelerating or decelerating, when
        | estimating how long until we reach that destination.
       | 
       | We shouldn't expect that AGI will result from a single monolith
       | model, and the presence of "magic tricks" is hardly an indication
       | the current specialized stepping stones are insufficient
       | components of that result.
       | 
       | Most of the neurology making up our perceived subjective
       | consciousness is filled to the brim with "magic tricks" that fall
       | apart when any number of things go wrong.
       | 
        | And we have yet to see just what happens when the gains from
        | photonic AI products like Lightmatter enter the scene. (An
        | interesting paper a year or two ago from the IIT theory of
        | consciousness folks argued that the level of self-interaction
        | of information that gives rise to consciousness in our brains
        | is impossible in classical computing architectures.)
       | 
       | I think articles like this aren't going to age particularly well
       | within the next decade.
        
         | ars wrote:
          | This article is about the labelers behind the scenes, not
          | about whether GPT is a real AI.
         | 
          | If the only way they can make GPT better is to override bad
          | answers with a human, then it's an even weaker tool than it
          | appears.
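          | 
          | (A purely hypothetical sketch of that kind of human-override
          | layer - the real pipeline isn't public, so everything below
          | is made up:)
          | 
          |     # Hypothetical: check a hand-curated table of
          |     # human-written answers before falling back to the model.
          |     HUMAN_OVERRIDES = {
          |         "can i walk downstairs backwards with my eyes closed?":
          |             "It would be safer not to try that.",
          |     }
          | 
          |     def answer(prompt, model_fn):
          |         key = prompt.strip().lower()
          |         if key in HUMAN_OVERRIDES:
          |             return HUMAN_OVERRIDES[key]  # labeler-written
          |         return model_fn(prompt)          # model-generated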
        
         | xg15 wrote:
          | I personally don't believe that there is some fundamental
          | difference between human consciousness and computation
          | either - so I believe that _in principle_, AGI should be
          | possible.
         | 
          | But I think he noted some concrete shortcomings of current
          | models that still place them nowhere close to AGI.
         | 
          | In particular, the ability to map words to their real-world
          | concepts and to imagine hypothetical situations with those
          | concepts.
         | 
          | Most of the trick questions the author was asking weren't
          | about formulating a natural-sounding answer to novel input
          | (though that is impressive enough, and the network seems to
          | excel at _that_ task). They were more a test of whether the
          | network "pulls its answers from books" or whether it can
          | really "imagine" the situation, i.e. perform a simulation,
          | taking in the real-world characteristics of the question and
          | coming to a conclusion. The results suggested that the
          | network indeed did not imagine the situations.
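          | 
          | (As an illustrative sketch of that kind of probe - the
          | prompt is invented, and the snippet assumes the 2022-era
          | openai Python client and an API key:)
          | 
          |     # Hypothetical probe: a question whose answer requires
          |     # "simulating" the situation, not quoting text.
          |     import openai
          | 
          |     openai.api_key = "sk-..."  # placeholder
          | 
          |     prompt = ("I poured my coffee into a paper cup with no "
          |               "bottom. How much coffee is in the cup now?")
          | 
          |     resp = openai.Completion.create(
          |         engine="text-davinci-002",  # model name is a guess
          |         prompt=prompt,
          |         max_tokens=60,
          |         temperature=0,
          |     )
          |     print(resp.choices[0].text.strip())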
         | 
          | I think other concrete aspects that are still missing would
          | be social awareness and empathy - the ability to guess what
          | another person is thinking or feeling. Adding to that would
          | be a sense of self and all that follows.
         | 
          | Finally, there is the practical problem of acquiring world
          | knowledge. Humans get a nonstop stream of training data on
          | both world and social knowledge from the moment they are
          | born :) Even more, it's extremely high-quality training
          | data, because we're able to explore _causation_: "I did A
          | and then B happened". Whereas most current networks today
          | only seem to work with pre-existing training data, which
          | amounts to correlation: "When A happens, B often happens as
          | well".
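          | 
          | (A toy sketch of that gap, with made-up numbers: a hidden
          | confounder makes A and B look correlated when merely
          | observed, while actually doing A shows it has no effect on
          | B:)
          | 
          |     # Toy model of observation vs intervention (all made up).
          |     import random
          | 
          |     def world(do_a=None):
          |         confounder = random.random() < 0.5
          |         a = confounder if do_a is None else do_a
          |         b = confounder  # B depends only on the confounder
          |         return a, b
          | 
          |     # Observing: when A happens, B happens too (~100%).
          |     obs = [world() for _ in range(10000)]
          |     with_a = [b for a, b in obs if a]
          |     print(sum(with_a) / max(1, len(with_a)))
          | 
          |     # Intervening ("I did A"): B follows only ~50% of the time.
          |     ints = [world(do_a=True) for _ in range(10000)]
          |     print(sum(b for _, b in ints) / len(ints))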
         | 
          | Because of that stuff, I'm still extremely sceptical of any
          | claims that AGI has already been achieved (or will be
          | achieved soon) - not because I think it's impossible, but
          | because it seems to me we have just scratched the surface of
          | what intelligence and consciousness even entail.
         | 
         | Another thought btw: I'm a bit surprised that there seems to be
         | so little talk about animals in the whole AI discussion. There
         | is lots of talk comparing computers with human intelligence -
         | which seems to amount to how well an AI could mimic texts
          | written by North American adults during their working hours -
         | but I haven't read anything yet about how well we could build a
         | robot that could survive in the wilderness or e.g. take part in
         | the social dynamics of a wolf pack.
         | 
         | If we're aiming to build AI that is as intelligent as a human,
         | shouldn't we first be able to build one that is as intelligent
         | as an animal?
        
       | jasfi wrote:
        | It's an incredibly difficult problem right now; that's why.
        | Once it's cracked, it won't seem that difficult anymore. My
        | own effort has a landing page: https://lxagi.com.
        
       ___________________________________________________________________
       (page generated 2022-03-28 23:02 UTC)