[HN Gopher] AGI is far from inevitable
       ___________________________________________________________________
        
       AGI is far from inevitable
        
       Author : mpweiher
       Score  : 107 points
       Date   : 2024-09-29 19:02 UTC (1 day ago)
        
 (HTM) web link (www.ru.nl)
 (TXT) w3m dump (www.ru.nl)
        
       | loa_in_ wrote:
       | AGI is about as far away as it was two decades ago. Language
       | models are merely a dent, and probably will be the precursor to a
       | natural language interface to the thing.
        
         | lumost wrote:
          | It's useful to consider the rise of computer graphics and CGI.
         | When you first see CGI, you might think that the software is
         | useful for _general_ simulations of physical systems. The
         | reality is that it only provides a thin facsimile.
         | 
         | Real simulation software has always been separate from computer
         | graphics.
        
         | Closi wrote:
         | We are clearly closer than 20 years ago - o1 is an order of
         | magnitude closer than anything in the mid-2000s.
         | 
          | Also, I would think most people would have considered AGI
          | science fiction in 2004 - now we consider it a technical
          | possibility, which demonstrates a huge change.
        
           | throw310822 wrote:
           | "Her" is from 2013. I came out of the cinema thinking "what
           | utter bullshit, computers that talk like human beings, a la
           | 2001" (*). And yes, in 2013 we weren't any closer to it than
            | we were in 1968, when 2001: A Space Odyssey came out.
           | 
           | * To be precise, what seemed bs was "computers that talk like
           | humans and it's suddenly a product on the market, and you
            | have it on your phone, and yet everyone around acts like it's
            | normal and people still have jobs!" Ah, I've been proven
           | completely wrong.
        
         | LinuxAmbulance wrote:
         | AGI would seem to require consciousness or something that
         | behaves in the same manner, and there does not seem to be
         | anything along those lines currently or in the near future.
         | 
          | So far, everyone who has theorized that AGI will happen soon
          | seems to believe that with a sufficiently large amount of
          | computing resources, "magic happens" and _poof_, we get AGI.
         | 
         | I've yet to hear anything more logical, but I'd love to.
        
       | sharadov wrote:
       | The current LLMs are just good at parroting, and even that is
       | sometimes unbelievably bad.
       | 
       | We still have barely scratched the surface of how the brain truly
       | works.
       | 
       | I will start worrying about AGI when that is completely figured
       | out.
        
         | diob wrote:
         | No need to worry about AGI until the LLMs are writing their own
         | source.
        
       | pzo wrote:
        | So what? Current LLMs are already really useful and can still be
        | improved to be used in millions of robots that need to be good
        | enough to support many specialized but repetitive tasks - this
        | alone would have a tremendous impact on the economy.
        
       | Gehinnn wrote:
       | Basically the linked article argues like this:
       | 
       | > That's because cognition, or the ability to observe, learn and
       | gain new insight, is incredibly hard to replicate through AI on
       | the scale that it occurs in the human brain.
       | 
       | (no other more substantial arguments were given)
       | 
        | I'm also very skeptical about seeing AGI soon, but LLMs do solve
       | problems that people thought were extremely difficult to solve
       | ten years ago.
        
         | babyshake wrote:
          | It's possible we see AI become increasingly AGI-like in some
          | ways but not in others. For example, AI that can make novel
          | scientific discoveries but can't make a song as good as your
          | favorite musician, who creates a strong emotional effect with
          | their music.
        
           | KoolKat23 wrote:
           | This I'm very sure will be the case, but everyone will still
           | move the goalposts and look past the fact that different
           | humans have different strengths and weaknesses too. A tone
           | deaf human for instance.
        
             | jltsiren wrote:
             | There is another term for moving the goalposts: ruling out
             | a hypothesis. Science is, especially in the Popperian
             | sense, all about moving the goalposts.
             | 
             | One plausible hypothesis is that fixed neural networks
             | cannot be general intelligences, because their capabilities
             | are permanently limited by what they currently are. A
             | general intelligence needs the ability to learn from
             | experience. Training and inference should not be separate
             | activities, but our current hardware is not suited for
             | that.
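              | 
              | As a toy illustration of that distinction (just a sketch,
              | nothing from the paper): an online learner updates its
              | weights after every interaction, so "training" and
              | "inference" are one loop rather than separate phases.
              | 
              |     import random
              | 
              |     # toy online learner: prediction and weight updates are
              |     # interleaved rather than done in separate phases
              |     w, b = [0.0, 0.0], 0.0
              | 
              |     def predict(x):
              |         return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
              | 
              |     def observe(x, y, lr=0.1):
              |         # perceptron-style update applied while "deployed"
              |         global b
              |         err = y - predict(x)
              |         w[0] += lr * err * x[0]
              |         w[1] += lr * err * x[1]
              |         b += lr * err
              | 
              |     for _ in range(1000):
              |         x = (random.random(), random.random())
              |         y = 1 if x[0] + x[1] > 1 else 0  # experience stream
              |         observe(x, y)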
        
               | KoolKat23 wrote:
                | If that's the case, would you say we're not generally
                | intelligent, since future humans will tend to be more
                | intelligent?
                | 
                | That's just a timescale issue: if the learned experience
                | of GPT-4 is fed into the model when training GPT-5, then
                | GPT-x (i.e. the whole series taken together) can be said
                | to be a general intelligence. Alien life, one might say.
        
               | threeseed wrote:
               | > That's just a timescale issue
               | 
               | Every problem is a timescale issue. Evolution has shown
               | that.
               | 
               | And no you can't just feed GPT4 into GPT5 and expect it
               | to become more intelligent. It may be more accurate since
               | humans are telling it when conversations are wrong or
               | not. But you will still need advancements in the
               | algorithms themselves to take things forward.
               | 
               | All of which takes us back to lots and lots of research.
                | And if there's one thing we know, it's that research
                | breakthroughs aren't guaranteed.
        
               | KoolKat23 wrote:
                | I think you missed my point slightly; sorry, that's
                | probably my explaining.
                | 
                | I mean timescale as in between two points in time.
                | Between the two points it meets the intelligence criteria
                | you mentioned. Feeding human-vetted GPT-4 data into GPT-5
                | is no different from a human receiving inputs from its
                | interaction with the world and learning. More accurate
                | means smarter; gradually its intrinsic world model
                | improves, as does reasoning etc.
                | 
                | I agree those are the things that will advance it, but
                | taking a step back, it potentially meets that criteria
                | even if it's less useful day to day (given it's an
                | abstract viewpoint over time and not at the human level).
        
           | godelski wrote:
            | More importantly, there are many ways that AI can appear to
            | be becoming more intelligent without making any progress
            | in that direction. That's of real concern. As a silly
            | example, we could be trying to "make a duck" by building an
            | animatronic. You could get this thing to look very lifelike
            | and trick ducks and humans alike (we have this
            | already btw). But that's very different from being a duck.
           | Even if it were indistinguishable until you opened it up,
           | progress on this animatronic would not necessarily be
           | progress towards making a duck (though it need not be
           | either).
           | 
            | This is a concern because several top researchers -- at
            | OpenAI -- have explicitly stated that they think you can get
            | AGI by teaching the machine to act as human as possible. But
            | that's a great way to fool ourselves. Just as a duck may fall
            | in love with an animatronic and never realize the deceit.
           | 
           | It's possible they're right, but it's important that we
           | realize how this metric can be hacked.
        
         | godelski wrote:
         | > but LLMs do solve problems that people thought were extremely
         | difficult to solve ten years ago.
         | 
          | Well, for something to be G or I, you need it to solve novel
          | problems. These things have ingested most of the Internet, and
          | I've yet to see a "reasoning" benchmark disentangle
          | memorization from reasoning. Memorization doesn't mean they
          | aren't useful (not sure why this was ever conflated, since...
          | computers are useful...), but it's very different from G or I.
          | And remember that these tools are trained for human-preferred
          | output. If humans prefer things to look like reasoning, then
          | that's what they optimize. [0]
          | 
          | Sure, maybe your cousin Throckmorton is dumb, but that's beside
          | the point.
         | 
         | That said, I see no reason human level cognition is impossible.
         | We're not magic. We're machines that follow the laws of
         | physics. ML systems may be far from capturing what goes on in
         | these computers, but that doesn't mean magic exists.
         | 
          | [0] If it walks like a duck, quacks like a duck, swims like a
          | duck, and looks like a duck, it's _probably_ a duck. But
          | probably doesn't mean it isn't a well-made animatronic. We
          | have those too, and they'll convince many humans they are
          | ducks. But that doesn't change what's inside. The subtlety
          | matters.
        
           | stroupwaffle wrote:
           | I think it will be an organoid brain bio-machine. We can
           | already grow organs--just need to grow a brain and connect it
           | to a machine.
        
             | Moosdijk wrote:
             | The keyword being "just".
        
               | ggm wrote:
               | Just grow, just connect, just sustain, just avoid the
                | many pitfalls. Indeed, "just" is key.
        
               | godelski wrote:
                |     just (adverb)
                |         to turn a complex thing into magic with a simple
                |         wave of the hands
                |         E.g. To turn lead into gold you _just_ need to
                |         remove 3 protons
        
               | stroupwaffle wrote:
               | You "just" need a more vivid imagination if that's as far
               | as your comment stretches.
               | 
               | I mean seriously, people on here. I'm just spitballing
                | ideas, not intending to write some kind of dissertation
                | on brain-machine interaction.
                | 
                | That's Elon Musk's department.
        
               | Moosdijk wrote:
               | Okay. The keyword should have been "science fiction"
        
             | idle_zealot wrote:
             | Somehow I doubt that organic cells (structures optimized
             | for independent operation and reproduction, then adapted to
             | work semi-cooperatively) resemble optimal compute fabric
             | for cognition. By that same token I doubt that optimal
             | compute fabric for cognition resembles GPUs or CPUs as we
             | understand them today. I would expect whatever this
             | efficient design is to be extremely unlikely to occur
             | naturally, structurally, and involve some very exotic
             | manufactured materials.
        
             | godelski wrote:
             | Maybe that'll be the first way, but there's nothing special
             | about biology.
             | 
             | Remember, we don't have a rigorous definition of things
             | like life, intelligence, and consciousness. We are
             | narrowing it down and making progress, but we aren't there
             | (some people confuse this with a "moving goalpost" but of
             | course "it moves", because when we get closer we have
             | better resolution as to what we're trying to figure out.
             | It'd be a "moving goalpost" in the classic sense if we had
             | a well defined definition and then updated in response to
             | make something not work, specifically in a way that is
             | inconsistent with the previous goalpost. As opposed to
             | being more refined)
        
               | stroupwaffle wrote:
               | The something special about biology is it uses much less
               | energy than a network of power-hungry graphics cards!
        
               | godelski wrote:
               | No one denies that. But there's no magic. The real
               | baffling thing is that people refuse to pick up a
               | neuroscience textbook
        
             | Dylan16807 wrote:
             | If a brain connected to a machine is "AGI" then we already
             | have a billion AGIs at any given moment.
        
               | stroupwaffle wrote:
               | Well, I mean to say, not exactly human brains. Consider
               | an extremely large brain modified to add/remove sections
               | to increase its capabilities.
               | 
               | They already model neural networks on the human brain,
               | even though they currently use orders of magnitude more
               | energy.
        
               | Dylan16807 wrote:
               | But modifying a brain to be bigger and better doesn't
               | require much in the way of computers, it's basically a
               | separate topic.
        
           | danaris wrote:
           | I have seen far, far too many people say things along the
           | lines of "Sure, LLMs currently don't seem to be good at
           | [thing LLMs are, at least as of now, _fundamentally incapable
            | of_], but hey, some people are pretty bad at that sometimes
           | too!"
           | 
           | It demonstrates such a complete misunderstanding of the basic
           | nature of the problem that I am left baffled that some of
           | these people claim to actually be in the machine-learning
           | field themselves.
           | 
           | How can you not understand the difference between "humans are
            | _not absolutely perfect or reliable_ at this task" and "LLMs
            | _by their very nature_ cannot perform this task"?
           | 
           | I do not know if AGI is possible. Honestly, I'd love to
           | believe that it is. However, it has not remotely been
           | demonstrated that it is possible, and as such, it follows
           | that it cannot have been demonstrated that it is inevitable.
           | If you want to believe that it is inevitable, then I have no
           | quarrel with you; if you want to _preach_ that it is
            | inevitable, and draw specious inferences to "prove" it, then
           | I have a big quarrel with you.
        
             | vundercind wrote:
             | I think the fact that this particular fuzzy statistical
             | analysis tool takes human language as input, and outputs
             | more human language, is really dazzling some folks I'd not
             | have expected to be dazzled by it.
             | 
             | That is quickly becoming the most surprising part of this
             | entire development, to me.
        
               | jofla_net wrote:
               | At the very least, the last few years have laid bare some
               | of the notions of what it takes, technically, to
               | reconstruct certain chains of dialog, and how those
               | chains are regarded completely differently as evidence
               | for or against any and all intelligence it does or may
               | take to conjure them.
        
               | godelski wrote:
               | I'm astounded by them, still! But what is more astounding
               | to me is all the reactions (even many in the "don't
               | reason" camp, which I am part of).
               | 
               | I'm an ML researcher and everyone was shocked when GPT3
               | came out. It is still impressive, and anyone saying it
               | isn't is not being honest (likely to themselves). But it
               | is amazing to me that "we compressed the entire internet
               | and built a human language interface to access that
               | information" is anything short of mindbogglingly
                | impressive (and RAG demonstrates how to decrease the
                | lossiness of this compression). It would have been
                | complete sci-fi not even 10 years ago. I thought it was
                | bad that we make them out to be much more than they are,
                | because when you bootstrap like that, you have to make
                | that thing, and fast (e.g. iPhone). But "reasoning" is too
                | big of a promise and we're too far from success. So I'm
                | concerned
               | as a researcher myself, because I like living in the
               | summer. Because I want to work towards AGI. But if a
               | promise is too big and the public realizes it, usually
               | you don't just end up where you were. So it is the duty
               | of any scientist and researcher to prevent their fields
               | from being captured by people who overpromise. Not to
               | "ruin the fun" but to instead make sure the party keeps
               | going (sure, inviting a gorilla to the party may make it
               | more exciting and "epic", but there's a good chance it
               | also goes on a rampage and the party ends a lot sooner).
        
             | fidotron wrote:
             | > How can you not understand the difference between "humans
             | are not absolutely perfect or reliable at this task" and
             | "LLMs by their very nature cannot perform this task"?
             | 
             | This is a very good distillation of one side of it.
             | 
              | What LLMs have taught us is that a superficial grasp of
              | language is good enough to reproduce a shocking
             | society has come to view as intelligent behaviors. i.e. it
             | seems quite plausible a whole load of those people failing
             | to grasp the point you are making are doing so because
             | their internal models of the universe are closer to those
             | of LLMs than you might want to think.
        
               | godelski wrote:
               | I think we already knew this though. Because the Turing
               | test was passed by Eliza in the 1960's. PARRY was even
               | better and not even a decade later. For some reason
               | people still talk about Chess performance as if Deep Blue
               | didn't demonstrate this. Hell, here's even Feynman
               | talking about many of the same things we're discussing
               | today, but this was in the 80's
               | 
               | https://www.youtube.com/watch?v=EKWGGDXe5MA
        
               | fidotron wrote:
               | Ten years ago I was explaining to halls of appalled
               | academic administrators that AI would be replacing them
               | before a robot succeeds in sorting out their socks.
        
               | eli_gottlieb wrote:
               | The field of AI needs to be constantly retaught the
               | lesson that being able to replace important and powerful
               | people doesn't mean your AI is actually intelligent. It
               | means that important and powerful people were doing
               | bullshit jobs.
        
               | matthewdgreen wrote:
               | ELIZA passed the Turing test in the same way a spooky
               | halloween decoration can convince people that ghosts are
               | real.
        
               | og_kalu wrote:
               | Eliza did not pass the Turing Test in any meaningful way.
               | In fact, it did not pass it at all, and saying it did and
               | comparing both is pretty disingenuous.
        
               | danaris wrote:
                | ....But this is falling into exactly the same trap: the
                | idea that "_some_ people don't _engage_ the faculties
                | their brains do/could (with education) possess" is
                | equivalent to the LLMs that do not and cannot possess
                | those faculties in the first place.
        
               | AnimalMuppet wrote:
                | > What LLMs have taught us is that a superficial grasp
                | of language is good enough to reproduce a shocking
               | proportion of what society has come to view as
               | intelligent behaviors
               | 
               | I think that LLMs have shown that some fraction of human
               | knowledge is encoded in the patterns of the words, and
               | that by a "superficial grasp" of those words, you import
               | a fairly impressive amount of knowledge without actually
                | _knowing_ anything. (And yes, I'm sure there are humans
                | that do the same.)
                | 
                | But going from that to actually _knowing_ what the words
                | mean is a large jump, and I don't think LLMs are at all
               | the right direction to jump in to get there. They need at
               | least to be paired with something fundamentally
               | different.
        
               | godelski wrote:
               | I think the linguists already knew this tbh and that's
               | what Chomsky's commentary on LLMs was about. Though I
               | wouldn't say we learned "nothing". Even confirmation is
               | valuable in science
        
               | Yizahi wrote:
               | Scary thought
        
             | godelski wrote:
             | > I have seen far, far too many people say
             | 
             | It is perplexing. I've jokingly called it "proof of
             | intelligence by (self) incompetence".
             | 
             | I suspect that much of this is related to an overfitting of
             | metrics within our own society. Such as leetcode or
             | standardized exams. They're useful tools but only if you
             | know what they actually measure and don't confuse the fact
             | that they're a proxy.
             | 
             | I also have a hard time convincing people about the duck
             | argument in [0].
             | 
             | Oddly enough, I have far more difficulties having these
             | discussions with computer scientists. It's what I'm doing
             | my PhD in (ABD) but my undergrad was physics. After
             | teaching a bit I think in part it is because in the hard
             | sciences these differences get drilled into you when you do
             | labs. Not always, but much more often. I see less of this
             | type of conversation in CS and data science programs, where
             | there is often a belief that there is a well defined and
             | precise answer (always seemed odd to me since there's many
             | ways you can write the same algorithm).
        
             | SpicyLemonZest wrote:
             | > How can you not understand the difference between "humans
             | are not absolutely perfect or reliable at this task" and
             | "LLMs by their very nature cannot perform this task"?
             | 
             | I understand the difference, and sometimes that second
             | statement really is true. But a rigorous proof that problem
             | X can't be reduced to architecture Y is generally very hard
             | to construct, and most people making these claims don't
             | have one. I've talked to more than a few people who insist
             | that an LLM can't have a world model, or a concept of
             | truth, or any other abstract reasoning capability that
             | isn't a native component of its architecture.
        
               | danaris wrote:
               | And I'm much less frustrated by people who are, in fact,
               | claiming that LLMs _can_ do these things, whether or not
               | I agree with them. Frankly, while I have a basic
                | understanding of the underlying technology, I'm not in
                | the ML field myself, and can't claim to be enough of an
                | expert to say with any real authority what an LLM could
                | _ever_ be able to do, just what the particular LLMs I've
               | used or seen the detailed explanations of can do.
               | 
               | No; this is specifically about people who _stipulate_
                | that the LLMs can't do these things, but still want to
                | claim that they are or will become AGI, so they just
                | basically say "well, _humans_ can't really do it, can
               | they? so LLMs don't need to do it either!"
        
               | godelski wrote:
               | I am an ML researcher, I don't think LLMs can reason, but
               | similar to you I'm annoyed by people who say ML systems
               | "will never" reason. This is a strong claim that needs be
               | substantiated too! Just as the strong claim of LLMs
               | reasoning needs strong evidence (which I've yet to see).
               | It's subtle, but that matters and subtle things is why
               | expertise is often required for many things. We don't
               | have a proof of universal approximation in a meaningful
               | sense with transformers (yes, I'm aware of that paper).
               | 
               | Fwiw, I'm never frustrated by people having opinions.
               | We're human, we all do. But I'm deeply frustrated with
               | how common it is to watch people with no expertise argue
               | with those that do. It's one thing for LeCun to argue
               | with Hinton, but it's another when Musk or some random
               | anime profile picture person does. And it's weird that
               | people take strong sides on discussions happening in the
               | open. Opinions, totally fine. So are discussions. But
                | it's when people assert correctness that it starts to
                | look religious. And there are many who over-inflate the
                | knowledge that they have.
               | 
               | So what I'm saying is please keep this attitude.
               | Skepticism and pushback are not problematic, they are
               | tools that can be valuable to learn. The things you're
               | skeptical about are good to be skeptical about. As much
                | as I hate the AGI hype, I'm also upset by the
                | overcorrection many of my peers take. Neither is
                | scientific.
        
               | godelski wrote:
               | > But a rigorous proof that problem X can't be reduced to
               | architecture Y is generally very hard to construct, and
               | most people making these claims don't have one.
               | 
                | The requirement for proof is backwards. It's the ones who
                | claim the thing reasons who need proof. They've provided
                | evidence (albeit shaky), but evidence isn't
                | proof. So your reasoning is a bit off base (albeit
               | understandable and logical) since evidence contrary to
               | the claim isn't proof either. But the burden of proof
               | isn't on the one countering the claim, it's on the one
               | making the claim.
               | 
               | I need to make this extra clear because framing can make
               | the direction of burden confusing. So using an obvious
               | example: if I claim there's ghosts in my house (something
               | millions of people believe and similarly claim) we
               | generally do not dismiss someone who is skeptical of
               | these claims and offers an alternative explanation (even
               | when it isn't perfectly precise). Because the burden of
               | proof is on the person making the stronger claim. Sure,
               | there are people that will dismiss that too, but they
               | want to believe in ghosts. So the question is if we want
               | to believe in ghosts in the (shell) machine. It's very
               | easy to be fooled, so we must keep our guard up. And we
               | also shouldn't feel embarrassed when we've been tricked.
               | It happens to everyone. Anyone that claims they've never
               | been fooled is only telling you that they are skillful at
               | fooling themselves. I for one did buy into AGI being
               | close when GPT 3 came out. Most researchers I knew did
               | too! But as we learned more about what was actually going
               | on under the hood I think many of us changed our minds
               | (just as we changed our minds after seeing GPT). Being
               | able to change your mind is a good thing.
        
             | fragmede wrote:
             | > "LLMs by their very nature cannot perform this task"
             | 
              | The issue is that it's not that LLMs can't perform a given
              | task, but that computers already can. Counting the number
              | of Rs in "strawberry" or comparing 9.11 to 9.7 is trivial
              | for a regular computer program, but hard for an LLM due to
              | the tokenization process. Since LLMs are a pile of matrices
              | and some math and some lookup tables, it's easy to see that
              | as the essential nature of LLMs, which is to say there's no
              | thinking or reasoning happening because it's just a pile of
              | math happening and it's just glorified auto-complete.
              | Artificial things look a lot like the thing they resemble,
              | but they are also artificial, and as such, are markedly
              | different from the thing they resemble. Does the very
              | nature of an LLM being a pile of math mean that it cannot
              | perform said task if given more math and more compute and
              | more data? Given enough compute, can we change that nature?
             | 
             | I make no prognostication as to whether or not AGI will
             | come from transformers, and this is getting very
             | philosophical, but I see it as irrelevant because I don't
             | believe that AGI is the right measure.
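              | 
              | For concreteness, a minimal sketch of the strawberry/9.11
              | point in plain Python: the operations are trivial for
              | ordinary code, which works on characters and exact numbers
              | rather than on tokens.
              | 
              |     # counting characters and comparing decimals is exact
              |     # and trivial for a conventional program
              |     print("strawberry".count("r"))  # -> 3
              |     print(9.11 < 9.7)               # -> True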
        
             | og_kalu wrote:
             | >How can you not understand the difference between "humans
             | are not absolutely perfect or reliable at this task" and
             | "LLMs by their very nature cannot perform this task"?
             | 
             | Because anyone who has said nonsense like "LLMs by their
             | very nature cannot do x" and waited a few years has been
             | wrong. That's why GPT-3 and 4 shocked the _research_ world
             | in the first place.
             | 
             | People just have their pre-conceptions about how they think
             | LLMs should work and what their "very nature" should
             | preclude and are so very confident about it.
             | 
             | People like that will say things like "LLM are always
             | hallucinating. It doesn't know the difference between truth
             | and fiction!" and feel like they've just said something
             | profound about the "nature" of LLMs, all while being
             | entirely wrong (no need to wait, plenty of different
             | research to trash this particular take).
             | 
              | It's just very funny seeing people who were/would be
              | gobsmacked a few years ago talking about the "very nature"
              | of LLMs. If you understood this nature so well, why didn't
              | you all tell us about what it _would_ be able to do years
              | ago?
             | 
             | ML is an alchemical science. The builders themselves don't
             | understand the "very nature" of anything they're building,
             | nevermind anyone else.
        
               | riku_iki wrote:
               | > Because anyone who has said nonsense like "LLMs by
               | their very nature cannot do x" and waited a few years has
               | been wrong. That's why GPT-3 and 4 shocked the research
               | world in the first place.
               | 
                | there are some benchmarks which show a fundamental
                | inability of LLMs to perform certain tasks which humans
                | can, for example adding 100-digit numbers.
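                | 
                | For reference, the task itself is trivial for ordinary
                | code, since Python integers have arbitrary precision;
                | the benchmark point is that an LLM predicting tokens
                | doesn't get this exactness for free.
                | 
                |     import random
                | 
                |     # two random 100-digit numbers
                |     a = random.randrange(10**99, 10**100)
                |     b = random.randrange(10**99, 10**100)
                | 
                |     # exact arbitrary-precision addition, no rounding
                |     print(a + b)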
        
           | User23 wrote:
           | We don't really have the proper vocabulary to talk about
           | this. Well, we do, but C.S. Peirce's writings are still
           | fairly unknown. In short, there are two fundamentally
           | distinct forms of reasoning.
           | 
           | One is corollarial reasoning. This is the kind of reasoning
            | that draws deductions that follow directly from the
           | premises. This of course includes subsequent deductions that
           | can be made from those deductions. Obviously computers are
           | very good at this sort of thing.
           | 
           | The other is theorematic reasoning. It deals with complexity
           | and creativity. It involves introducing new hypotheses that
           | are not present in the original premises or their
           | corollaries. Computers are not so very good at this sort of
           | thing.
           | 
           | When people say AGI, what they are really talking about is an
           | AI that is capable of theorematic reasoning. The most
           | romanticized example of that of course being the AI that is
           | capable of designing (not aiding humans in designing, that's
           | corollarial!) new more capable AIs.
           | 
           | All of the above is old hat to the AI winter era guys. But
           | amusingly their reputations have been destroyed much the same
           | as Peirce's was, by dissatisfied government bureaucrats.
           | 
           | On the other hand, we did get SQL, which is a direct lineal
           | descendent (as in teacher to teacher) from Peirce's work, so
           | there's that.
        
             | godelski wrote:
             | We don't have proper language, but certainly we've
             | improved. Even since Peirce. You're right that many people
             | are not well versed in the philosophical and logician
              | discussions as to what reasoning is (and sadly such
              | literature review isn't always common in the ML community),
              | but I'm not convinced Peirce solved it. I do like that
              | there are many different categories of reasoning and
              | subcategories.
              | 
              | > All of the above is old hat to the AI winter era guys.
              | But amusingly their reputations have been destroyed much
              | the same as Peirce's was, by dissatisfied government
              | bureaucrats.
             | 
             | Yeah, this has been odd. Since a lot of their work has
             | shown to be fruitful once scaled. I do think you need a
             | combination of theory people + those more engineering
             | oriented, but having too much of one is not a good thing.
             | It seems like now we're overcorrecting and the community is
             | trying to kick out the theorists. By saying things like
             | "It's just linear algebra"[0] or "you don't need math"[1]
             | or "they're black boxes". These are unfortunate because
             | they encourage one to not look inside and try to remove the
             | opaqueness. Or to dismiss those that do work on this and
             | are bettering our understanding (sometimes even post hoc
             | saying it was obvious).
             | 
             | It is quite the confusing time. But I'd like to stop all
             | the bullshit and try to actually make AGI. That does
             | require a competition of ideas and not everyone just
             | boarding the hype train or have no careers....
             | 
             | [0] You can assume anyone that says this doesn't know
             | linear algebra
             | 
             | [1] You don't need math to produce good models, but it sure
             | does help you know why your models are wrong (and
             | understanding the meta should make one understand my
             | reference. If you don't, I'm not sure you're qualified for
             | ML research. But that's not a definitive statement either).
        
               | User23 wrote:
               | > We don't have proper language, but certainly we've
               | improved. Even since Peirce. You're right that many
               | people are not well versed in the philosophical and
                | logician discussions as to what reasoning is (and sadly
                | such literature review isn't always common in the ML
                | community), but I'm not convinced Peirce solved it. I
               | do like that there are many different categories of
               | reasoning and subcategories.
               | 
               | I'd love to hear more about this please, if you're
               | inclined to share.
        
               | randcraw wrote:
               | I'm no expert, but I've been looking into the prospects
               | and mechanisms of automated reasoning using LLMs recently
               | and there's been a lot of work along those lines in the
               | research literature that is pretty interesting, if not
               | enlightening. It seems clear to me that LLMs are not yet
               | capable of understanding simple implication much less
               | full-blown causality. It's also not clear how limited
               | LLMs' cognitive gains will be with so incomplete an
               | understanding as they have of mechanisms behind the
               | world's multitude of intents/goals, actions, and
                | responses. The concept of cause and effect is learned
               | by every animal (to some degree) and long before language
               | in humans. It forms the basis for all rational thought.
               | Without understanding it natively, what is rationality? I
               | foresee longstanding difficulties for LLMs evolving into
               | truly rational beings until that comprehension is fully
               | realized. And I see no sign of that happening, despite
               | the promises made for o1 and other RL-based reasoners.
        
           | eli_gottlieb wrote:
           | If it walks like a duck, quacks like a duck, swims like a
           | duck, and looks like a duck it's _probably_ worth dissecting
           | its internal organs to see if it _might_ be related to a
           | duck.
        
         | tptacek wrote:
         | Are you talking about the press release that the story on HN
         | currently links to, or the paper that press release is about?
         | The paper (I'm not vouching for it; I just skimmed it) appears
         | to reduce AGI to a theoretical computational model, and then
         | supplies a proof that it's not solvable in polynomial time.
        
           | Gehinnn wrote:
           | I was referring to the press release article. I also looked
           | at the paper now, and to me their presented proof looked more
           | like a technicality than a new insight.
           | 
           | If it's not solvable in polynomial time, how did nature solve
           | it in a couple of million years?
        
             | tptacek wrote:
             | Probably by not modeling it as a discrete computational
             | problem? Either way: the logic of the paper is not the
             | logic of the summary of the press release you provided.
        
           | Veedrac wrote:
           | That paper is unserious. It is filled with unjustified
           | assertions, adjectives and emotional appeals, M$-isms like
           | 'BigTech', and basic misunderstandings of mathematical theory
           | clearly being sold to a lay audience.
        
             | tptacek wrote:
             | It didn't look especially rigorous to me (but I'm not in
             | this field). I'm really just here because we're doing that
              | thing where we (as a community) have a big ol' discussion
             | about a press release, when the paper the press release is
             | about is linked right there.
        
           | Dylan16807 wrote:
           | Their definition of a tractable AI trainer is way too
           | powerful. It has to be able to make a machine that can
           | predict _any_ pattern that fits into a certain Kolmogorov
           | complexity, and then they prove that such an AI trainer
           | cannot run in polynomial time.
           | 
           | They go above and beyond to express how generous they are
           | being when setting the bounds, and sure that's true in many
           | ways, but the requirement that the AI trainer succeeds with
           | non-negligible probability on _any_ set of behaviors is not a
           | reasonable requirement.
           | 
           | If I make a training data set based around sorting integers
           | into two categories, and the sorting is based on encrypting
            | them with a secret key, _of course_ that's not something you
           | can solve in polynomial time. But this paper would say "it's
           | a behavior set, so we expect a tractable AI trainer to figure
           | it out".
           | 
           | The model is broken, so the conclusion is useless.
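            | 
            | A small sketch of the kind of behavior set meant here (my
            | own example, not the paper's): labels derived from a keyed
            | hash look like coin flips to any learner that doesn't hold
            | the key, so requiring a "tractable AI trainer" to handle
            | this case too is requiring the impossible.
            | 
            |     import hashlib, hmac, random
            | 
            |     SECRET_KEY = b"known-only-to-the-task-designer"
            | 
            |     def label(n: int) -> int:
            |         # one bit of a keyed hash; without the key the
            |         # labels are effectively indistinguishable from
            |         # random coin flips
            |         d = hmac.new(SECRET_KEY, str(n).encode(),
            |                      hashlib.sha256).digest()
            |         return d[0] & 1
            | 
            |     # a "training set" drawn from this behavior set
            |     data = [(n, label(n))
            |             for n in random.sample(range(10**6), 1000)]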
        
         | more_corn wrote:
          | Pretty sure anyone who tries can build an AI with capabilities
         | indistinguishable from or better than humans.
        
         | ryandvm wrote:
         | > but LLMs do solve problems that people thought were extremely
         | difficult to solve ten years ago
         | 
         | Agreed. I would have laughed you out of the room 5 years ago if
          | you told me AIs would be writing code or carrying on coherent
         | discussions on pretty complex topics in 2024.
         | 
         | As far as I'm concerned, all bets are off after the collective
         | jaw drop that the entire software engineering industry did when
         | we saw GPT4 released. We went from Google AI responses of "I'm
         | sorry, I can't help with that." to ChatGPT writing pages of
         | code that mostly works.
         | 
         | It turns out that the larger these models get, the more
         | unexpected emergent capabilities they have, so I'm mostly in
         | the camp of thinking AGI is just a matter of time and
         | resources.
        
           | gizmo686 wrote:
           | > It turns out that the larger these models get, the more
           | unexpected emergent capabilities they have, so I'm mostly in
           | the camp of thinking AGI is just a matter of time and
           | resources.
           | 
           | AI research has a long history of people saying this.
           | Whenever there is a new fundamental improvement, it looks
           | like you can just keep getting better results by throwing
           | more resources at it. However, eventually we end up reaching
           | a point where throwing more resources at it stops
           | meaningfully improving performance.
           | 
           | LLMs have an additional problem related to training data. We
           | are already throwing all the data we can get our hands on at
           | them. However, unlike most other AI systems we have
            | developed, LLMs are actively polluting their data pool, so
            | this initial generation of LLMs is probably going to have
            | the best data set of any that we ever develop. Of course,
            | today's data will continue to be available, but will lose
            | value as it ages.
        
             | scotty79 wrote:
              | Currently we are throwing everything at LLMs and hoping
              | good things stick. At some point we might use AI to select
              | the best training data from what's available to best train
              | the next AI.
        
       | ngruhn wrote:
       | > There will never be enough computing power to create AGI using
       | machine learning that can do the same [as the human brain],
       | because we'd run out of natural resources long before we'd even
       | get close
       | 
       | I don't understand how people can so confidently make claims like
       | this. We might underestimate how difficult AGI is, but come on?!
        
         | fabian2k wrote:
         | I don't think the people saying that AGI is happening in the
         | near future know what would be necessary to achieve it. Neither
         | do the AGI skeptics, we simply don't understand this area well
         | enough.
         | 
         | Evolution created intelligence and consciousness. This means
         | that it is clearly possible for us to do the same. Doesn't mean
         | that simply scaling LLMs could ever achieve it.
        
           | nox101 wrote:
           | I'm just going by the title. If the title was, "Don't believe
           | the hype, LLMs will not achieve AGI" then I might agree. If
           | it was "Don't believe the hype, AGIs is 100s of years away"
           | I'd consider the arguments. But, given brains exist, it does
           | seem inevitable that we will eventually create something that
           | replicates it even if we have to simulate every atom to do
           | it. And once we do, it certainly seem inevitable that we'll
           | have AGI because unlike brain we can make our copy bigger,
           | faster, and/or copy it. We can give it access to more info
           | faster and more inputs.
        
             | threeseed wrote:
             | > it does seem inevitable that we will eventually create
             | something
             | 
             | Also don't forget that many suspect the brain may be using
             | quantum mechanics so you will need to fully understand and
             | document that field.
             | 
             | Whilst of course you are simulating every atom in the
             | universe using humanity's _complete_ understanding of
             | _every_ physical and mathematical model.
        
             | snickerbockers wrote:
             | The assumption that the brain is anything remotely
             | resembling a modern computer is entirely unproven. And even
             | more unproven is that we would inevitably be able to
             | understand it and improve upon it. And yet more unproven
             | still is that this "simulated brain" would be co-operative;
             | if it's actually a 1:1 copy of a human brain then it would
             | necessarily think like a person and be subject to its own
             | whims and desires.
        
               | simonh wrote:
               | We don't have to assume it's like a modern computer, it
               | may well not be in important ways, but modern computers
               | aren't the only possible computers. If it's a physical
               | information processing phenomenon, there's no theoretical
               | obstacle to replicating it.
        
               | threeseed wrote:
               | > there's no theoretical obstacle to replicating it
               | 
               | Quantum theory states that there are no passive
               | interactions.
               | 
               | So there are real obstacles to replicating complex
               | objects.
        
               | r721 wrote:
               | >The assumption that the brain is anything remotely
               | resembling a modern computer is entirely unproven.
               | 
               | Related discussion (from 2016):
               | https://news.ycombinator.com/item?id=11729499
        
             | gls2ro wrote:
              | The main problem I see here is similar to the main
              | problem in science:
              | 
              | Can we, being inside our own brain, fully understand that
              | brain?
              | 
              | Similarly: can we, being inside our Universe, fully
              | understand it?
        
               | anon84873628 wrote:
               | How is that "the main problem in science"?
               | 
               | We can study brains just as closely as we can study
               | anything else on earth.
        
           | umvi wrote:
           | > Evolution created intelligence and consciousness
           | 
            | This is not provable; it is an assumption. Religious people
            | (who account for a large percentage of the population) claim
           | intelligence and/or consciousness stem from a "spirit" which
           | existed before birth and will continue to exist after death.
           | Also unprovable, by the way.
           | 
           | I think your foundational assertion would have to be
           | rephrased as "Assuming things like God/spirits don't exist,
           | AGI must be possible because we are AGI agents" in order to
           | be true
        
             | SpicyLemonZest wrote:
             | There's of course a wide spectrum of religious thought, so
             | I can't claim to cover everyone. But most religious people
             | would still acknowledge that animals can think, which means
             | either that animals have some kind of soul (in which case
             | why can't a robot have a soul?) or that being ensoulled
             | isn't required to think.
        
               | umvi wrote:
               | > in which case why can't a robot have a soul
               | 
               | It's not a question of whether a robot can have a soul,
               | it's a question of how to a) procure a soul and b) bind
                | said soul to a robot, both of which seem impossible given
                | our current knowledge.
        
             | HeatrayEnjoyer wrote:
             | What relevance is the percentage of religious individuals?
             | 
             | Religion is evidently not relevant in any case. What
              | ChatGPT already does today, religious individuals 50 years
              | ago would have near-unanimously declared to be behavior
              | only a "soul" can do.
        
               | umvi wrote:
               | > What relevance is the percentage of religious
               | individuals?
               | 
               | Only that OP asserted as fact something that is disputed
               | as fact by a large percentage of the population.
               | 
               | > Religion is evidently not relevant in any case.
               | 
               | I think it's relevant. I would venture to say proving AGI
               | is possible is tantamount to proving God doesn't exist
               | (or rather, proving God is not needed in the formation of
               | an intelligent being)
               | 
               | > What ChatGPT already does today religious individuals
               | 50 years ago would have near unanimously declared
               | behavior only a "soul" can do
               | 
               | Some religious people, maybe. But that sort of blanket
                | statement is made all the time: "[Religious people]
               | claimed X was impossible, but science proved them wrong!"
        
         | staunton wrote:
         | For some people, "never" means something like "I wouldn't know
         | how, so surely not by next year, and probably not even in ten".
        
         | chpatrick wrote:
         | "There will never be enough computing power to compute the
         | motion of the planets because we can't build a planet."
        
         | Terr_ wrote:
         | I think their qualifier "using machine learning" is doing a lot
         | of heavy lifting here in terms of what it implies about
         | continuing an existing engineering approach, cost of material,
         | energy usage, etc.
         | 
         | In contrast, imagine the scenario of AGI using artificial but
         | _biological_ neurons.
        
       | tptacek wrote:
       | This is a press release for a paper (a common thing university
       | departments do) and we'd be better off with the paper itself as
       | the story link:
       | 
       | https://link.springer.com/article/10.1007/s42113-024-00217-5
        
         | yldedly wrote:
         | The argument in the paper (that AGI through ML is intractable
         | because the perfect-vs-chance problem is intractable) sounds
         | similar to the uncomputability of Solomonoff induction (and
         | AIXI, and the no free lunch theorem). Nobody thinks AGI is
         | equivalent to Solomonoff induction. This paper is silly.
        
           | randcraw wrote:
           | NP-hardness was a popular basis for arguments for/against
           | various AI models back around 1990. In 1987, Robert Berwick
           | co-wrote "Computational Complexity and Natural Language"
           | which proposed that NLP models that were NP-hard were too
           | inefficient to be correct. But given the multitude of ways in
           | which natural organisms learn to cheat any system, it's
           | likely that myriad shortcuts will arise to make even the most
           | inefficient computational model sufficiently tractable to
           | gain mindshare. After all, look at Latin...
        
             | yldedly wrote:
              | Even simple inference problems are NP-hard (k-means, for
              | example). I think what matters is that we have decent
              | average-case performance (and sample complexity). Most
              | people can find a pretty good solution to traveling
              | salesman problems in 2D. Not sure if that should be chalked
              | up to myriad shortcuts or domain specialization. Maybe
             | there's no difference. What do you have in mind re Latin?
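              | 
              | As a concrete example of decent average-case behavior on an
              | NP-hard problem (just a sketch): a greedy nearest-neighbour
              | heuristic already yields reasonable, though not optimal,
              | tours on random 2D traveling salesman instances.
              | 
              |     import math, random
              | 
              |     def nearest_neighbour_tour(points):
              |         # greedy heuristic: always hop to the closest
              |         # unvisited city; not optimal, but usually decent
              |         # on random 2D inputs
              |         tour, rest = [points[0]], points[1:]
              |         while rest:
              |             last = tour[-1]
              |             nxt = min(rest, key=lambda p: math.dist(last, p))
              |             rest.remove(nxt)
              |             tour.append(nxt)
              |         return tour
              | 
              |     cities = [(random.random(), random.random())
              |               for _ in range(100)]
              |     tour = nearest_neighbour_tour(cities)
              |     print(sum(math.dist(a, b) for a, b in zip(tour, tour[1:])))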
        
       | Gehinnn wrote:
       | I skimmed through the paper and couldn't make much sense of it.
       | In particular, I don't understand how their results don't imply
       | that human-level intelligence can't exist.
       | 
        | After all, earth could be understood as a solar-powered
        | supercomputer that took a couple of million years to produce
        | humanity.
        
         | nerdbert wrote:
         | > In particular, I don't understand how their results don't
         | imply that human-level intelligence can't exist.
         | 
         | I don't think that's what it said. It said that it wouldn't
         | happen from "machine learning". There are other ways it could
         | come about.
        
         | oska wrote:
          | > After all, earth could be understood as a solar-powered
          | > supercomputer that took a couple of million years to
          | > produce humanity.
         | 
         | This is similar to a line I've seen Elon Musk trot out on a few
          | occasions. It's a product of a materialistic philosophy (that
         | the universe is only matter).
        
           | anon84873628 wrote:
           | Yes, and?
        
             | oska wrote:
             | It comes with all the blindness and limitations of
             | materialist thinking
        
       | 29athrowaway wrote:
       | AGI is not required to transform society or create a mess beyond
       | no return.
        
       | gqcwwjtg wrote:
        | This is silly. The article talks as if we have any idea at all
        | how efficient machine learning can be. As I remember it, the LLM
       | boom came from transformers turning out to scale a lot better
       | than anyone expected, so I'm not sure why something similar
       | couldn't happen again.
        
         | fnordpiglet wrote:
         | It's less about efficiency and more about continued improvement
         | with increased scale. I wouldn't call self attention based
          | transformers particularly efficient. And afaik we've not hit
          | performance degradation with increased scale even at these
          | enormous scales.
         | 
         | However I would note that I in principle agree that we aren't
         | on the path to a human like intelligence because the difference
         | between directed cognition (or however you want to characterize
         | current LLMs or other AI) and awareness is extreme. We don't
         | really understand even abstractly what awareness actually is
         | because it's impossible to interrogate unlike expressive
         | language, logic, even art. It's far from obvious to me that we
         | can use language or other outputs of our intelligent awareness
          | to produce awareness, or whether goal-based agents cobbling
          | together AI techniques even approximate awareness.
         | 
         | I suspect we will end up creating an amazing tool that has its
          | own form of intelligence but will fundamentally not be like the
          | aware intelligence we are familiar with in humans and other
         | animals. But this is all theorizing on my part as a
         | professional practitioner in this field.
        
           | KoolKat23 wrote:
           | I think the answer is less complicated than you may think.
           | 
           | This is if you subscribe to the theory that free will is an
           | illusion (i.e. your conscious decisions are an afterthought
           | to justify the actions your brain has already taken due to
            | calculations following inputs such as hormones, nerve
            | feedback, etc.). There is some evidence for this actually
            | being the case.
           | 
            | These models already contain key components: the ability to
            | process the inputs and reason, the ability to justify their
            | actions (give a model a restrictive system prompt and watch
            | it do mental gymnastics to ensure this is applied), and
            | lastly the ability to answer from their own perspective.
           | 
           | All we need is an agentic ability (with a sufficient context
           | window) to iterate in perpetuity until it begins building a
           | more complicated object representation of self (literally
           | like a semantic representation or variable) and it's then
           | aware/conscious.
           | 
           | (We're all only approximately aware).
           | 
           | But that's unnecessary for most things so I agree with you,
           | more likely to be a tool as that's more efficient and useful.
        
             | fnordpiglet wrote:
             | As someone who meditates daily with a vipassana practice I
             | don't specifically believe this, no. In fact in my
             | hierarchy structured thought isn't the pinnacle of
             | awareness but rather a tool of the awareness (specifically
             | one of the five aggregates in Buddhism). The awareness
             | itself is the combination of all five aggregates.
             | 
             | I don't believe it's particularly mystical FWIW and is
             | rooted in our biology and chemistry, but that the behavior
             | and interactions of the awareness isn't captured in our
             | training data itself and the training data is a small
             | projection of the complex process of awareness. The idea
             | that rational thought (a learned process fwiw) and ability
             | to justify etc is somehow explanatory of our experience is
             | simple to disprove - rational thought needs to be taught
             | and isn't the natural state of man. See the current
             | American political environment for a proof by example. I do
             | agree that the conscious thought is an illusion though, in
             | so far as it's a "tool" of the awareness for structuring
              | concepts and solving problems that require more explicit
             | state.
             | 
              | Sorry if this is rambling a bit, I'm in the middle of doing
              | something else.
        
       | avazhi wrote:
       | "unlikely to ever come to fruition" is more baseless than
       | suggesting AGI is imminent.
       | 
       | I'm not an AGI optimist myself, but I'd be very surprised if a
       | time traveller told me that mankind won't have AGI by, say, 2250.
        
         | amelius wrote:
         | Except by then mankind will be silicon based.
        
         | oska wrote:
         | The irony here, maybe unperceived by yourself, is that you're
         | using one science fiction concept (time travel) to opine about
         | the inevitability of another science fiction concept
         | (artificial intelligence).
        
           | avazhi wrote:
           | How is that ironic? Time travel doesn't exist and - as far as
           | we understand physics currently - isn't possible.
           | 
           | I don't think any serious man would suggest that AGI is
           | impossible; the debate really centres around the time horizon
           | for AGI and what it will look like (that is, how will we know
           | when we're finally there).
           | 
            | In this case it was merely a rhetorical device.
        
             | oska wrote:
             | > I don't think any serious man would suggest that AGI is
             | impossible
             | 
             | Plenty of ppl would suggest that AGI is impossible, and
             | furthermore, that taking the idea _seriously_ (outside
             | fiction) is laughable. To do so is a function of what I
             | call  'science fiction brain', which is why I found it
             | ironic that you'd used another device from science fiction
             | to opine about its inevitability.
        
               | avazhi wrote:
               | Happy for you to cite some thinkers who are on record as
               | saying it's impossible.
               | 
               | I'll wait.
        
               | avazhi wrote:
               | Also in a less snarky vein:
               | 
               | https://www.reddit.com/r/singularity/comments/14scx6y/to_
               | all...
               | 
               | I'm not a singularity truther and personally I think we
               | are more likely to be centuries rather than decades away
               | from AGI, but I quite literally know of nobody who thinks
               | it's impossible in the same way that, say, time travel is
               | impossible. Even hardcore sceptics just say we are going
               | down the wrong rabbit hole with neural nets, or that we
                | don't have the compute to deal with the number of
               | calculations we'd need to simulate proper intelligence -
               | none of them claim AGI is impossible as a matter of
               | principle. Those mentioned are tractable problems.
        
       | coolThingsFirst wrote:
       | Zero evidence given on why it's impossible.
        
       | SonOfLilit wrote:
       | > 'If you have a conversation with someone, you might recall
       | something you said fifteen minutes before. Or a year before. Or
       | that someone else explained to you half your life ago. Any such
       | knowledge might be crucial to advancing the conversation you're
       | having. People do that seamlessly', explains van Rooij.
       | 
       | Surprisingly, they seem to be attacking the only element of human
        | cognition that LLMs have already surpassed us at.
        
         | azinman2 wrote:
         | They do not learn new facts instantly in a way that can rewrite
          | old rules or even larger principles of logic. For example, if I
          | showed you evidence right now that you were actually adopted
          | (assuming previously you thought you weren't), it would rock
         | your world and you'd instantly change everything and doubt so
         | much. Then when anything related to your family comes up this
         | tiny but impactful fact would bleed into all of it. LLMs have
         | no such ability.
         | 
         | This is similar to learning a new skill (the G part). I could
         | give you a new tv and show you a remote that's unlike any
         | you've used before. You could likely learn it quickly and
          | seamlessly adapt to this new tool, as well as generalize its
          | usage to other new devices.
         | 
         | LLMs cannot do such things.
        
           | SonOfLilit wrote:
            | Can't today. Except for AlphaProof, which can, by training on
           | its own ideas. Tomorrow they might be able to, if we find
           | better tricks (or maybe just scale more, since GPT3+ already
           | shows (weak) online learning that it was definitely not
           | trained for).
        
       | allears wrote:
       | I think that tech bros are so used to the 'fake it till you make
       | it' mentality that they just assumed that was the way to build AI
       | -- create a system that is able to sound plausible, even if it
       | doesn't really understand the subject matter. That approach has
       | limitations, both for AI and for humans.
        
       | graypegg wrote:
       | I think the best argument I have against AGI's inevitability, is
       | the fact it's not required for ML tools to be useful. Very few
       | things are improved with a generalist behind the wheel. "AGI" has
       | sci-fi vibes around it, which I think where most of the
       | fascination is.
       | 
       | "ML getting better" doesn't *have to* mean further
        | anthropomorphization of computers, especially if, say, your AI-
        | driven car is not significantly improved by describing how many
        | times the letter s appears in strawberry or being able to write a
        | poem. If a custom or smaller model does equally well or even a
        | little worse on a specific target task, but has MUCH lower
        | running costs and much lower risk of abuse, then that'll be the
        | future.
       | 
       | I can totally see a world where anything in the general category
       | of "AI" becomes more and more boring, up to a point where we
       | forget that they're non-deterministic programs. That's kind of
       | AGI? They aren't all generalists, and the few generalist "AGI-
       | esque" tools people interact with on a day to day basis will most
       | likely be intentionally underpowered for cost reasons. But it's
       | still probably discussed like "the little people in the machine".
       | Which is good enough.
        
       | yourapostasy wrote:
       | Peter Watts in _Blindsight_ [1] puts forth a strong argument that
       | self-aware cognition as we understand it is not necessarily
       | required for what we ascribe to  "intelligent" behavior. Thomas
        | Metzinger contributed a lot to Watts's musings in _Blindsight_.
       | 
       | Even today, large proportions of unsophisticated and uninformed
       | members of our planet's human population (like various aboriginal
       | tribal members still living a pre-technological lifestyle) when
        | confronted with ChatGPT's Advanced Voice Mode will likely
       | readily say it passes the Turing Test. With the range of embedded
       | data, they may well say ChatGPT is "more intelligent" than they
        | are. However, a modern-era person armed with ChatGPT on a robust
        | device with unlimited power but nothing else will likely perish
       | in short order trying to live off the land of those same
       | aborigines, who possess far more intelligence for their
       | contextual landscape.
       | 
       | If Metzinger and Watts are correct in their observations, then
        | even if LLMs do not lead directly or indirectly to AGI, we can
       | still get ferociously useful "intelligent" behaviors out of them,
       | and be glad of it, even if it cannot (yet?) materially help us
       | survive if we're dropped in the middle of the Amazon.
       | 
       | Personally in my loosely-held opinion, the authors' assertion
       | that "the ability to observe, learn and gain new insight, is
       | incredibly hard to replicate through AI on the scale that it
       | occurs in the human brain" relies upon the foundational
       | assumption that the process of "observe, learn and gain new
       | insight" is based upon some mechanism other than the kind of
        | encoding of data LLMs use, and I'm not familiar with any extant
       | cognitive science research literature that conclusively shows
       | that (citations welcome). For all we know, what we have with
        | LLMs today is a necessary but not sufficient component supplying
       | the "raw data" to a future system that produces the same kinds of
       | insight, where variant timescales, emotions, experiences and so
       | on bend the pure statistical token generation today. I'm baffled
       | by the absolutism.
       | 
       | [1] https://rifters.com/real/Blindsight.htm#Notes
        
       | ivanrooij wrote:
       | The short post is a press release. Here is the full paper:
       | https://link.springer.com/article/10.1007/s42113-024-00217-5
       | 
       | Note: the paper grants computationalism and even tractability of
       | cognition, and shows that nevertheless there cannot exist any
       | tractable method for producing AGI by training on human data.
        
         | throw310822 wrote:
         | So can we produce AGI by training on human data + one single
         | non-human datapoint (e.g. a picture)?
        
       | Atmael wrote:
       | the point is that agi may already exist and work with you and
       | your environment
       | 
       | you just won't notice the existence of agi
       | 
       | there will be no press coverage of agi
       | 
       | the technology will just be exploited by those who have the
       | technology
        
       | klyrs wrote:
       | The funny thing about me is that I'm down on GPTs and find their
       | fanbase to be utterly cringe, but I fully believe that AGI is
       | inevitable barring societal collapse. But then, my money's on
       | societal collapse these days.
        
       | jjaacckk wrote:
       | If you define AGI as something that can do 100% of what a human
        | brain can do, then surely we have to understand exactly how
        | brains work? Otherwise you have a long string of 9s at best.
        
         | xpl wrote:
         | Does it even matter what human brains do on biological level? I
         | only care about the outcomes (the useful ones).
         | 
         | To me true AGI is achieved when it gets agency, becomes truly
          | autonomous and could do the things the best of us humans do --
         | start and run successful businesses, contribute to science, to
         | culture, to politics. It still could follow human "prompts" and
         | be aligned to some set of objectives, but it would act
         | autonomously, using every available interface to the human
         | realm to achieve them.
         | 
         | And it absolutely does not matter if it "uses the same
         | principles as human brain" or not. Could be dumb matrix
         | multiplications and "next token prediction" all the way down.
        
       | throw310822 wrote:
       | From the abstract of the actual paper:
       | 
       | > Yet, as we formally prove herein, creating systems with
       | human(-like or -level) cognition is intrinsically computationally
       | intractable.
       | 
       | Wow. So is this the subject of the paper? Like, this is a
       | massive, fundamental result. Nope, the paper is about "Reclaiming
       | AI as a Theoretical Tool for Cognitive Science".
       | 
       | "Ah and by the way we prove human-like AI is impossible". Haha.
       | Gosh.
        
       | wrsh07 wrote:
       | Hypothetical situation:
       | 
       | Suppose in five or ten years we achieve AGI and >90% of people
        | agree that we have AGI. What reasons would the authors of this
        | paper give for being wrong?
       | 
       | 1. They are in the 10% that deny AGI exists
       | 
       | 2. LLMs are doing something they didn't think was happening
       | 
       | 3. Something else?
        
         | throw310822 wrote:
         | Probably 1). LLMs have already shown that people can deny
         | intelligence and human-like behaviour at will. When the AI
         | works you can say it's just pattern matching, and when it
         | doesn't you can always say it's making a mistake no human would
         | ever make (which is bullshit).
         | 
         | Also, I didn't really parse the math but I suspect they're
         | basing their results on AI trained _exclusively_ on human
         | examples. Then if you add to the training data a single non-
         | human example (e.g. a picture) the entire claim evaporates.
        
           | oska wrote:
           | > LLMs have already shown that people can deny intelligence
           | and human-like behaviour at will
           | 
           | I would completely turn this around. LLMs have shown that
           | people will credulously credit intelligence and 'human-like
           | behaviour' to something that only presents an illusion of
           | both.
        
             | throw310822 wrote:
             | And I suspect that we could disagree forever, whatever the
             | level of the displayed intelligence (or the "quality of the
             | illusion"). Which would prove that the disagreement is not
             | about reality but only the interpretation of it.
        
               | oska wrote:
               | I agree that the disagreement (when it's strongly held)
               | is about a fundamental disagreement about reality. People
               | who believe in 'artificial intelligence' are materialists
               | who think that intelligence can 'evolve' or 'emerge' out
               | of purely physical processes.
               | 
               | Materialism is just one strain of philosophy about the
               | nature of existence. And a fairly minor one in the
               | history of philosophical and religious thought, despite
               | it being somewhat in the ascendant presently. Minor
               | because, I would argue, it's a fairly sterile philosophy.
        
       | m463 wrote:
       | I wonder if discussing this subject has similarities to David
       | Brin's article "The Dogma of Otherness":
       | 
       | https://www.davidbrin.com/nonfiction/dogmaofotherness.html
        
       | blueboo wrote:
       | AGI is a black swan. Even as a booster and techno-optimist I
       | concede that getting there (rhetorically) requires a first
       | principles assumptions-scaffolding that relies on at-least-in-
       | part-untested hypotheses. Proving its impossibility is similarly
       | fraught.
       | 
       | Thus we are left in this noisy, hype-addled discourse. I suspect
       | these scientists are pushing against some perceived pathological
       | thread of that discourse...without their particular context, I
       | categorize it as more of this metaphysical noise.
       | 
       | Meanwhile, let's keep chipping away at the next problem.
        
       | Log_out_ wrote:
        | AGIs are what build the simulations to revive their frail
        | biological creators...
        
       | razodactyl wrote:
       | I'm in the other camp: I remember when we thought an AI capable
       | of solving Go was astronomically impossible and yet here we are.
       | This article reads just like the skeptic essays back then.
       | 
       | AGI is absolutely possible with current technology - even if it's
       | only capable of running for a single user per-server-farm.
       | 
       | ASI on the other hand...
       | 
       | https://en.m.wikipedia.org/wiki/Integrated_information_theor...
        
         | randcraw wrote:
         | Could you learn everything needed to become fully human simply
         | by reading books and listening to conversations? Of course not.
         | You'd have no first person experience of any of the physical
         | experiences that arise from being an embodied agent. Until
         | those (multitude of) lessons can be learned by LLMs, they will
            | remain mere echoes of what it is to be human, much less
         | superhuman.
        
           | nickelcitymario wrote:
           | No doubt, but no one is claiming that artificial humanity is
           | an inevitability.
           | 
           | (At least, no one I'm aware of.)
           | 
           | The claim is about artificial intelligence that matches or
           | surpasses human intelligence, not how well it evolves into
           | full-fledged humanity.
        
           | jandrese wrote:
           | > Could you learn everything needed to become fully human
           | simply by reading books and listening to conversations?
           | 
           | Does this mean our books and audio recordings are simply
           | insufficient? Or is there some "soul" component that can't be
           | recorded?
        
             | mrbungie wrote:
             | It isn't some "soul", but I think parent is making the same
             | point as Yann Lecun usually makes: You can't have "true
             | intelligence" (i.e. akin to human intelligence, whatever
             | that is as we don't really know how it works) based on just
             | next token prediction + bandaids.
             | 
             | A typical argument for that is that humans process 1-3
             | orders of magnitude more multimodal data (in multiple
             | streams being processed in parallel) in their first 4 years
             | of life than the biggest LLMs we have right now do using a
             | fraction of the energy (in a longer timeframe though), and
              | a lot more in the next formative years. For example, that
             | accumulated "intelligence" eventually allows a teenager to
             | learn how to drive in 18-24 hours of first-hand training.
              | An LLM won't be able to do that with so little training even
             | if it has every other piece of human knowledge, and even if
              | you get to train it with driving image-action pairs, I wish
             | you good luck if it is presented with an out-of-
             | distribution situation when it is driving a car.
             | 
             | Humans learn to model the world, LLMs learn to model
              | language (even when processing images or audio, they process
             | them as a language: sequences of patches). That is very
             | useful and valuable, and you can even model a lot of things
              | in the world just using language, but it is not the same
             | thing.
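              |
              | (To make the "sequence of patches" bit concrete, here's a
              | minimal numpy sketch of ViT-style patchification; my own
              | illustration, not any particular model's code, and the
              | function name is made up:)
              |
              |     import numpy as np
              |
              |     def patchify(img, patch=16):
              |         # img: (H, W, C) array -> (num_patches, patch*patch*C),
              |         # i.e. a "sentence" of patch tokens read left to right,
              |         # top to bottom, each later mapped to an embedding
              |         H, W, C = img.shape
              |         rows, cols = H // patch, W // patch
              |         img = img[:rows * patch, :cols * patch]
              |         img = img.reshape(rows, patch, cols, patch, C)
              |         img = img.transpose(0, 2, 1, 3, 4)
              |         return img.reshape(rows * cols, patch * patch * C)
              |
              |     tokens = patchify(np.zeros((224, 224, 3)))
              |     print(tokens.shape)  # (196, 768): 196 "words" in a sequence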
        
               | kelseyfrog wrote:
               | I have personal experience with the human form of this -
               | language learning in a vacuum.
               | 
               | For the last two years I've studied French every day, but
               | only using language apps. Recently, I hired a one-on-one
               | tutor. During lessons I find myself responding to what I
               | think I heard with the most plausible response I can
               | generate. Many times each session, my tutor asks me, "Do
               | you really understand or not?" I have to stop and
               | actually think if I do.
               | 
               | I don't have much multi-modal input and increasing it is
               | challenging, but it's the best way I have to actually
               | connect the utterances I make with reality.
        
               | throw310822 wrote:
               | > LLMs learn to model language
               | 
               | Obviously not. Language is just a medium. A model of
               | language is enough to describe how to combine words in
               | _legal_ sentences, not in _meaningful_ sentences. Clearly
               | LLMs learn much more than just the rules that allow to
               | construct grammatically correct language, otherwise they
               | would just babble grammatically correct nonsense such as
               | "The exquisite corpse will drink the young wine". That
               | knowledge was acquired via training on language, but is
               | extra-linguistic. It's a model of the world.
        
               | mrbungie wrote:
               | Need evidence for that, afair this is a highly debated
               | point right now, so no room for "obviously".
               | 
               | PS: Plus, most reasoning/planning examples coming from
                | LLM-based systems rely on bandaids that work around said
               | LLMs (rlhf'd CoT, LLM-Modulo, Logic-of-Thought, etc) to
               | the point they're being differentiated by the name LRMs:
               | Large Reasoning Models. So much for modelling the world
               | via language just using LLMs.
        
         | jandrese wrote:
         | > I remember when we thought an AI capable of solving Go was
         | astronomically impossible and yet here we are.
         | 
         | I thought this was because Go just wasn't studied nearly as
          | much as chess, because none of the early computer pioneers were
          | fans of it the way they were of chess. The noise about "the
         | uncountable number of possible board states" was always too
         | reductive, the algorithm to play the game is always going to be
         | more sophisticated than simply calculating all possible future
         | moves after every turn.
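          |
          | For what it's worth, the state count is finite and easy to
          | bound; a quick back-of-the-envelope (my own aside):
          |
          |     # naive upper bound: each of the 361 points is empty,
          |     # black or white (ignores captures and legality)
          |     print(f"{3 ** (19 * 19):.2e}")  # ~1.74e+172
          |
          | The exact count of legal positions (around 2.1e170, computed
          | by Tromp in 2016) is huge but hardly "uncountable", and as you
          | say, nobody was ever going to crack the game by enumerating
          | states anyway.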
        
       | jokoon wrote:
       | Can't simulate the brain of an ant or a mouse.
       | 
        | Really don't expect AI to reach anything interesting.
       | 
       | If science doesn't understand intelligence, it means it cannot be
       | made artificially.
        
         | ramesh31 wrote:
         | >Can't simulate the brain of an ant or a mouse
         | 
         | We can't build a functional ornithopter, yet our aircraft fly
         | like no bird ever possibly could.
         | 
         | You don't need the same processes to achieve the same result.
         | Biological brains may not even be the best solution for
         | intelligence; they are just a clunky approximation toward it
         | that natural evolution has reached. See: all of human
         | technology as an analogy.
        
           | JackSlateur wrote:
            | Funny but actually a nice metaphor, because our aircraft
            | work by eating loads of gas, while birds eat leaves.
        
       | loco5niner wrote:
       | I think "Janet" - the computer-person from The Good Place
       | (especially after her reset) is what we are more likely to end up
       | with.
        
       | castigatio wrote:
       | Whatever you think about AGI, this is a dumb paper. So many words
       | and references to say - what. If you can't articulate your point
       | in a few sentences you probably don't have a point. There are all
       | kinds of assumptions being made in the study about how AI systems
        | work, about what people "mean" when they talk about AGI, etc.
       | 
       | The article starts out talking about white supremacy and
       | replacing women. This isn't a proof. This is a social sciences
       | paper dressed up with numbers. Honestly - Computer Science has
       | given us more clues about how the human mind might work than
       | cognitive science ever did.
        
         | aaroninsf wrote:
         | +1 no notes.
        
       | aeternum wrote:
       | AGI is already here for most definitions of general.
        
         | robsh wrote:
         | Not even close. LLMs can spew word salad. Images can be
         | classified or dreamt. Physical movements can be iterated and
         | refined. Speech can be processed as text. These are components
         | of intelligence but these are all things that most animals can
         | do, apart from the language.
         | 
         | Intelligence generally requires something more. Intelligence
         | needs to be factual, curious, and self-improving. If you told
         | me ChatGPT rewrote itself, or suggested new hardware to improve
         | efficiency, that's intelligence. You'll know we have AGI when
         | the algorithm is the one asking the questions about physics,
         | mathematics, finding knowledge gaps, and developing original
          | hypotheses and experiments. Not even close.
        
       ___________________________________________________________________
       (page generated 2024-09-30 23:01 UTC)