[HN Gopher] Don't believe the hype: AGI is far from inevitable
___________________________________________________________________
Don't believe the hype: AGI is far from inevitable
Author : mpweiher
Score : 42 points
Date : 2024-09-29 19:02 UTC (3 hours ago)
(HTM) web link (www.ru.nl)
(TXT) w3m dump (www.ru.nl)
| loa_in_ wrote:
| AGI is about as far away as it was two decades ago. Language
| models are merely a dent in the problem, and will probably be
| the precursor to a natural-language interface to the thing.
| lumost wrote:
| It's useful to consider the rise of computer graphics and CGI.
| When you first see CGI, you might think that the software is
| useful for _general_ simulations of physical systems. The
| reality is that it only provides a thin facsimile.
|
| Real simulation software has always been separate from computer
| graphics.
| Closi wrote:
| We are clearly closer than 20 years ago - o1 is an order of
| magnitude closer than anything in the mid-2000s.
|
| Also, I would think most people considered AGI science
| fiction in 2004 - now we consider it a technical possibility,
| which demonstrates a huge change.
| sharadov wrote:
| The current LLMs are just good at parroting, and even that is
| sometimes unbelievably bad.
|
| We still have barely scratched the surface of how the brain truly
| works.
|
| I will start worrying about AGI when that is completely figured
| out.
| diob wrote:
| No need to worry about AGI until the LLMs are writing their own
| source.
| pzo wrote:
| So what? Current LLMs have been really useful and can still be
| improved to be used in millions of robots that need to be good
| enough to support many specialized but repetitive tasks - this
| would have a tremendous impact on the economy itself.
| Gehinnn wrote:
| Basically the linked article argues like this:
|
| > That's because cognition, or the ability to observe, learn and
| gain new insight, is incredibly hard to replicate through AI on
| the scale that it occurs in the human brain.
|
| (no other more substantial arguments were given)
|
| I'm also very skeptical about seeing AGI soon, but LLMs do solve
| problems that people thought were extremely difficult to solve
| ten years ago.
| babyshake wrote:
| It's possible we'll see AI become increasingly AGI-like in
| some ways but not in others. For example, AI that can make
| novel scientific discoveries but can't write a song as good
| as one by your favorite musician, who creates a strong
| emotional effect with their music.
| KoolKat23 wrote:
| This I'm very sure will be the case, but everyone will still
| move the goalposts and look past the fact that different
| humans have different strengths and weaknesses too. A
| tone-deaf human, for instance.
| godelski wrote:
| > but LLMs do solve problems that people thought were extremely
| difficult to solve ten years ago.
|
| Well, for something to be G or I you need it to solve novel
| problems. These things have ingested most of the Internet, and
| I've yet to see a "reasoning" benchmark disentangle memorization
| from reasoning. Memorization doesn't mean they aren't useful
| (not sure why this was ever conflated, since... computers are
| useful...), but it's very different from G or I. And remember
| that these tools are trained to produce output humans prefer. If
| humans prefer things to look like reasoning, then that's what
| they optimize. [0]
|
| Sure, maybe your cousin Throckmorton is dumb, but that's beside
| the point.
|
| That said, I see no reason human level cognition is impossible.
| We're not magic. We're machines that follow the laws of
| physics. ML systems may be far from capturing what goes on in
| these computers, but that doesn't mean magic exists.
|
| [0] If it walks like a duck, quacks like a duck, swims like
| a duck, and looks like a duck, it's _probably_ a duck. But
| probably doesn't mean it isn't a well-made animatronic. We
| have those too, and they'll convince many humans they are
| ducks. But that doesn't change what's inside. The subtlety
| matters.
| stroupwaffle wrote:
| I think it will be an organoid brain bio-machine. We can
| already grow organs--just need to grow a brain and connect it
| to a machine.
| Moosdijk wrote:
| The keyword being "just".
| ggm wrote:
| Just grow, just connect, just sustain, just avoid the
| many pitfalls. Indeed, "just" is key.
| godelski wrote:
| just (adverb): to turn a complex thing into magic
| with a simple wave of the hands. E.g. to turn
| lead into gold you _just_ need to remove 3 protons.
| idle_zealot wrote:
| Somehow I doubt that organic cells (structures optimized
| for independent operation and reproduction, then adapted to
| work semi-cooperatively) resemble optimal compute fabric
| for cognition. By that same token I doubt that optimal
| compute fabric for cognition resembles GPUs or CPUs as we
| understand them today. I would expect whatever this
| efficient design is to be structurally unlikely to occur
| naturally, and to involve some very exotic manufactured
| materials.
| danaris wrote:
| I have seen far, far too many people say things along the
| lines of "Sure, LLMs currently don't seem to be good at
| [thing LLMs are, at least as of now, _fundamentally incapable
| of_], but hey, some people are pretty bad at that sometimes
| too!"
|
| It demonstrates such a complete misunderstanding of the basic
| nature of the problem that I am left baffled that some of
| these people claim to actually be in the machine-learning
| field themselves.
|
| How can you not understand the difference between "humans are
| _not absolutely perfect or reliable_ at this task" and "LLMs
| _by their very nature_ cannot perform this task"?
|
| I do not know if AGI is possible. Honestly, I'd love to
| believe that it is. However, it has not remotely been
| demonstrated that it is possible, and as such, it follows
| that it cannot have been demonstrated that it is inevitable.
| If you want to believe that it is inevitable, then I have no
| quarrel with you; if you want to _preach_ that it is
| inevitable, and draw specious inferences to "prove" it, then
| I have a big quarrel with you.
| tptacek wrote:
| Are you talking about the press release that the story on HN
| currently links to, or the paper that press release is about?
| The paper (I'm not vouching for it; I just skimmed it) appears
| to reduce AGI to a theoretical computational model, and then
| supplies a proof that it's not solvable in polynomial time.
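|
| For the unfamiliar, "not solvable in polynomial time" has a
| precise shape. As I read it (my gloss, not the paper's own
| notation), the claim is that any algorithm A for their
| idealized learning problem has worst-case running time that
| outgrows every polynomial in the input size n:
|
|     $\forall c \in \mathbb{N}:\; T_A(n) \neq O(n^c)$
|
| so constant-factor hardware scaling can't rescue it as n grows.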
| Gehinnn wrote:
| I was referring to the press release article. I also looked
| at the paper now, and to me the proof they presented looked
| more like a technicality than a new insight.
|
| If it's not solvable in polynomial time, how did nature solve
| it in a couple of million years?
| jprete wrote:
| Nature is not necessarily computational; the size of the
| problem might also be such that four _billion_ years of
| evolution was enough.
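|
| Back-of-envelope (very rough, illustrative figures): four
| billion years is ~$10^{17}$ seconds, and with on the order of
| $10^{30}$ organisms alive at any time, each one a parallel
| "evaluation" of a genome, evolution's cumulative search budget
| is something like
|
|     $10^{17}\,\mathrm{s} \times 10^{30} \approx 10^{47}$ organism-seconds
|
| -- a budget that can absorb a lot of exponential cost without
| the problem being polynomial-time solvable.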
| ngruhn wrote:
| > There will never be enough computing power to create AGI using
| machine learning that can do the same [as the human brain],
| because we'd run out of natural resources long before we'd even
| get close
|
| I don't understand how people can so confidently make claims like
| this. We might underestimate how difficult AGI is, but come on?!
| fabian2k wrote:
| I don't think the people saying that AGI is happening in the
| near future know what would be necessary to achieve it. Neither
| do the AGI skeptics; we simply don't understand this area well
| enough.
|
| Evolution created intelligence and consciousness. This means
| that it is clearly possible for us to do the same. Doesn't mean
| that simply scaling LLMs could ever achieve it.
| nox101 wrote:
| I'm just going by the title. If the title were "Don't believe
| the hype, LLMs will not achieve AGI" then I might agree. If
| it were "Don't believe the hype, AGI is 100s of years away"
| I'd consider the arguments. But, given brains exist, it does
| seem inevitable that we will eventually create something that
| replicates one, even if we have to simulate every atom to do
| it. And once we do, it certainly seems inevitable that we'll
| have AGI, because unlike a brain we can make our copy bigger,
| faster, and/or copy it. We can give it access to more info
| faster and more inputs.
| threeseed wrote:
| > it does seem inevitable that we will eventually create
| something
|
| Birds fly. Therefore it seems inevitable that humans will
| evolve to also fly.
|
| Also don't forget that many suspect the brain may be using
| quantum mechanics so you will need to fully understand and
| document that field. Whilst of course you are simulating
| every atom in the universe using humanity's _complete_
| understanding of _every_ physical and mathematical model.
| umvi wrote:
| > Evolution created intelligence and consciousness
|
| This is not provable; it's an assumption. Religious people
| (who account for a large percentage of the population) claim
| intelligence and/or consciousness stem from a "spirit" which
| existed before birth and will continue to exist after death.
| Also unprovable, by the way.
|
| I think your foundational assertion would have to be
| rephrased as "Assuming things like God/spirits don't exist,
| AGI must be possible because we are AGI agents" in order to
| be true.
| staunton wrote:
| For some people, "never" means something like "I wouldn't know
| how, so surely not by next year, and probably not even in ten".
| chpatrick wrote:
| "There will never be enough computing power to compute the
| motion of the planets because we can't build a planet."
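|
| A minimal sketch of the point (illustrative Python, with units
| chosen so that G times the sun's mass equals 4*pi^2): a dozen
| lines of arithmetic track a planet's orbit, no planet required.
|
|       import math
|
|       GM = 4 * math.pi ** 2      # AU^3/yr^2; Earth's period becomes 1 yr
|       x, y = 1.0, 0.0            # start 1 AU from the sun
|       vx, vy = 0.0, 2 * math.pi  # circular-orbit speed, AU/yr
|       dt = 0.001                 # time step in years
|
|       # semi-implicit (symplectic) Euler: update velocity, then position
|       for _ in range(int(1.0 / dt)):
|           r3 = (x * x + y * y) ** 1.5
|           vx -= GM * x / r3 * dt
|           vy -= GM * y / r3 * dt
|           x += vx * dt
|           y += vy * dt
|
|       print(f"after one year: ({x:.3f}, {y:.3f}) AU")  # lands near (1, 0)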
| Terr_ wrote:
| I think their qualifier "using machine learning" is doing a lot
| of heavy lifting here in terms of what it implies about the
| engineering approach, cost of material, energy usage, etc.
|
| In contrast, imagine the scenario of AGI using artificial but
| _biological_ neurons.
| tptacek wrote:
| This is a press release for a paper (a common thing university
| departments do) and we'd be better off with the paper itself as
| the story link:
|
| https://link.springer.com/article/10.1007/s42113-024-00217-5
| Gehinnn wrote:
| I skimmed through the paper and couldn't make much sense of it.
| In particular, I don't see how their results avoid implying
| that human-level intelligence can't exist.
|
| After all, earth could be understood as a solar-powered
| supercomputer that took a couple of million years to produce
| humanity.
| 29athrowaway wrote:
| AGI is not required to transform society or create a mess past
| the point of no return.
| gqcwwjtg wrote:
| This is silly. The article talks as if we had any idea at all
| how efficient machine learning can be. As I remember it, the LLM
| boom came from transformers turning out to scale a lot better
| than anyone expected, so I'm not sure why something similar
| couldn't happen again.
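|
| For reference, the surprise had a concrete empirical shape: in
| Kaplan et al. (2020), language-model test loss fell as a smooth
| power law in parameter count N across many orders of magnitude,
| roughly
|
|     $L(N) \approx (N_c / N)^{\alpha_N}$ with $\alpha_N \approx 0.076$
|
| and nothing in such a fit says where, or whether, it stops.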
| fnordpiglet wrote:
| It's less about efficiency and more about continued improvement
| with increased scale. I wouldn't call self-attention based
| transformers particularly efficient. And afaik we've not hit
| performance degradation with increased scale, even at these
| enormous scales.
|
| However, I would note that I in principle agree that we aren't
| on the path to a human-like intelligence, because the difference
| between directed cognition (or however you want to characterize
| current LLMs or other AI) and awareness is extreme. We don't
| really understand even abstractly what awareness actually is,
| because it's impossible to interrogate, unlike expressive
| language, logic, even art. It's far from obvious to me that we
| can use language or other outputs of our intelligent awareness
| to produce awareness, or whether goal-based agents cobbling
| together AI techniques even approximate awareness.
|
| I suspect we will end up creating an amazing tool that has its
| own form of intelligence but will fundamentally not be like the
| aware intelligence we are familiar with in humans and other
| animals. But this is all theorizing on my part as a
| professional practitioner in this field.
| avazhi wrote:
| "unlikely to ever come to fruition" is more baseless than
| suggesting AGI is imminent.
|
| I'm not an AGI optimist myself, but I'd be very surprised if a
| time traveller told me that mankind won't have AGI by, say, 2250.
| coolThingsFirst wrote:
| Zero evidence given on why it's impossible.
| SonOfLilit wrote:
| > 'If you have a conversation with someone, you might recall
| something you said fifteen minutes before. Or a year before. Or
| that someone else explained to you half your life ago. Any such
| knowledge might be crucial to advancing the conversation you're
| having. People do that seamlessly', explains van Rooij.
|
| Surprisingly, they seem to be attacking the only element of
| human cognition at which LLMs have already surpassed us.
| azinman2 wrote:
| They do not learn new facts instantly in a way that can rewrite
| old rules or even larger principles of logic. For example, if I
| showed you evidence right now that you were actually adopted
| (assuming previously you thought you weren't), it would rock
| your world and you'd instantly change everything and doubt so
| much. Then whenever anything related to your family came up,
| this tiny but impactful fact would bleed into all of it. LLMs
| have no such ability.
|
| This is similar to learning a new skill (the G part). I could
| give you a new TV and show you a remote that's unlike any
| you've used before. You could likely learn it quickly and
| seamlessly adopt this new tool, as well as generalize its usage
| to other new devices.
|
| LLMs cannot do such things.
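|
| A toy sketch of that distinction (pure illustration, not a real
| LLM API): an in-context "fact" lives only in the prompt and dies
| with the session, while durable knowledge would require updating
| the frozen weights, which inference-time use never touches.
|
|       WEIGHTS = {"family": "biological parents"}  # fixed at training time
|
|       def answer(question: str, context: str = "") -> str:
|           # the model can parrot a fact while it sits in the context...
|           if "adopted" in context:
|               return "you were adopted"
|           # ...but nothing here rewrites WEIGHTS; once the context is
|           # gone, the model reverts to whatever training baked in
|           return WEIGHTS["family"]
|
|       print(answer("who raised me?", context="you were adopted"))
|       print(answer("who raised me?"))  # new session: the fact is gone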
___________________________________________________________________
(page generated 2024-09-29 23:00 UTC)