[HN Gopher] I doubt that anything resembling genuine AGI is with...
       ___________________________________________________________________
        
       I doubt that anything resembling genuine AGI is within reach of
       current AI tools
        
       Author : gmays
       Score  : 92 points
       Date   : 2025-12-21 05:02 UTC (17 hours ago)
        
 (HTM) web link (mathstodon.xyz)
 (TXT) w3m dump (mathstodon.xyz)
        
       | mindcrime wrote:
       | Terry Tao is a genius, and I am not. So I probably have no
       | standing to claim to disagree with him. But I find this post less
       | than fulfilling.
       | 
       | For starters, I think we can rightly ask what it means to say
       | "genuine artificial general intelligence", as opposed to just
       | "artificial general intelligence". Actually, I think it's fair to
       | ask what "genuine artificial" $ANYTHING would be.
       | 
       | I _suspect_ that what he means is something like  "artificial
       | intelligence, but that works just like human intelligence".
       | Something like that seems to be what a lot of people are saying
        | when they talk about AI and make claims like "that's not _real_
        | AI". But for myself, I reject the notion that we need "genuine
       | artificial general intelligence" that works like human
       | intelligence in order to say we have artificial general
       | intelligence. Human intelligence is a nice existence proof that
       | some sort of "general intelligence" is possible, and a nice
       | example to model after, but the marquee sign does say
       | _artificial_ at the end of the day.
       | 
       | Beyond that... I know, I know - it's the oldest cliche in the
       | world, but I will fall back on it because it's still valid, no
       | matter how trite. We don't say "airplanes don't really fly"
        | because they don't use the exact same _mechanism_ as birds. And I
        | don't see any reason to say that an AI system isn't "really
        | intelligent" if it doesn't use the same mechanism as humans.
       | 
       | Now maybe I'm wrong and Terry meant something altogether
       | different, and all of this is moot. But it felt worth writing
       | this out, because I feel like a lot of commenters on this subject
       | engage in a line of thinking like what is described above, and I
       | think it's a poor way of viewing the issue no matter who is doing
       | it.
        
         | npinsker wrote:
         | > I suspect that what he means is something like "artificial
         | intelligence, but that works just like human intelligence".
         | 
         | I think he means "something that can discover new areas of
         | mathematics".
        
           | dr_dshiv wrote:
           | I'd love to take that bet
        
           | mindcrime wrote:
           | Very reasonable, given his background!
           | 
           | That does seem awfully specific though, in the context of
           | talking about "general" intelligence. But I suppose it could
           | rightly be argued that any intelligence capable of
           | "discovering new areas of mathematics" would inherently need
           | to be fairly general.
        
             | themafia wrote:
             | > That does seem awfully specific though
             | 
             | It's one of a large set of attributes you would expect in
             | something called "AGI."
        
           | metalcrow wrote:
           | In that case, I'm afraid many people, myself included, would
           | not be describable as "general intelligences"!
        
         | enraged_camel wrote:
         | The airplane analogy is a good one. Ultimately, if it quacks
         | like a duck and walks like a duck, does it really matter if
         | it's a real duck or an artificial one? Perhaps only if
         | something tries to eat it, or another duck tries to mate with
         | it. In most other contexts though it could be a valid
         | replacement.
        
           | clort wrote:
           | Just out of interest though, can you suggest some of these
           | other contexts where you might want a valid replacement for a
           | duck that looked like one, walked like one and quacked like
           | one but was not one?
        
             | alex43578 wrote:
             | Decoy for duck hunting?
        
               | omnimus wrote:
               | Are you suggesting LLMs are decoy for investor hunting?
        
               | heresie-dabord wrote:
               | In the same sly vein of humour, the first rule of Money
               | Club is to never admit that the duck may be lame.
        
         | catoc wrote:
         | I interpret "artificial" in "artificial general intelligence"
         | as "non-biological".
         | 
         | So in Tao's statement I interpret "genuine" not as an adverb
         | modifying the "artificial" adjective but as an attributive
         | adjective modifying the noun "intelligence", describing its
         | quality... "genuine intelligence that is non-biological in
         | nature"
        
           | mindcrime wrote:
           | _So in Tao's statement I interpret "genuine" not as an adverb
           | modifying the "artificial" adjective but as an attributive
           | adjective modifying the noun "intelligence", describing its
           | quality... "genuine intelligence that is non-biological in
           | nature"_
           | 
           | That's definitely possible. But it seems redundant to phrase
           | it that way. That is to say, the goal (the _end_ goal anyway)
            | of the AI enterprise has always been, at least as I've
           | always understood it, to make "genuine intelligence that is
           | non-biological in nature". That said, Terry is a
           | mathematician, not an "AI person" so maybe it makes more
           | sense when you look at it from that perspective. I've been
           | immersed in AI stuff for 35+ years, so I may have developed a
           | bit of myopia in some regards.
        
             | catoc wrote:
              | I agree, it's redundant. To us humans - to me at least -
              | intelligence is always general (calculator: not;
              | chimpanzee: a little), so "general intelligence" can also
              | already be considered redundant. Using "genuine" is more
              | redundancy being heaped on (with the assumed goal of making
              | a distinction between "genuine" AGI and tools that appear
              | smart in limited domains).
        
         | scellus wrote:
         | I find it odd that the post above is downvoted to grey, feels
         | like some sort of latent war of viewpoints going on, like below
         | some other AI posts. (Although these misvotes are usually fixed
         | when the US wakes up.)
         | 
         | The point above is valid. I'd like to deconstruct the concept
          | of intelligence even more. What humans are able to do is a
          | relatively artificial collection of skills a physical and
          | social organism needs. The highly valued intelligence around
          | math etc. is a corner case of those abilities.
         | 
          | There's no reason to think that human mathematical intelligence
          | is unique in its structure, an isolated, well-defined skill.
         | Artificial systems are likely to be able to do much more, maybe
         | not exactly the same peak ability, but adjacent ones, many of
         | which will be superhuman and augmentative to what humans do.
         | This will likely include "new math" in some sense too.
        
           | omnimus wrote:
            | What everybody is looking for is imagination and invention.
            | Current AI systems can give a best-guess statistical answer
            | from the dataset they've been fed. It is always compression.
           | 
            | The problem, and what most people intuitively understand, is
            | that this compression is not enough. There is something more
            | going on, because people can come up with novel
            | ideas/solutions and, what's more important, they can judge
            | and figure out whether a solution will work. So even if the
            | core of the idea is "compressed" or "mixed" from past
            | knowledge, there is some other process going on that leads
            | to the important part: invention and progress.
            | 
            | That is why people hate the term AI: it describes only a
            | partial capability of "intelligence", or it might even be a
            | complete illusion of intelligence that is nowhere close to
            | what people would expect.
        
             | in-silico wrote:
              | > Current AI systems can give a best-guess statistical
              | answer from the dataset they've been fed.
             | 
             | What about reinforcement learning? RL models don't train on
             | an existing dataset, they try their own solutions and learn
             | from feedback.
             | 
             | RL models can definitely "invent" new things. Here's an
             | example where they design novel molecules that bind with a
             | protein: https://academic.oup.com/bioinformatics/article/39
             | /4/btad157...
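A minimal sketch of the dataset-free loop described in the comment above: an epsilon-greedy bandit agent that generates its own trials and learns purely from reward feedback. This is a toy illustration of the general RL idea, not the molecule-design method from the linked paper; all names are hypothetical.

```python
import random

def learn_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: no pre-existing dataset. The agent
    tries arms itself and updates estimates from reward feedback."""
    rng = random.Random(seed)
    n = len(reward_probs)
    estimates = [0.0] * n   # running estimate of each arm's reward
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:                       # explore
            arm = rng.randrange(n)
        else:                                            # exploit best guess
            arm = max(range(n), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        # incremental average of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = learn_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: est[a])
```

The agent ends up preferring the best arm without ever seeing a labeled dataset, which is the distinction the comment is drawing against supervised training.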
        
               | omnimus wrote:
                | Finding variations in a constrained haystack with
                | measurable, well-defined results is what machine
                | learning has always been good at. Tracing the most
                | efficient Trackmania route is impressive, and the
                | resulting route might be original in the sense that a
                | human would never come up with it. But is it actually
                | novel in a creative, critical way? Isn't it simply
                | computational brute force? How big would that force have
                | to be in the physical, less constrained world?
        
       | Peteragain wrote:
        | Exactly! My take is that "glorified autocomplete" is far more
        | useful than it sounds. In GOFAI terms, it does case-based
        | reasoning... but better.
        
         | Izikiel43 wrote:
         | I call it clippy's revengeance
        
           | heresie-dabord wrote:
           | Clippy 2: Clippy Goes Nuclear
           | 
           | But more seriously, this is ELIZA with network effects.
           | Credulous multitudes chatting with a system that they believe
           | is sentient.
        
       | Davidzheng wrote:
        | The text continues "with current AI tools", which is not clearly
        | defined to me (does it mean current gen + scaffold? Anything
        | that is an LLM reasoning model? Anything built with a large LLM
        | inside?). In any case, the title is misleading for not
        | containing the end of the sentence. Please can we fix the title?
        
         | Davidzheng wrote:
          | Also, I think the main source of interest is that it was said
          | by Terry, so that should be in the title too.
        
       | blobbers wrote:
        | I think what Terry is saying is that with the current set of
        | tools, there are classes of problems requiring cleverness: ones
        | where you can guess (glorified autocomplete), check the answer,
        | fail, and then add information from the failure and repeat.
       | 
       | I guess ultimately what is intelligence? We compact our memories,
       | forget things, and try repeatedly. Our inputs are a bit more
       | diverse but ultimately we autocomplete our lives. Hmm... maybe
       | we've already achieved this.
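The guess / check / learn-from-failure loop described above can be sketched mechanically. Bisection is perhaps the simplest instance: every failed guess yields one bit of information (too low or too high) that narrows the next guess. A toy analogy, not anything Tao proposed:

```python
def guess_check_refine(check, lo, hi, tol=1e-9):
    """Repeatedly guess, check, and use the failure information
    (too low vs. too high) to shrink the search interval."""
    while hi - lo > tol:
        guess = (lo + hi) / 2
        if check(guess):     # guess is still too low: raise the floor
            lo = guess
        else:                # guess overshot: lower the ceiling
            hi = guess
    return (lo + hi) / 2

# e.g. find sqrt(2) by guessing: a guess x "passes" while x*x < 2
root = guess_check_refine(lambda x: x * x < 2, 0.0, 2.0)
```

The interesting open question in the thread is whether the "cleverness" problems Tao has in mind admit any checker this cheap; when checking is itself expensive or ill-defined, the loop stops being mechanical.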
        
         | judahmeek wrote:
          | Someone recently told me that their definition of intelligence
         | was data-efficient extrapolation & I think that definition is
         | pretty good as it separates intelligence from knowledge,
         | sentience, & sapience.
        
       | mentalgear wrote:
       | Some researchers proposed using, instead of the term "AI", the
       | much more fitting "self-parametrising probabilistic model" or
       | just advanced auto-complete - that would certainly take the hype-
       | inducing marketing PR away.
        
         | attendant3446 wrote:
         | The term "AI" didn't make sense from the beginning, but I guess
         | it sounded cool and that's why everything is "AI" now. And I
         | doubt it will change, regardless of its correctness.
        
           | chimprich wrote:
           | John McCarthy coined the term "Artificial Intelligence" in
           | the 1950s. I doubt he was trying to be cool. The whole field
           | of research involved in getting computers to do intelligent
           | things has been referred to as AI for many decades.
        
         | metalman wrote:
          | AI is intermittent wipers, for words, and the two are
          | completely tied, as the perfect test for AI will be to run
          | intermittent wipers to everybody's satisfaction.
        
         | pavlov wrote:
         | That's like arguing that washing machines should be called
         | rapid-rotation water agitators.
         | 
         | It's the result that consumers are interested in, not the
         | mechanics of how it's achieved. Software engineers are often
         | extraordinarily bad at seeing the difference because they're so
         | interested in the implementation details.
        
           | ForHackernews wrote:
           | I'd be mad if washing machines were marketed as a "robot
           | maid"
        
             | pavlov wrote:
             | A woman from 1825 would probably happily accept that
             | description though (notwithstanding that the word "robot"
             | wasn't invented yet).
             | 
             | A machine that magically replaces several hours of her
             | manual work? As far as she's concerned, it's a specialized
             | maid that doesn't eat at her table and never gets sick.
        
               | auggierose wrote:
               | Machines do get "sick" though, and they eat electricity.
        
               | pavlov wrote:
               | Negligible cost compared to a real maid in 1825. The
               | washing machine also doesn't get pregnant by your teenage
               | son and doesn't run away one night with your silver
               | spoons -- the upkeep risks and replacement costs are much
               | lower.
        
               | auggierose wrote:
                | In 1825, both electricity and replacement costs would
                | have been unaffordable for anyone, though, because there
                | was literally no price you could pay to get these
                | things.
        
               | ljlolel wrote:
               | They do and will randomly kill people
        
               | omnimus wrote:
               | Shame we are in 2025 huh? Ask someone today if they
               | accept washing machine as robot maid.
        
               | pavlov wrote:
               | The point is that, as far as development of AI is
               | concerned, 2025 consumers are in the same position as the
               | 1825 housewife.
               | 
               | In both cases, automation of what was previously human
               | labor is very early and they've seen almost nothing yet.
               | 
               | I agree that in the year 2225 people are not going to
               | consider basic LLMs artificial intelligences, just like
               | we don't consider a washing machine a maid replacement
               | anymore.
        
               | watwut wrote:
                | 19th-century washing machines were called
                | washing/mangling machines.
                | 
                | They were not called maids, nor were they personified.
        
             | heresie-dabord wrote:
             | "Washer" and "dryer" are accepted colloquial terms for
             | these appliances.
             | 
             | I could even see the humour in "washer-bot" and "dryer-bot"
             | if they did anything notably more complex. But we don't
             | need/want appliances to become more complex than is
             | necessary. We usually just call such things _programmable_.
             | 
             | I can accept calling our new, over-hyped, hallucinating
             | overlords _chatbots_. But to be fair to the technology, it
             | is we chatty humans doing all the hyping and hallucinating.
             | 
              | The market capitalisation for this sector is sickly
              | feverish -- all we have done is build a significantly
              | better ELIZA [1]. Not a _HIGGINS_ and
             | certainly not AGI. If this results in the construction of
             | new nuclear power facilities, maybe we can do the latter
             | with significant improvement too. (I hope.)
             | 
             | My toaster and oven will never be bots to me. Although my
             | current vehicle is better than earlier generations, it
             | contains plenty of bad code and it spews telemetry. It
             | should not be trusted with any important task.
             | 
              | [1] https://en.wikipedia.org/wiki/ELIZA
        
           | kylebyte wrote:
           | The problem is that intelligence isn't the result, or at the
           | very least the ideas that word evokes in people don't match
           | the actual capabilities of the machine.
           | 
           | Washing is a useful word to describe what that machine does.
           | Our current setup is like if washing machines were called
           | "badness removers," and there was a widespread belief that we
           | were only a few years out from a new model of washing machine
           | being able to cure diseases.
        
             | lxgr wrote:
             | Arguably there isn't even a widely shared, coherent
             | definition of intelligence: To some people, it might mean
             | pure problem solving without in-task learning; others
             | equate it with encyclopedic knowledge etc.
             | 
             | Given that, I consider it quite possible that we'll reach a
             | point where even more people will consider LLMs having
             | reached or surpassed AGI, while others still only consider
             | it "sufficiently advanced autocomplete".
        
               | kylebyte wrote:
               | I'd believe this more if companies weren't continuing to
               | use words like reason, understand, learn, and genius when
               | talking about these systems.
               | 
               | I buy that there's disagreement on what intelligence
               | means in the enthusiast space, but "thinks like people"
               | is pretty clearly the general understanding of the word,
               | and the one that tech companies are hoping to leverage.
        
             | edanm wrote:
             | What about letting customers actually try the products and
             | figure out for themselves what it does and whether that's
             | useful to them?
             | 
             | I don't understand this mindset that because someone stuck
             | the label "AI" on it, consumers are suddenly unable to
             | think for themselves. AI as a marketing label has been used
             | for dozens of years, yet only now is it taking off like
              | crazy. The word hasn't changed - what it's actually capable
             | of doing _has_.
        
               | gudii2 wrote:
               | > What about letting customers actually try the products
               | and figure out for themselves what it does and whether
               | that's useful to them?
               | 
               | Yikes. I'm guessing you've never lost anyone to
               | "alternative" medical treatments.
        
           | jononor wrote:
           | Businesses are interested in something that can work for
           | them. And the way the LLM based agentic systems are going, it
           | might actually deliver on "Automated Knowledge Workers".
            | Probably not with full autonomy, but in teams led by a
           | human. The human needs to tend the AKW, much like we do with
           | washing machines and industrial automation machines.
        
           | dgeiser13 wrote:
           | Current "AI" is the manual washboard.
        
         | red75prime wrote:
          | It's a nice name, fellow language-capable electrobiochemical
         | autonomous agent.
        
         | dist-epoch wrote:
         | The proof of Riemann hypothesis is [....autocomplete here...]
        
         | ponector wrote:
          | I prefer Tesla's approach of calling their adaptive cruise
          | control "FSD (supervised)".
         | 
         | AI (supervised).
        
       | moktonar wrote:
       | There's a guaranteed path to AGI, but it's blocked behind
       | computational complexity. Finding an efficient algorithm to
       | simulate Quantum Mechanics should be top priority for those
       | seeking AGI. A more promising way around it is using Quantum
       | Computing, but we'll have to wait for that to become good
        | enough.
        
         | themafia wrote:
         | Required energy density at the necessary scale will be your
         | next hurdle.
        
           | moktonar wrote:
            | Once you have the efficient algorithm, you approximate
            | asymptotically with the energy you have; of course, you
            | can't obtain the same precision.
        
             | themafia wrote:
             | Or speed. I think Frank Herbert was on to something in
             | Dune. The energy efficiency of the human brain is hard to
             | beat. Perhaps we should invest in discovering "spice." I
             | think it might be more worthwhile.
             | 
             | Okay, enough eggnog and posting.
        
         | legulere wrote:
         | How would simulating quantum mechanics help with AGI?
        
           | nddkkfkf wrote:
           | Obviously, quantum supremacy is semiologically orthogonal to
            | AGI (Artificial General Intelligence) ontological recursive
           | synapses... this is trivial.
        
             | nddkkfkf wrote:
             | now buy the stock
        
           | moktonar wrote:
           | By simulating it
        
             | legulere wrote:
             | What exactly should get simulated and how do you think
             | quantum mechanics will help with this?
        
               | moktonar wrote:
               | At least the solar system I would say. Quantum mechanics
               | will help you do that in the correct way to obtain what
               | Nature already obtained: general intelligence.
        
         | lxgr wrote:
         | That would arguably not be artificial intelligence, but rather
         | simulated natural intelligence.
         | 
         | It also seems orders of magnitude less resource efficient than
         | higher-level approaches.
        
           | moktonar wrote:
           | What's the difference? Arguably the latter will be better IMO
           | than the former
        
             | lxgr wrote:
             | How many orders of magnitude? Nearly as many as it would be
             | less efficient?
        
               | moktonar wrote:
               | It's like comparing apples and oranges
        
       | relistan wrote:
       | These things work well on the extremely limited task impetus that
       | we give them. Even if we sidestep the question of whether or not
        | LLMs are actually on the path to AGI, imagine instead the amount
       | of computing and electrical power required with current computing
       | methods and hardware in order to respond to and process all the
       | input handled by a person at every moment of the day. Somewhere
       | in between current inputs and handling the full load of inputs
       | the brain handles may lie "AGI" but it's not clear there is
       | anything like that on the near horizon, if only because of
       | computing power constraints.
        
       | trio8453 wrote:
       | > This results in the somewhat unintuitive combination of a
       | technology that can be very useful and impressive, while
       | simultaneously being fundamentally unsatisfying and disappointing
       | 
       | Useful = great. We've made incredible progress in the past 3-5
       | years.
       | 
       | The people who are disappointed have their standards and
       | expectations set at "science fiction".
        
         | lxgr wrote:
         | I think many people are now learning that their definition of
         | intelligence was actually not very precise.
         | 
         | From what I've seen, in response to that, goalposts are then
         | often moved in the way that requires least updating of
         | somebody's political, societal, metaphysical etc. worldview.
         | (This also includes updates in favor of "this will definitely
         | achieve AGI soon", fwiw.)
        
           | knallfrosch wrote:
           | I remember when the goal posts were set at the "Turing test."
           | 
           | That's certainly not coming back.
        
             | rightbyte wrote:
              | If you know the tricks, won't you be able to figure out if
              | some chat is done by an LLM?
        
         | danaris wrote:
         | _Or_ the people who are disappointed were listening to the AI
         | hype men like Sam Altman, who have, in fact, been promising AGI
         | or something very like it for years now.
         | 
         | I don't think it's fair to deride people who are disappointed
         | in LLMs for not being AGI when many very prominent proponents
         | have been claiming they are or soon will be exactly that.
        
       | Taek wrote:
       | We seem to be moving the goalposts on AGI, are we not? 5 years
       | ago, the argument that AGI wasn't here yet was that you couldn't
       | take something like AlphaGo and use it to play chess. If you
       | wanted that, you had to do a new training run with new training
       | data.
       | 
       | But now, we have LLMs that can reliably beat video games like
       | Pokemon, without any specialized training for playing video
       | games. And those same LLMs can write code, do math, write poetry,
       | be language tutors, find optimal flight routes from one city to
       | another during the busy Christmas season, etc.
       | 
       | How does that not fit the definition of "General Intelligence"?
       | It's literally as capable as a high school student for almost any
       | general task you throw it at.
        
         | lxgr wrote:
         | I think we're noticing that our goalposts for AGI were largely
         | "we'll recognize it when we see it", and now as we are getting
         | to some interesting places, it turns out that different people
         | actually understood very different things by that.
        
         | oidar wrote:
         | I think the games tasks are worth exploring more. If you look
         | at that recent Pokemon post - it's not as capable as a high
          | school student - it took a long, long time. I have a private
          | set of tests that any 8-year-old could easily solve but that
          | any LLM just absolutely fails on. I suspect that plenty of the
          | people claiming AGI isn't here yet have similar personal tests.
        
       | keyle wrote:
        | I am much happier with LLMs being more and more available 24/7
        | to be useful to humankind ... than with some sentient being that
        | never sleeps and is more intelligent than me, with its own
        | agenda.
        
         | netsharc wrote:
         | There might not be a set agenda, but it might lead to the
         | domination of a particular type of knowledge and the
         | destruction of others:
         | 
         | https://aeon.co/essays/generative-ai-has-access-to-a-small-s...
        
       | brador wrote:
       | Remember when your goal posts were Turing test?
       | 
       | The only question remaining is what is the end point of AGI
       | capability.
       | 
       | What's the final IQ we'll hit, and more importantly why will it
       | end there?
       | 
        | Power limits? Hardware bandwidth limits? Storage limits? The
        | math of AI creation scales to infinity, so that's not an issue.
       | 
       | Source data limits? Most likely. We should have recorded more. We
       | should have recorded more.
        
         | lostmsu wrote:
         | Last I checked the Turing test stands. I've only seen reports
         | of LLMs winning under some weird conditions. Interestingly,
          | these were a year or two ago, and nobody seems to have tried
         | Turing tests lately with newer LLMs.
        
       ___________________________________________________________________
       (page generated 2025-12-21 23:01 UTC)