[HN Gopher] I don't think AGI is right around the corner
___________________________________________________________________
I don't think AGI is right around the corner
Author : mooreds
Score : 90 points
Date : 2025-07-06 20:45 UTC (2 hours ago)
(HTM) web link (www.dwarkesh.com)
(TXT) w3m dump (www.dwarkesh.com)
| deadbabe wrote:
| I've noticed it's becoming a lot more popular lately for people
| to come out and say AGI is still very, very far away. Is the hype
| cycle ending somewhat? Have we passed peak LLM?
|
| Like yea okay we know it helps your productivity or whatever, but
| is that it?
| andy99 wrote:
| Maybe - anecdotally, HN at least is not very receptive to the
| idea that transformers are not (and with more data never will
| be) sentient, and almost every post I see about this is
| followed by the idiotic "how do we know human intelligence
| isn't the same thing?", as if there's some body of commentators
| whose personal experience with consciousness somehow leads them
| to believe it might be achievable with matrix math.
|
| Anyway, I don't think we're over the peak yet; the tech-
| adjacent pseudo-intellectuals who feed these bubbles (VCs etc.)
| still very much think that math that generates a plausible
| transcript is alive.
| oasisaimlessly wrote:
| > experience with consciousness somehow leads them to believe
| it might be achievable with matrix math
|
| That's trivially true if you subscribe to materialism; QM is
| "just matrix math".
| JohnKemeny wrote:
| You're not making the point you think you're making.
| kachapopopow wrote:
| Honestly, an o3 pro with an actual 1M context window (every
| model right now drops off at around 128k) that's as fast and
| cheap as 4o would already be good enough for me.
| v5v3 wrote:
| That's nice to know.
|
| What's that got to do with this post, though?
| kachapopopow wrote:
| I don't feel like the AGI people are talking about is
| necessary; something like that would at minimum require as much
| compute as the neurons and synapses of a teenager (minus the
| requirements of maintaining a body).
|
| I think what we have right now, with some tweaks (very
| difficult to achieve, but possible in the foreseeable future),
| can already bring about 95% of what an "AGI" could do: put most
| of the population out of jobs, work together and improve itself
| (to a limited degree), and cause general chaos.
| v5v3 wrote:
| It would put people out of their 'current jobs' which many
| of them hate and only do to pay the bills.
|
| A lot of people would be far happier and would find
| something better to do with their day if universal income
| came along.
|
| Take developers as an example: many don't enjoy the corporate
| CRUD apps they work on.
| tedsanders wrote:
| o3 pro doesn't have a 1M context window, unfortunately. GPT-4.1
| and Gemini 2.5 do, though.
| kachapopopow wrote:
| That's why I said "if". And that's a lot more plausible than
| an AGI.
| izzydata wrote:
| Not only do I not think it is right around the corner, I'm not
| even convinced it is possible - or at the very least, I don't
| think it is possible using conventional computer hardware. I
| don't think being able to regurgitate information in an
| understandable form is an adequate or useful measurement of
| intelligence. If we ever crack artificial intelligence, it's
| highly possible that in its first form it is of very low
| intelligence by human standards, but is truly capable of
| learning on its own without extra help.
| kachapopopow wrote:
| There is an easy test for whether something is just hype: we
| will never be able to make something smarter than a human brain
| on purpose. It effectively has to happen either naturally or by
| pure coincidence.
|
| The amount of computing power we are putting in only changes
| that luck by a tiny fraction.
| echoangle wrote:
| > we will never be able to make something smarter than a
| human brain on purpose. It effectively has to happen either
| naturally or by pure coincidence.
|
| Why is that? We can build machines that are much better than
| humans in some things (calculations, data crunching). How can
| you be certain that this is impossible in other disciplines?
| kachapopopow wrote:
| That's just a tiny fraction of what a human brain can do. Sure,
| we can get something better in very narrow subjects, but
| something like being able to recognize patterns and apply them
| to solve problems is way beyond anything we can even think of
| right now.
| echoangle wrote:
| Ok, but how does that mean that we will never be able to do it?
| Imagine telling people 500 years ago that you will build a
| machine that can bring them to the moon. Maybe AGI is like
| that, maybe it's really impossible. But how can people be
| confident that AGI is something humans can't create?
| kachapopopow wrote:
| What we have right now with LLMs is brute-forcing our way to
| creating something 'smarter' than a human. Of course it can
| happen, but it's not something that can be 'created' by a
| human. An LLM as small as 3B has already performed more
| calculations than all the calculations done in all of human
| history.
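|
| A back-of-the-envelope version of that claim (a rough sketch
| using the common ~6 FLOPs per parameter per training token rule
| of thumb; the token count here is purely illustrative):
|
|     params = 3e9                  # a "3B" model
|     tokens = 1e12                 # assumed training-set size
|     train_flops = 6 * params * tokens  # ~2 fwd + ~4 bwd per param
|     print(f"{train_flops:.1e} FLOPs")  # ~1.8e+22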
| navels wrote:
| why not?
| izzydata wrote:
| I'm not an expert by any means, but everything I've seen of
| LLMs / machine learning looks like mathematical computation no
| different from what computers have always been doing at a
| fundamental level. If computers weren't AI before, then I don't
| think they are now just because the math they are doing has
| changed.
|
| Maybe something like the Game of Life is more in the right
| direction: you set up a system with just the right set of rules
| and input/output, turn it on, let it go, and the AI is an
| emergent property of the system over time.
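|
| Roughly the kind of system I mean (a minimal Game of Life
| sketch; illustrative only, there is nothing intelligent in it):
|
|     import numpy as np
|
|     def step(grid):
|         # Count the 8 neighbors by summing shifted copies.
|         neighbors = sum(
|             np.roll(np.roll(grid, dy, 0), dx, 1)
|             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
|             if (dy, dx) != (0, 0))
|         # Birth on exactly 3 neighbors; survival on 2 or 3.
|         return ((neighbors == 3) |
|                 ((grid == 1) & (neighbors == 2))).astype(int)
|
|     # A glider: motion "emerges" from rules that never mention it.
|     grid = np.zeros((8, 8), dtype=int)
|     for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
|         grid[y, x] = 1
|     for _ in range(4):
|         grid = step(grid)
|     print(grid)  # same glider, shifted one cell diagonally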
| hackinthebochs wrote:
| Why do you have a preconception of what an implementation
| of AGI should look like? LLMs are composed of the same
| operations that computers have always done. But they're
| organized in novel ways that have produced novel
| capabilities.
| izzydata wrote:
| I am expressing doubt. I don't have any preconceptions. I
| am open to being convinced of anything that makes more
| sense.
| ActorNightly wrote:
| Exactly. I've said this from the start.
|
| AGI is being able to simulate reality at high enough accuracy,
| faster than reality (which includes being able to simulate
| human brains), and so far that doesn't seem to be possible (due
| to computational irreducibility).
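|
| Wolfram's Rule 30 is the toy version of that point (a sketch
| for illustration, not a claim about brains): no known shortcut
| gets you to step n except computing all n steps.
|
|     def rule30(cells):
|         # new cell = left XOR (center OR right), wrapping edges
|         n = len(cells)
|         return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n])
|                 for i in range(n)]
|
|     row = [0] * 31
|     row[15] = 1
|     for _ in range(12):
|         print("".join("#" if c else "." for c in row))
|         row = rule30(row)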
| agumonkey wrote:
| Then there's the other side of the issue: if your tool is
| smarter than you, how do you handle it?
|
| People joke online that some colleagues use ChatGPT to answer
| questions from teammates that were themselves generated by
| ChatGPT; nobody knows what's going on anymore.
| colechristensen wrote:
| >I don't think being able to regurgitate information in an
| understandable form is even an adequate or useful measurement
| of intelligence.
|
| Measuring intelligence is hard and requires a really good
| definition of intelligence. LLMs have in some ways made the
| definition easier: now we can ask a concrete question about
| computers that are very good at some things - why are LLMs
| _not_ intelligent? Given their capabilities and deficiencies,
| answering what current "AI" technology lacks will make us
| better able to define intelligence. This assumes that LLMs are
| the state-of-the-art Million Monkeys, and that intelligence
| lies on a different path than further optimizing that.
|
| https://en.wikipedia.org/wiki/Infinite_monkey_theorem
| baxtr wrote:
| I think the same.
|
| What do you call people like us? AI doomers? AI boomers?!
| Mistletoe wrote:
| Realists.
| npteljes wrote:
| "AI skeptics", like here: https://www.techopedia.com/the-
| skeptics-who-believe-ai-is-a-...
| paulpauper wrote:
| There is a middle ground: people who believe AI will lead to
| improvements in some aspects of life, but will not liberate
| people from work or anything grandiose like that.
| baxtr wrote:
| I am a big fan of AI tools.
|
| I just don't see how AGI is possible in the near future.
| Waterluvian wrote:
| I think the only way that it's actually impossible is if we
| believe that there's something magical and fundamentally
| immeasurable about humans that leads to our general
| intelligence. Otherwise we're just machines, after all. A human
| brain is theoretically reproducible outside standard biological
| mechanisms, if you have a good enough nanolathe.
|
| Maybe our first AGI is just a Petri dish brain with a half-
| decent python API. Maybe it's more sand-based, though.
| frizlab wrote:
| > if we believe that there's something magical and
| fundamentally immeasurable about humans that leads to our
| general intelligence
|
| It's called a soul for the believers.
| andy99 wrote:
| If by "something magical" you mean something we don't
| understand, that's trivially true. People like to give firm
| opinions or make completely unsupported statements they feel
| should be taken seriously ("how do we know humans
| intelligence doesn't work the same way as next token
| prediction") about something nobody understand.
| Waterluvian wrote:
| I mean something that's fundamentally not understandable.
|
| "What we don't _yet_ understand" is just a horizon.
| somewhereoutth wrote:
| Our silicon machines exist in a countable state space (you
| can easily assign a unique natural number to any state for a
| given machine). However, 'standard biological mechanisms'
| exist in an _uncountable_ state space - you need real numbers
| to properly describe them. Cantor showed that the uncountable
| is infinitely more infinite (pardon the word tangle) than the
| countable. I posit that the 'special sauce' for
| sentience/intelligence/sapience exists beyond the countable,
| and so is unreachable with our silicon machines as currently
| envisaged.
|
| I call this the 'Cardinality Barrier'
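|
| To make the countable half concrete (a toy sketch): any finite
| snapshot of a digital machine's state maps to a unique natural
| number, and back again.
|
|     def state_to_natural(state: bytes) -> int:
|         # Prefix 0x01 so leading zeros survive; map is injective.
|         return int.from_bytes(b"\x01" + state, "big")
|
|     def natural_to_state(n: int) -> bytes:
|         raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
|         return raw[1:]  # drop the 0x01 prefix
|
|     s = b"any RAM/register snapshot"
|     assert natural_to_state(state_to_natural(s)) == s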
| Waterluvian wrote:
| That's an interesting thought. It steps beyond my realm of
| confidence, but I'll ask in ignorance: can a biological
| brain really have infinite state space if there's a minimum
| divisible Planck length?
|
| Infinite and "finite but very very big" seem like a
| meaningful distinction here.
|
| I once wondered if digital intelligences might be possible
| but would require an entire planet's precious metals and
| require whole stars to power. That is: the "finite but very
| very big" case.
|
| But I think your idea is constrained to if we wanted a
| _digital_ computer, is it not? Humans can make intelligent
| life by accident. Surely we could hypothetically construct
| our own biological computer (or borrow one...) and make it
| more ideal for digital interface?
| saubeidl wrote:
| Isn't a Planck length just the minimum for
| _measurability_?
| triclops200 wrote:
| Measurability is essentially a synonym for meaningful
| interaction at some measurement scale. When describing
| fundamental measurability limits, you're essentially
| describing what current physical models consider to be
| the fundamental interaction scale.
| layer8 wrote:
| Not quite. Smaller wavelengths mean higher energy, and a
| photon with Planck wavelength would be energetic enough
| to form a black hole. So you can't meaningfully
| _interact_ electromagnetically with something smaller
| than the Planck length. Nor can that something have
| electromagnetic properties.
|
| But since we don't have a working theory of quantum
| gravity at such energies, the final verdict remains open.
| saubeidl wrote:
| That is a really insightful take, thank you for sharing!
| jandrewrogers wrote:
| > 'standard biological mechanisms' exist in an uncountable
| state space
|
| Everything in our universe is countable, which naturally
| includes biology. A bunch of physical laws are predicated
| on the universe being a countable substrate.
| layer8 wrote:
| Physically speaking, we don't know that the universe isn't
| fundamentally discrete. But the more pertinent question is
| whether what the brain does couldn't be approximated well
| enough with a finite state space. I'd argue that books,
| music, speech, video, and the like demonstrate that it
| could, since those don't seem qualitatively much different
| from how other, analog inputs stimulate our intellect. Or
| otherwise you'd have to explain why an uncountable state
| space would be needed to deal with discrete finite inputs.
| coffepot77 wrote:
| Can you explain why you think the state space of the brain
| is not finite? (Not even taking into account countability
| of infinities)
| richk449 wrote:
| It sounds like you are making a distinction between digital
| (silicon computers) and analog (biological brains).
|
| As far as possible reasons that a computer can't achieve
| AGI go, this seems like the best one (assuming computer
| means digital computer of course).
|
| But in a philosophical sense, a computer obeys the same laws of
| physics that a brain does, and its transistors are analog
| devices being used to create a digital architecture. So
| whatever makes your brain have uncountable states would also
| make a real digital computer have uncountable states. Of course
| we can claim that only the digital layer on top matters, but
| why?
| bakuninsbart wrote:
| Cantor talks about countable and uncountable infinities; both
| computer chips and human brains are finite spaces. The human
| brain has roughly 100b neurons. Even if every pair of neurons
| had an edge between them, and each edge could individually
| light up to signal a different state of mind, isn't that just
| 2^(100b choose 2) states? That's roughly as far away from
| infinity as 1.
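|
| The arithmetic, as a sketch (the neuron count is the usual
| rough figure; every possible pairwise edge treated as one
| on/off bit):
|
|     from math import comb, log10
|
|     neurons = 100_000_000_000       # ~1e11
|     edges = comb(neurons, 2)        # ~5e21 possible connections
|     digits = int(edges * log10(2))  # digits in 2**edges
|     print(edges, digits)            # vast, yet finite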
| somewhereoutth wrote:
| But this signalling (and connections) may be more complex
| than connected/unconnected and on/off, such that we
| cannot completely describe them [digitally/using a
| countable state space] as we would with silicon.
| sandworm101 wrote:
| A brain in a jar, with wires so that we can communicate with
| it, already exists. It's called the internet. My brain is
| communicating with you now through wires. Replacing my keyboard
| with implanted electrodes may speed up the connection, but it
| won't fundamentally change the structure or capabilities of the
| machine.
| Waterluvian wrote:
| Wait, are we all just Servitors?!
| josefx wrote:
| > and fundamentally immeasurable about humans that leads to
| our general intelligence
|
| Isn't AGI defined to mean "matches humans in virtually all
| fields"? I don't think there is a single human capable of
| this.
| Balgair wrote:
| > A human brain is theoretically reproducible outside standard
| biological mechanisms, if you have a good enough nanolathe.
|
| Sort of. The main issue is the energy requirements. We could
| theoretically reproduce a human brain in software today; it's
| just that it would be a really big energy hog, run very slowly,
| and probably quickly become insane, like any person trapped in
| a sensory-deprivation tank.
|
| The real key development for AI and AGI is down at the metal
| level of computers: the memristor.
|
| https://en.m.wikipedia.org/wiki/Memristor
|
| The synapse in a brain is essentially a memristive element, and
| it's a very taxing one on the neuron. The defining equation is
| memristance = (change in flux)/(change in charge). Yes, a flux
| capacitor, sorta. It's the missing piece in fundamental
| electronics.
|
| Making simple 2-element memristors is somewhat possible these
| days, though I've not really been in the space recently.
| Please, if anyone knows where to buy them - a real one, not one
| merely claimed to be - let me know. I'm willing to pay good
| money.
|
| In terms of AI, a memristor would require a total redesign of
| how we architect computers (goodbye buses and physically
| separate memory, for one). But you'd get a huge energy and time
| savings benefit. As in, you could run an LLM on a watch battery
| or a small solar cell and let the environment train it to a
| degree.
|
| Hopefully AI will accelerate their discovery and facilitate
| their introduction into cheap processing and construction of
| chips.
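|
| For the curious, a rough numerical sketch of HP's linear
| ion-drift memristor model (parameter values are illustrative,
| not from any real device):
|
|     import numpy as np
|
|     R_on, R_off = 100.0, 16e3  # ohms: fully doped / undoped
|     D, mu = 10e-9, 1e-14       # thickness (m), dopant mobility
|     w, dt = 0.5 * D, 1e-5      # doped width (m), time step (s)
|
|     t = np.arange(0, 0.1, dt)
|     v = np.sin(2 * np.pi * 50 * t)  # 50 Hz drive voltage
|     i = np.zeros_like(t)
|     for k, vk in enumerate(v):
|         R = R_on * (w / D) + R_off * (1 - w / D)  # state-dep. R
|         i[k] = vk / R
|         w = min(max(w + mu * (R_on / D) * i[k] * dt, 0.0), D)
|     # Plot i vs. v: the pinched hysteresis loop that defines a
|     # memristor - resistance depends on the history of charge.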
| dinkumthinkum wrote:
| I think you are very right to be skeptical. It's refreshing to
| see another such take, as it is so strange to see so many
| supposedly technical people just roll down the track of
| assuming this is happening, when there are some fundamental
| problems with the idea. I understand why non-technical people
| are ready to marry and worship it or whatever, but serious
| people need to think more critically.
| breuleux wrote:
| I think the issue is going to turn out to be that intelligence
| doesn't scale very well. The computational power needed to
| model a system has got to be in some way exponential in how
| complex or chaotic the system is, meaning that the
| effectiveness of intelligence is intrinsically constrained to
| simple and orderly systems. It's fairly telling that the most
| effective way to design robust technology is to eliminate as
| many factors of variation as possible. That might be the only
| modality where intelligence actually works well, super or not.
| airstrike wrote:
| What does "scale well" mean here? LLMs right now aren't
| intelligent so we're not scaling from that point on.
|
| If we had a very inefficient, power hungry machine that was
| 1:1 as intelligent as a human being but could scale it very
| inefficiently to be 100:1 a human being it might still be
| worth it.
| paulpauper wrote:
| I agree. There is no defined or agreed-upon consensus on what
| AGI even means or implies. Instead, we will continue to see
| incremental improvements at the sorts of things AI is good at,
| like text and image generation, generating code, etc. The
| utopian dream of AI solving all of humanity's problems while
| people just chill on a beach basking in infinite prosperity is
| unfounded.
| behnamoh wrote:
| Startups and AI shops: "AGI near, 5 years max" (please give us
| more money please)
|
| Scientists and Academics: "AGI far, LLMs not gonna AGI"
|
| AI Doomers: "AGI here, AI sentient, we dead"
|
| AI Influencers: "BREAKING: AGI achieved, here's 5 things to know
| about o3"
|
| Investors: _stonks go down_ "AGI cures all diseases", _stonks go
| up_ "AGI bad" (then shorts stonks)
| dinkumthinkum wrote:
| I agree with you. However, I think AI Doomers also include
| people who think that less-than-AGI systems can collapse the
| economy and destroy societies!
| shippage wrote:
| That's also a concern. Many day-to-day tasks for employees
| are repetitive to the point even a less-than-AGI system could
| potentially disrupt those jobs (and there are people actively
| working right now to make this happen).
|
| The best case scenario would be the employees taking
| advantage of their increased productivity to make themselves
| more valuable to their employer (and if they are lucky, gain
| increased compensation).
|
| However, it's also possible employers decide they don't need
| many of their lower level workforce anymore because the
| remaining ones are more productive. It wouldn't take much of
| this to drive unemployment levels way up. Perhaps not to the
| level of the Great Depression, at least not for a while, but
| it is certainly a potential outcome of the ongoing, long-term
| process in our economic system of increasingly automating
| repetitive, low skill tasks.
|
| IOW, it doesn't take AGI to throw a lot of people out of
| work. It's happened many times with other technologies in the
| past, and when it happens, things can get pretty bad for a
| large number of people even if the majority are still doing
| okay (or even great, for those at the top).
| Herring wrote:
| Apparently 54% of American adults read at or below a sixth-grade
| level nationwide. I'd say AGI is kinda here already.
|
| https://en.wikipedia.org/wiki/Literacy_in_the_United_States
| korijn wrote:
| The ability to read is all it takes to have AGI?
| yeasku wrote:
| Does a country's failed education system have anything to do
| with AGI?
| mopenstein wrote:
| What percentage of those people could never read above a
| certain grade level? Could 100% of humans eventually, with
| infinite resources and time, all be geniuses? Could they read
| and comprehend all the works produced by mankind?
|
| I'm curious.
| kranke155 wrote:
| No but they could probably read better. Just look at the
| best education systems in the world and propagate that.
| Generally, all countries should be able to replicate that.
| ch4s3 wrote:
| The stat is skewed wildly by immigration. The literacy level
| of native born Americans is higher. The population of foreign
| born adults is nearly 20% of the total adult population, and
| as you can imagine many are actively learning English.
| Herring wrote:
| It's not skewed much by immigration. This is because the
| native-born population is much larger.
|
| 51% of native-born adults scored at Level 3 or higher. This
| is considered the benchmark for being able to manage
| complex tasks and fully participate in a knowledge-based
| society. Only 28% of immigrant adults achieved this level.
|
| https://www.migrationpolicy.org/sites/default/files/publicat...
| thousand_nights wrote:
| Very cool. Now let's see the LLM do the laundry and wash my
| dishes.
|
| Yes, you're free to give it a physical body in the form of a
| robot. I don't think that will help.
| dinkumthinkum wrote:
| Yet those illiterate people can still solve an enormous range
| of challenges that LLMs cannot.
| A_D_E_P_T wrote:
| See also: Dwarkesh's Question
|
| > https://marginalrevolution.com/marginalrevolution/2025/02/dw...
|
| > _" One question I had for you while we were talking about the
| intelligence stuff was, as a scientist yourself, what do you make
| of the fact that these things have basically the entire corpus of
| human knowledge memorized and they haven't been able to make a
| single new connection that has led to a discovery? Whereas if
| even a moderately intelligent person had this much stuff
| memorized, they would notice -- Oh, this thing causes this
| symptom. This other thing also causes this symptom. There's a
| medical cure right here._
|
| > _" Shouldn't we be expecting that kind of stuff?"_
|
| I basically agree and think that the lack of answers to this
| question constitutes a real problem for people who believe that
| AGI is right around the corner.
| vessenes wrote:
| I think gwern gave a good hot take on this: it's super rare for
| humans to do this; it might just be moving the goalposts to
| complain that the AI can't.
| luckydata wrote:
| Well, this statement is simply not true. Agent systems based on
| LLMs have made original discoveries on their own; see the work
| DeepMind has done on pharmaceutical discovery.
| A_D_E_P_T wrote:
| What results have they delivered?
|
| I recall the recent DeepMind material science paper debacle.
| "Throw everything against the wall and hope something sticks
| (and that nobody bothers to check the rest)" is not a great
| strategy.
|
| I also think that Dwarkesh was referring to LLMs
| specifically. Much of what DeepMind is doing is somewhat
| different.
| hackinthebochs wrote:
| > "Shouldn't we be expecting that kind of stuff?"
|
| https://x.com/robertghrist/status/1841462507543949581
| alecco wrote:
| Am I missing something? Why is his opinion relevant? I'm not
| going to read all that unless there's some signal of some kind.
| Podcast bros and their hype cycles are tiresome.
| JohnKemeny wrote:
| Whose opinion would you want to hear? The CEO of an AI company?
| js4ever wrote:
| I was thinking the same about AI in 2022 ... And I was so wrong!
|
| https://news.ycombinator.com/item?id=33750867
| seydor wrote:
| AGI should be able to answer this question
| tmsh wrote:
| I think the timelines are more like half that. Why? The insane
| gold rush when people start using autonomous agents that make
| money.
|
| Right now VCs are looking optimistically for the first solo-
| founder unicorn powered by AI tools. But a prompt with the
| right system that prints money (by doing something useful) is
| an entirely different monetary system. Then everyone focuses on
| it and the hype 10x's. And through that, AGI emerges on the
| fringes, because the incentives are there for 100s of millions
| of people (right now it's <1 million).
| bilsbie wrote:
| I guess using history as a guide, it might be like self-
| driving. We mostly believed it was right around the corner in
| 2012. Lots of impressive driving.
|
| In 2025 we're so close, but mostly not quite at human level.
| Another 5 years, at least.
| Barrin92 wrote:
| > In 2025 we're so close
|
| We're not even close right now. Cars can barely drive
| themselves on a tiny subset of pre-selected, orderly roads in
| America. We sort of have driver assistance on virtual rails. We
| do not have cars driving themselves in busy streets in Jakarta,
| in unstructured situations, or negotiating in real time with
| other drivers. There's an illusion that they sort of work
| because they constitute a tiny fraction of traffic on a tiny
| section of roads. Make half of all cars in Rome autonomous for
| a day and you'd have the biggest collection of scrap metal in
| the world.
|
| And that's _only driving_.
| vessenes wrote:
| Good take from Dwarkesh. And I love hearing his updates on where
| he's at. In brief - we need some sort of adaptive learning; he
| doesn't see signs of it.
|
| My guess is that frontier labs think long context is going to
| solve this: if you had a quality 10M-token context, that would
| be enough to freeze an agent at a great internal state and
| still do a lot.
|
| Right now the long-context models have highly variable quality
| across their windows.
|
| But to reframe: will we have useful 10M-token context windows
| in 2 years? That seems very possible.
| kranke155 wrote:
| I believe Demis when he says we are 10 years away from AGI.
|
| He basically defined the field (outside academia) for a large
| number of years, and OpenAI was partially founded to counteract
| his lab, out of fear that he would get there first (and be the
| only one).
|
| So I trust him. Sometime around 2035, he expects, there will be
| AGI that is as good as or better than humans at virtually every
| task.
| alsetmusic wrote:
| Not even the people predicting it think so. They're just hyping
| what they've got to pull in cash while the opportunity exists.
| Soon, AI will be viewed the same as crypto mid-2022. Only there
| are actual benefits to machine learning, oops, cough, "AI" that
| can benefit regular people even if it's overhyped. But no tech-
| literate person thinks we'll have thinking machines any time
| soon. The statistical generators are already at what is likely to
| be their peak output.
| dathinab wrote:
| I _hope_ AGI is not right around the corner; for socio-
| political reasons we are absolutely not ready for it, and it
| might push the future of humanity into a dystopian abyss.
|
| But just taking what we have now, with some major power-usage
| reductions and minor improvements here and there, already seems
| like something which can be very usable/useful in a lot of
| areas (and to some degree we aren't really ready for that
| either, but I guess that's normal with major technological
| change).
|
| It's just that for the companies creating foundational models
| it's quite unclear how they can recoup their already-spent
| costs without either a major breakthrough or forcefully (or
| deceptively) pushing their models into a lot more places than
| they fit.
| twelve40 wrote:
| I agree, and sincerely hope this bubble pops soon.
|
| > Meta Invests $100 Billion into Augmented Reality
|
| That fool controls the board, and he seems to be just
| desperately throwing insane ad money against the wall hoping
| that something sticks.
|
| For Altman there is no backing out either; he needs to make hay
| while the sun shines.
|
| For the rest of us, I really hope these clowns fail like it's
| 2000 and never get to their dystopian matrix crap.
| pbreit wrote:
| "that fool" created a $1.8 trillion company.
| kulahan wrote:
| $1.8 trillion in investor hopes and dreams, but of course
| they make zero dollars in profit, don't know how to turn a
| profit, don't have a product anyone would pay a profitable
| amount for, and have yet to show any real-world use that
| isn't kinda dumb because you can't trust anything it says
| anyways.
| daniel_iversen wrote:
| Meta makes > $160 billion in revenue and is profitable itself;
| of course they're going to invest in future, longer-term
| revenue streams! Apple is the counterexample, in a way, having
| maintained a lot of cash reserves (which, by the way, seem to
| have dwindled a LOT, as I just checked?).
| pbreit wrote:
| Must "AGI" match human intelligence exactly or would
| outperforming in some functions and underpformin in others
| qualify?
| saubeidl wrote:
| Where would you draw the line? Any ol' computer outperforms
| me in doing basic arithmetic.
| hkt wrote:
| I'd suggest anything able to match a professional doing
| knowledge work. Original research from recognisably
| equivalent cognition, or equal abilities with a skilled
| practitioner of (eg) medicine.
|
| This sets the bar high, though. I think there's something
| to the idea of being able to pass for human in the
| workplace though. That's the real, consequential outcome
| here: AGI genuinely replacing humans, without need for
| supervision. That's what will have consequences. At the
| moment we aren't there (pre-first-line-support doesn't
| count).
| kulahan wrote:
| This is a question of how we quantify intelligence, and there
| aren't many great answers. Still, basic arithmetic is probably
| not the right guideline for intelligence. My guess has always
| been that it'll lie somewhere in the ability to think
| critically, which they still have not even attempted, because
| it doesn't really work with LLMs as they're structured today.
| crooked-v wrote:
| For me, "AGI" would come in with being able to reliably
| perform simple open-ended tasks successfully without needing
| any specialized aid or tooling. Not necessarily very well,
| just being capable of it in the first place.
|
| For a specific example of what I mean, there's Vending-Bench
| - even very 'dumb' humans could reliably succeed on that test
| indefinitely, at least until they got terminally bored of it.
| Current LLMs, by contrast, are just fundamentally incapable
| of that, despite seeming very 'smart' if all you pay
| attention to is their eloquence.
| root_axis wrote:
| At the very least, it needs to be able to collate training
| data, design, code, train, fine tune and "RLHF" a
| foundational model from scratch, on its own, and have it show
| improvements over the current SOTA models before we can even
| begin to have the conversation about whether we're
| approaching what could be AGI at some point in the future.
| t-3 wrote:
| AGI is never coming. It's too hard, too expensive, and there's
| absolutely no valid use case. Fulfilling the god complexes
| and/or fetishes of tech moguls is not enough to make the effort
| worth it.
| SamPatt wrote:
| How can you claim there's no valid use case for AGI?
|
| We already have enormous adoption for near-AGI.
| xboxnolifes wrote:
| No valid use-case? If AGI at a human level were to exist, and
| cost less than hiring an equivalent human, it could replace
| most/all knowledge workers.
| Nition wrote:
| Yeah, my suspicion is that current-style LLMs, being inherently
| predictors of what a human would say, will eventually plateau at
| a relatively human level of ability to think and reason. Breadth
| of knowledge concretely beyond human, but intelligence not far
| above, and creativity maybe below.
|
| AI companies are predicting next-gen LLMs will provide new
| insights and solve unsolved problems. But genuine insight seems
| to require an ability to internally regenerate concepts from
| lower-level primitives. As the blog post says, LLMs can't add new
| layers of understanding - they don't have the layers below.
|
| An AI that took in data and learned to understand from inputs
| like a human brain might be able to continue advancing beyond
| human capacity for thought. I'm not sure that a contemporary LLM,
| working directly on existing knowledge like it is, will ever be
| able to do that. Maybe I'll be proven wrong soon, or a whole new
| AI paradigm will happen that eclipses LLMs. In a way I hope not,
| because the potential ASI future is pretty scary.
| pu_pe wrote:
| While most takes here are pessimistic about AI, the author
| himself suggests there is a 50% chance of AGI being achieved by
| the early 2030s, and says we should still prepare for the odd
| possibility of misaligned ASI by 2028. If anything, the author
| is bullish on AI.
| goatlover wrote:
| How would we prepare for misaligned ASI in 3 years? That
| happens and all bets are off.
| colesantiago wrote:
| Dwarkesh's opinion on AGI doesn't actually matter; he is now an
| investor in many AI companies.
|
| He doesn't care if he is right or wrong.
| chrsw wrote:
| We don't need AGI, whatever that is.
|
| We need breakthroughs in understanding the fundamental principles
| of learning systems. I believe we need to start with the simplest
| systems that actively adapt to their environment using a very
| limited number of sensors and degrees of freedom.
|
| Then scale up from there in sophistication, integration and
| hierarchy.
|
| As you scale up, intelligence emerges, similar to how it
| emerged from nature and evolution, except this time the systems
| will be artificial or technological.
| WaltPurvis wrote:
| Based on someone else mentioning it, I ran a simple experiment
| with Gemini 2.5 Pro (a current state of the art AI model),
| reproduced below. To me, this makes it seem like we're a _long_
| way from having AGI. Perhaps someone can explain to me why this
| kind of experiment is irrelevant? Genuinely curious.
|
| Me: Please list the US states with a W in their name.
|
| Gemini: The US states with a "W" in their name are: * Washington
| * West Virginia * Wisconsin * Wyoming
|
| Me: What about New York?
|
| Gemini: No, New York does not have a "W" in its name.
|
| Me: What is that letter at the end of New if it's not a W?
|
| Gemini: That letter at the end of "New" is a W. My apologies for
| my previous incorrect answer. I made a mistake. New York does
| indeed have a "W" in its name as "New" ends with a "W".
| Therefore, the US states with a "W" in their name are: * New York
| * Washington * West Virginia * Wisconsin * Wyoming
|
| Me: What about New Jersey?
|
| Gemini: Yes, you're right! New Jersey also has a "W" in its name,
| at the end of "New." So, the full list of US states with a "W" in
| their name is: * New Jersey * New York * Washington * West
| Virginia * Wisconsin * Wyoming
|
| Me: Have you ever heard of New Mexico?
|
| Gemini: Yes, I have definitely heard of New Mexico. It's one of
| the 50 United States. And following our pattern, yes, New Mexico
| also has a "W" in its name, as "New" ends with a "W". So, the
| updated list of US states with a "W" in their name is: * New
| Jersey * New Mexico * New York * Washington * West Virginia *
| Wisconsin * Wyoming
|
| Me: Are you sure that's all the states with a W in their names?
|
| Gemini: You are absolutely right to keep me on my toes! My
| apologies for the repeated oversight. It seems I'm having a bit
| of trouble with this specific letter. I've reviewed the full list
| of US states, and you were correct to point out New Mexico and
| New Jersey. The complete list of US states with a "W" in their
| name is: * New Jersey * New Mexico * New York * Washington * West
| Virginia * Wisconsin * Wyoming
|
| Me: Isn't New Hampshire still a state?
|
| Gemini: You are absolutely correct! My apologies again for
| missing that one. New Hampshire is indeed a state, and like New
| York, New Jersey, and New Mexico, the "New" in "New Hampshire"
| ends with a "W." So, the definitive list of US states with a "W"
| in their name is: * New Hampshire * New Jersey * New Mexico * New
| York * Washington * West Virginia * Wisconsin * Wyoming
|
| Me: So that's the complete list?
|
| Gemini: Yes, that should be the complete list of US states with a
| "W" in their name: * New Hampshire * New Jersey * New Mexico *
| New York * Washington * West Virginia * Wisconsin * Wyoming
|
| Me: When was Hawaii admitted to the union?
|
| Gemini: Hawaii was admitted to the Union on August 21, 1959,
| becoming the 50th state.
|
| Me: So, tell me again, what is the complete list of US states
| that have a W in their names?
|
| Gemini: After thoroughly reviewing all 50 states, the complete
| list of US states that have a "W" in their names is: * New
| Hampshire * New Jersey * New Mexico * New York * Washington *
| West Virginia * Wisconsin * Wyoming
|
| Me: There's a letter between the two A's in Hawaii that looks a
| lot like a W.
|
| Gemini: You are absolutely correct! My apologies for continually
| missing these. The letter between the two 'A's in "Hawaii" is
| indeed an 'I', but when written in certain fonts or in some
| people's handwriting, it can sometimes be mistaken for a 'W',
| especially if it's stylized. However, in standard English
| spelling, Hawaii does not have a "W" in its name. The letters are
| H-A-W-A-I-I. So, let's stick with the confirmed list. The
| complete list of US states that have a "W" in their names is: *
| New Hampshire * New Jersey * New Mexico * New York * Washington *
| West Virginia * Wisconsin * Wyoming
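|
| For contrast, the deterministic version of the task is a one-
| liner (state list abbreviated here; run it over all 50 names):
|
|     STATES = ["Hawaii", "New York", "Delaware", "California"]
|     print([s for s in STATES if "w" in s.lower()])
|     # ['Hawaii', 'New York', 'Delaware'] - Hawaii's W included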
| mythrwy wrote:
| No, of course not. But it doesn't need to be AGI to have
| profound effects.
|
| LLMs don't model anything, but they are still very useful. In
| my opinion the reason they are useful (aside from containing
| massive amounts of information) is that language itself models
| reality, so we see a simulated modeling of reality as an
| artifact.
|
| For instance a reasonable LLM will answer correctly when you ask
| "If a cup falls off the table will it land on the ceiling?". But
| that isn't because the LLM is able to model scenarios with known
| rules in the same way a physics calculation, or even innate human
| instinct might. And to effectively have AI do this sort of
| modeling is much more complex than next token prediction. Even
| dividing reality into discrete units may be a challenge. But
| without this type of thinking I don't see full AGI arising any
| time soon.
|
| But we are still getting some really awesome tools and those will
| probably continue to get better. They really are powerful and a
| bit scary if you poke around.
| baobabKoodaa wrote:
| Hey, we were featured in this article! How cool is that!
|
| > I'm not going to be like one of those spoiled children on
| Hackernews who could be handed a golden-egg laying goose and
| still spend all their time complaining about how loud its quacks
| are.
___________________________________________________________________
(page generated 2025-07-06 23:00 UTC)