[HN Gopher] AGI Is Still 30 Years Away - Ege Erdil and Tamay Besiroglu
       ___________________________________________________________________
        
       AGI Is Still 30 Years Away - Ege Erdil and Tamay Besiroglu
        
       Author : Philpax
       Score  : 114 points
       Date   : 2025-04-17 16:42 UTC (6 hours ago)
        
 (HTM) web link (www.dwarkesh.com)
 (TXT) w3m dump (www.dwarkesh.com)
        
       | pelagicAustral wrote:
       | Two more weeks
        
       | codingwagie wrote:
        | I just used o3 to design a distributed scheduler that scales to
        | 1M+ schedules a day. It was perfect, and did better than two
        | weeks of thought around the best way to build this.
        
         | csto12 wrote:
         | You just asked it to design or implement?
         | 
         | If o3 can design it, that means it's using open source
         | schedulers as reference. Did you think about opening up a few
         | open source projects to see how they were doing things in those
         | two weeks you were designing?
        
           | mprast wrote:
           | yeah unless you have very specific requirements I think the
           | baseline here is not building/designing it yourself but
           | setting up an off-the-shelf commercial or OSS solution, which
           | I doubt would take two weeks...
        
             | torginus wrote:
              | Dunno, at work we wanted to implement a task runner that we
             | could use to periodically queue tasks through a web UI - it
             | would then spin up resources on AWS and track the progress
             | and archive the results.
             | 
             | We looked at the existing solutions, and concluded that
             | customizing them to meet all our requirements would be a
             | giant effort.
             | 
             | Meanwhile I fed the requirement doc into Claude Sonnet, and
             | with about 3 days of prompting and debugging we had a
             | bespoke solution that did exactly what we needed.
        
               | codingwagie wrote:
               | the future is more custom software designed by ai, not
                | less. a lot of frameworks will disappear once you can
               | build sophisticated systems yourself. people are missing
               | this
        
               | rsynnott wrote:
               | That's a future with a _lot_ more bugs.
        
               | codingwagie wrote:
                | you're assuming humans built it. also, a ton of complexity
               | in software engineering is really due to having to fit a
               | business domain into a string of interfaces in different
               | libraries and technical infrastructure
        
               | 9rx wrote:
               | What else is going to build it? Lions?
               | 
               | The only real complexity in software is describing it.
               | There is no evidence that the tools are going to ever
               | help with that. _Maybe_ some kind of device attached
               | directly to the brain that can sidestep the parts that
               | get in the way, but that is assuming some part of the
               | brain is more efficient than it seems through the
               | pathways we experience it through. It could also be that
               | the brain is just fatally flawed.
        
           | codingwagie wrote:
           | why would I do that kind of research if it can identify the
           | problem I am trying to solve, and spit out the exact
           | solution. also, it was a rough implementation adapted to my
           | exact tech stack
        
             | kazinator wrote:
             | So you could stick your own copyright notice on the result,
             | for one thing.
        
               | ben_w wrote:
                | What's the point of holding copyright on a new technical
               | solution, to a problem that can be solved by anyone
               | asking an existing AI, trained on last year's internet,
               | independently of your new copyright?
        
               | kazinator wrote:
               | All sorts of stuff containing no original _ideas_ is
               | copyrighted. It legally belongs to someone and they can
               | license it to others, etc.
               | 
               | E.g. pop songs with no original chord progressions or
               | melodies, and hackneyed lyrics are still copyrighted.
               | 
               | Plagiarized and uncopyrightable code is radioactive; it
                | can't be pulled into either FOSS or commercial codebases.
        
               | alabastervlog wrote:
               | Someone raised the point in another recent HN LLM thread
               | that the primary productivity benefit of LLMs in
                | programming _is the copyright laundering_.
               | 
               | The argument went that the main reason the now-ancient
               | push for code reuse failed to deliver anything close to
               | its hypothetical maximum benefit was because copyright
               | got in the way. Result: tons and tons of wheel-
               | reinvention, like, to the point that _most of what
               | programmers do day to day_ is reinvent wheels.
               | 
               | LLMs essentially provide fine-grained contextual search
               | of existing code, while also stripping copyright from
               | whatever they find. Ta-da! Problem solved.
        
             | kmeisthax wrote:
              | Because down that path lies skill atrophy.
             | 
             | AI research has a thing called "the bitter lesson" - which
             | is that the only thing that works is search and learning.
             | Domain-specific knowledge inserted by the researcher tends
             | to look good in benchmarks but compromise the performance
             | of the system[0].
             | 
             | The bitter- _er_ lesson is that this also applies to
             | humans. The reason _why_ humans still outperform AI on lots
             | of intelligence tasks is because humans are doing lots and
             | lots of search and learning, repeatedly, across billions of
             | people. And have been doing so for thousands of years. The
             | only uses of AI that benefit humans are ones that allow you
             | to do more search or more learning.
             | 
             | The human equivalent of "inserting domain-specific
             | knowledge into an AI system" is cultural knowledge,
             | cliches, cargo-cult science, and cheating. Copying other
             | people's work only helps you, long-term, if you're able to
             | build off of that into something new; and lots of
             | discoveries have come about from someone just taking a
             | second look at what had been considered to be generally
             | "known". If you are just "taking shortcuts", then you learn
             | nothing.
             | 
             | [0] I would _also_ argue that the current LLM training
              | regime is _still_ domain-specific knowledge, we've just
             | widened the domain to "the entire Internet".
        
               | gtirloni wrote:
               | Here on HN you frequently see technologists using words
               | like savant, genius, magical, etc, to describe the
               | current generation of AI. Now we have vibe coding, etc.
               | To me this is just a continuation of StackOverflow
               | copy/paste where people barely know what they are doing
               | and just hammer the keyboard/mouse until it works.
               | Nothing has really changed at the fundamental level.
               | 
               | So I find your assessment pretty accurate, if only
               | depressing.
        
               | mirsadm wrote:
               | It is depressing but equally this presents even more
               | opportunities for people that don't take shortcuts. I use
               | Claude/Gemini day to day and outside of the most average
               | and boring stuff they're not very capable. I'm glad I
               | started my career well before these things were created.
        
               | tombert wrote:
                | > Because down that path lies skill atrophy.
               | 
               | Maybe, but I'm not completely convinced by this.
               | 
                | Prior to ChatGPT, there would be times where I'd want to
                | build a project (e.g. implement Raft or Paxos), write a
                | bit, hit a point where I got stuck, decide that the
                | project wasn't that interesting, and give up without
                | learning anything.
               | 
               | What ChatGPT gives me, if nothing else, is a slightly
               | competent rubber duck. It can give me a hint to _why_
                | something isn't working like it should, and it's the
                | slight push I need to power through the project, and
                | since I actually _finish_ the project, I almost
                | certainly learn more than I would have before.
               | 
               | I've done this a bunch of times now, especially when I am
                | trying to implement something directly from a
               | paper, which I personally find can be pretty difficult.
               | 
               | It also makes these things _more fun_. Even when I know
               | the correct way to do something, there can be lots of
                | tedious stuff that I don't want to type, like really
               | long if/else chains (when I can't easily avoid them).
        
               | scellus wrote:
               | I agree. AI has made even mundane coding fun again, at
               | least for a while. AI does a lot of the tedious work, but
               | finding ways to make it maximally do it is challenging in
               | a new way. New landscape of possibilities, innovation,
               | tools, processes.
        
               | tombert wrote:
               | Yeah that's the thing.
               | 
               | Personal projects are fun for the same reason that
               | they're easy to abandon: there are no stakes to them. No
               | one yells at you for doing something wrong, you're not
                | trying to satisfy a stakeholder, you can develop in any
               | direction you want. This is good, but that also means
               | it's easy to stop the moment you get to a part that isn't
               | fun.
               | 
               | Using ChatGPT to help unblock myself makes it easier for
               | me to not abandon a project when I get frustrated. Even
               | when ChatGPT's suggestions aren't helpful (which is
               | often), it can still help me understand the problem by
               | trying to describe it to the bot.
        
               | Workaccount2 wrote:
                | >Because down that path lies skill atrophy.
               | 
               | I wonder how many programmers have assembly code skill
               | atrophy?
               | 
               | Few people will weep the death of the necessity to use
               | abstract logical syntax to communicate with a computer.
               | Just like few people weep the death of having to type out
               | individual register manipulations.
        
               | kmeisthax wrote:
               | Most programmers don't need to develop that skill unless
               | they need more performance or are modifying other
               | people's binaries[0]. You can still do plenty of search-
               | and-learning using higher-level languages, and what you
               | learn at one particular level can generalize to the
               | other.
               | 
               | Even if LLMs make "plain English" programming viable,
               | programmers still need to write, test, and debug lists of
               | instructions. "Vibe coding" is different; you're telling
               | the AI to write the instructions and acting more like a
               | product manager, except without any of the actual
               | communications skills that a good manager has to develop.
               | And without any of the search and learning that I
               | mentioned before.
               | 
               | For that matter, a lot of chatbots don't do learning
               | either. Chatbots can _sort of_ search a problem space,
                | but they only remember the last 20-100k tokens. We don't
               | have a way to encode tokens that fall out of that context
               | window into some longer-term weights. Most of their
               | knowledge comes from the information they learned from
               | training data - again, cheated from humans, just like
               | humans can now cheat off the AI. This is a recipe for
               | intellectual stagnation.
               | 
               | [0] e.g. for malware analysis or videogame modding
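                | 
                | A minimal sketch of what that fixed window means in
                | practice (the budget and the whitespace "tokenizer" are
                | stand-ins, not how any real model counts tokens):
                | 
                |     def trim_history(messages, budget=100_000):
                |         kept, used = [], 0
                |         for msg in reversed(messages):  # newest first
                |             tokens = len(msg.split())   # fake tokenizer
                |             if used + tokens > budget:
                |                 break  # everything older is simply lost
                |             kept.append(msg)
                |             used += tokens
                |         return list(reversed(kept))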
        
             | csto12 wrote:
             | I was pointing out that if you spent 2 weeks trying to find
             | the solution but AI solved it within a day (you don't
              | specify how long the AI's final solution took), it sounds
             | like those two weeks were not spent very well.
             | 
             | I would be interested in knowing what in those two weeks
             | you couldn't figure out, but AI could.
        
               | codingwagie wrote:
               | it was two weeks tossing around ideas in my head
        
               | financypants wrote:
               | idk why people here are laser focusing on "wow 2 weeks",
               | I totally understand lightly thinking about an idea,
               | motivations, feasibility, implementation, for a week or
               | two
        
             | margalabargala wrote:
             | Because as far as you know, the "rough implementation" only
             | works in the happy path and there are really bad edge cases
             | that you won't catch until they bite you, and then you
             | won't even know where to look.
             | 
             | An open source project wouldn't have those issues (someone
             | at least understands all the code, and most edge cases have
             | likely been ironed out) plus then you get maintenance
             | updates for free.
        
               | codingwagie wrote:
                | I've got ten years at FAANG in distributed systems, I
                | know a good solution when I see one. And o3 is bang on.
        
               | margalabargala wrote:
               | If you thought about it for two weeks beforehand and came
               | up with nothing, I have trouble lending much credence to
               | that.
        
               | lossolo wrote:
               | So 10 years at a FANG company, then it's 15 years in
               | backend at FANG, then 10 years in distributed systems,
               | and then running interviews at some company for 5 years
                | and raising capital as a founder in NYC. Cool. Can you
                | share that chat from o3?
        
               | themanmaran wrote:
                | How are those mutually exclusive statements? You can't
                | imagine someone working on backend (focused on
                | distributed systems) for 10-15 years at a FANG company,
                | and also being in a position to interview new
                | candidates?
        
               | HAL3000 wrote:
               | Who knows but have you read what OP wrote?
               | 
               | "I just used o3 to design a distributed scheduler that
                | scales to 1M+ schedules a day. It was perfect, and did
               | better than two weeks of thought around the best way to
               | build this."
               | 
               | Anyone with 10 years in distributed systems at FAANG
               | doesn't need two weeks to design a distributed scheduler
               | handling 1M+ schedules per day, that's a solved problem
               | in 2025 and basically a joke at that scale. That alone
               | makes this person's story questionable, and his comment
               | history only adds to the doubt.
        
             | titzer wrote:
             | Who hired you and why are they paying you money?
             | 
             | I don't want to be a hater, but holy moley, that sounds
             | like the absolute laziest possible way to solve things. Do
             | you have training, skills, knowledge?
             | 
             | This is an HN comment thread and all, but you're doing
             | yourself no favors. Software professionals should offer
             | their employers some due diligence and deliver working
             | solutions that at least _they_ understand.
        
         | davidsainez wrote:
          | While impressive, I'm not convinced that improved performance
          | on tasks of this nature is indicative of progress toward AGI.
         | Building a scheduler is a well studied problem space. Something
         | like the ARC benchmark is much more indicative of progress
         | toward true AGI, but probably still insufficient.
        
           | codingwagie wrote:
           | the other models failed at this miserably. There were also
           | specific technical requirements I gave it related to my tech
           | stack
        
           | fragmede wrote:
           | The point is that AGI is the wrong bar to be aiming for. LLMs
           | are sufficiently useful at their current state that even if
           | it does take us 30 years to get to AGI, even just incremental
           | improvements from now until then, they'll still be useful
           | enough to provide value to users/customers for some companies
           | to win big. VC funding will run out and some companies won't
           | make it, but some of them will, to the delight of their
          | investors. "AGI when?" is an interesting question, but might
          | just be academic. we have self-driving cars, weight loss
          | drugs that work, reusable rockets, and useful computer AI.
           | We're living in the future, man, and robot maids are just
           | around the corner.
        
         | MisterSandman wrote:
         | Designing a distributed scheduler is a solved problem, of
         | course an LLM was able to spit out a solution.
        
           | codingwagie wrote:
           | as noted elsewhere, all other frontier models failed
           | miserably at this
        
             | daveguy wrote:
              | That doesn't mean the one that manages to spit it out of
              | its latent space is close to AGI. I wonder how
              | consistently that specific model could do it. If you tried
              | 10 LLMs, maybe all 10 of them could have spit out the
              | answer 1 out of 10
             | times. Correct problem retrieval by one LLM and failure by
             | the others isn't a great argument for near-AGI. But LLMs
             | will be useful in limited domains for a long time.
        
             | alabastervlog wrote:
             | It is unsurprising that some lossily-compressed-database
             | search programs might be worse for some tasks than other
             | lossily-compressed-database search programs.
        
         | littlestymaar wrote:
         | "It does something well" [?] "it will become AGI".
         | 
         | Your anodectical example isn't more convincing than "This
         | machine cracked Enigma's messages in less time than an army of
         | cryptanalysts over a month, surely we're gonna reach AGI by the
         | end of the decade" would have.
        
         | timeon wrote:
          | I'm not sure what your point is in the context of the AGI
          | topic.
        
           | codingwagie wrote:
            | I'm a tenured engineer, spent a long time at FAANG. I was
            | casually beaten this morning by a far superior design from
            | an LLM.
        
             | darod wrote:
              | is this because the LLM actually reasoned its way to a
              | better design, or because it found a better design in its
              | "database", scoured from another tenured engineer?
        
               | anthonypasq wrote:
               | who cares?
        
               | awkwardpotato wrote:
               | Ignoring the copyright issues, credit issues, and any
               | ethical concerns... this approach doesn't work for
               | anything not in the "database", it's not AGI and the
               | tangential experience is barely relevant to the article.
        
               | ben_w wrote:
               | Does it matter if the thing a submarine does counts as
               | "swimming"?
               | 
               | We get paid to solve problems, sometimes the solution is
               | to know an existing pattern or open source implementation
                | and use it. Arguably it usually is: we seldom have to
               | invent new architectures, DSLs, protocols, or OSes from
               | scratch, but even those are patterns one level up.
               | 
                | Whatever the AI is inside doesn't matter: this was it
                | solving a problem.
        
         | AJ007 wrote:
          | I find now I quickly bucket people into "have not/have barely
          | used the latest AI models" or "trolls" when they express a
          | belief that current LLMs aren't intelligent.
        
           | tumsfestival wrote:
           | Call me back when ChatGPT isn't hallucinating half the
           | outputs it gives me.
        
           | burnte wrote:
           | You can put me in that bucket then. It's not true, I've been
           | working with AI almost daily for 18 months, and I KNOW it's
            | nowhere close to being intelligent, but it doesn't look
            | like your buckets are based on truth but on appeal. I
            | disagree with
           | your assessment so you think I don't know what I'm talking
           | about. I hope you can understand that other people who know
           | just as much as you (or even more) can disagree without being
           | wrong or uninformed. LLMs are amazing, but they're nowhere
           | close to intelligent.
        
         | dundarious wrote:
         | Wow, 12 per second on average.
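          | (1,000,000 schedules / 86,400 seconds in a day ≈ 11.6 per
          | second.)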
        
       | andrewstuart wrote:
       | LLMs are basically a library that can talk.
       | 
       | That's not artificial intelligence.
        
         | 52-6F-62 wrote:
         | Grammar engines. Or value matrix engines.
         | 
          | Every time I try to work with them I lose more time than I
          | gain. Net loss every time. Immensely frustrating. If I focus
          | it on a small subtask I can gain some time (rough draft of a
          | test). Anything more advanced and it's a monumental waste of
          | time.
         | 
         | They are not even good librarians. They fail miserably at cross
         | referencing and contextualizing without constant leading.
        
           | andrewstuart wrote:
           | I feel the opposite.
           | 
           | LLMs are unbelievably useful for me - never have I had a tool
          | more powerful to assist my brain work. I use LLMs for work and
           | play constantly every day.
           | 
          | It pretends to sound like a person, can mimic speech and
          | writing, and is all around perhaps the greatest wonder
          | created by humanity.
           | 
           | It's still not artificial intelligence though, it's a talking
           | library.
        
             | 52-6F-62 wrote:
             | Fair. For engineering work they have been a terrible drain
             | on me save for the most minor autocomplete. Its
             | recommendations are often deeply flawed or almost totally
             | hallucinated no matter the model. Maybe I am a better
             | software engineer than a "prompt engineer".
             | 
              | I've tried to use them as a research assistant in a
              | history project and they have also been quite bad in that
              | respect because of the immense naivety of their
              | approaches.
             | 
             | I couldn't call them a librarian because librarians are
             | studied and trained in cross referencing material.
             | 
             | They have helped me in some searches but not better than a
             | search engine at a monumentally higher investment cost to
             | the industry.
             | 
             | Then again, I am also speaking as someone who doesn't like
             | to offload all of my communications to those things. Use it
             | or lose it, eh
        
               | andrewstuart wrote:
                | I'm curious: you're a developer who finds no value in
                | LLMs?
                | 
                | It's weird to me that there's such a giant gap with my
                | experience of it being a minimum 10x multiplier.
        
           | aaronbaugher wrote:
           | I've only really been experimenting with them for a few days,
           | but I'm kind of torn on it. On the one hand, I can see a lot
           | of things it _could_ be useful for, like indexing all the
            | cluttered files I've saved over the years and looking things
           | up for me faster than I could find|grep. Heck, yesterday I
           | asked one a relationship question, and it gave me pretty good
           | advice. Nothing I couldn't have gotten out of a thousand
           | books and magazines, but it was a lot faster and more focused
           | than doing that.
           | 
           | On the other hand, the prompt/answer interface really limits
           | what you can do with it. I can't just say, like I could with
           | a human assistant, "Here's my calendar. Send me a summary of
           | my appointments each morning, and when I tell you about a new
           | one, record it in here." I can script something like that,
           | and even have the LLM help me write the scripts, but since I
           | can already write scripts, that's only a speed-up at best,
           | not anything revolutionary.
           | 
           | I asked Grok what benefit there would be in having a script
           | fetch the weather forecast data, pass it to Grok in a prompt,
           | and then send the output to my phone. The answer was
           | basically, "So I can say it nicer and remind you to take an
           | umbrella if it sounds rainy." Again, that's kind of neat, but
           | not a big deal.
           | 
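            | Sketched out, that pipeline is only a few calls; every URL
            | below is a placeholder, since Grok's real API and any phone
            | notification service would differ:
            | 
            |     import requests
            | 
            |     forecast = requests.get("https://example.com/wx.json").json()
            |     prompt = f"Summarize this forecast nicely: {forecast}"
            |     summary = requests.post("https://example.com/llm",
            |                             json={"prompt": prompt}).json()["text"]
            |     requests.post("https://example.com/notify",
            |                   json={"message": summary})
            | 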
           | Maybe I just need to experiment more to see a big advance I
           | can make with it, but right now it's still at the "cool toy"
           | stage.
        
         | futureshock wrote:
         | There's increasing evidence that LLMs are more than that.
         | Especially work by Anthropic has been showing how to trace the
         | internal logic of an LLM as it answers a question. They can in
         | fact reason over facts contained in the model, not just repeat
         | already seen information.
         | 
         | A simple example is how LLMs do math. They are not calculators
         | and have not memorized every sum in existence. Instead they
         | deploy a whole set of mental math techniques that were
         | discovered at training time. For example, Claude uses a special
         | trick for adding 2 digit numbers ending in 6 and 9.
         | 
          | Many more examples in this recent research report, including
         | evidence of future planning while writing rhyming poetry.
         | 
         | https://www.anthropic.com/research/tracing-thoughts-language...
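          | 
          | As a toy illustration of that flavor of decomposition (this
          | is not Claude's actual circuit, just the general idea of
          | parallel rough-plus-exact pathways):
          | 
          |     def add_two_digit(a, b):
          |         # coarse pathway: magnitude from the tens digits only
          |         tens = (a // 10 + b // 10) * 10
          |         # precise pathway: the ones digits, carry included
          |         ones = a % 10 + b % 10
          |         # reconcile the two: 36 + 59 -> 80 + 15 -> 95
          |         return tens + ones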
        
           | ahamilton454 wrote:
           | I don't think that is the core of this paper. If anything the
           | paper shows that LLMs have no internal reasoning for math at
           | all. The example they demonstrate is that it triggers the
           | same tokens in randomly unrelated numbers. They kind of just
           | "vibe" there way to a solution
        
           | sksxihve wrote:
           | > sometimes this "chain of thought" ends up being misleading;
           | Claude sometimes makes up plausible-sounding steps to get
           | where it wants to go. From a reliability perspective, the
           | problem is that Claude's "faked" reasoning can be very
           | convincing.
           | 
           | If you ask the LLM to explain how it got the answer the
           | response it gives you won't necessarily be the steps it used
           | to figure out the answer.
        
         | alabastervlog wrote:
         | We invented a calculator for language-like things, which is
         | cool, but it's got a lot of people really mixed up.
         | 
         | The hype men trying to make a buck off them aren't helping, of
         | course.
        
       | cruzcampo wrote:
       | AGI is never gonna happen - it's the tech equivalent of the
       | second coming of Christ, a capitalist version of the religious
       | savior trope.
        
         | lukan wrote:
         | I guess I am agnostic then.
        
         | alabastervlog wrote:
         | Hey now, on a long enough time line one of these strains of
         | millenarian thinking may eventually get something right.
        
       | EliRivers wrote:
       | Would we even recognise it if it arrived? We'd recognise human
       | level intelligence, probably, but that's specialised. What would
        | general intelligence even look like?
        
         | shmatt wrote:
         | We sort of are able to recognize Nobel-worthy breakthroughs
         | 
         | One of the many definitions I have for AGI is being able to
         | create the proofs for the 2030, 2050, 2100, etc Nobel Prizes,
         | today
         | 
         | A sillier one I like is that AGI would output a correct proof
          | that P ≠ NP on day 1
        
           | tough wrote:
            | Isn't AGI just "general" intelligence as in -like a regular
            | human- Turing test kinda deal?
            | 
            | aren't you thinking about ASI/superintelligence, way more
            | capable of outdoing humans?
        
             | kadushka wrote:
              | Yes, the general consensus is that AGI should be able to
              | perform any task an average human is able to perform.
              | Definitely nothing of Nobel prize level.
        
               | EliRivers wrote:
               | A bit poorly named; not really very general. AHI would be
               | a better name.
        
               | kadushka wrote:
               | Another general consensus is that humans possess general
               | intelligence.
        
               | EliRivers wrote:
               | Yes, we do seem to have a very high opinion of ourselves.
        
               | timeon wrote:
                | AAI would be enough for me, although there are people
                | who deny the intelligence of non-human animals.
        
               | aleph_minus_one wrote:
               | > Yes, a general consensus is AGI should be able to
               | perform any task an average human is able to perform.
               | 
               | The goalposts are regularly moved so that AI companies
               | and their investors can claim/hype that AGI will be
               | around in a few years. :-)
        
               | kadushka wrote:
                | I learned the definition I provided back in the mid-90s,
                | and it hasn't really changed since then.
        
         | dingnuts wrote:
         | you'd be able to give them a novel problem and have them
         | generalize from known concepts to solve it. here's an example:
         | 
         | 1 write a specification for a language in natural language
         | 
         | 2 write an example program
         | 
         | can you feed 1 into a model and have it produce a compiler for
         | 2 that works as reliably as a classically built one?
         | 
         | I think that's a low bar that hasn't been approached yet. until
         | then I don't see evidence of language models' ability to
         | reason.
        
           | EliRivers wrote:
           | I'd accept that as a human kind of intelligence, but I'm
           | really hoping that AGI would be a bit more general. That
           | clever human thinking would be a subset of what it could do.
        
           | logicchains wrote:
           | You could ask Gemini 2.5 to do that today and it's well
           | within its capabilities, just as long as you also let it
           | write and run unit tests, as a human developer would.
        
         | logicchains wrote:
         | AGI isn't ASI; it's not supposed to be smarter than humans. The
         | people who say AGI is far away are unscientific woo-mongers,
         | because they never give a concrete, empirically measurable
         | definition of AGI. The closest we have is Humanity's Last Exam,
         | which LLMs are already well on the path to acing.
        
           | EliRivers wrote:
           | I'd expect it to be generalised, where we (and everything
           | else we've ever met) are specialised. Our intelligence is
           | shaped by our biology and our environment; the limitations on
           | our thinking are themselves concepts the best of us can
           | barely glimpse. Some kind of intelligence that inherently
           | transcends its substrate.
           | 
           | What that would look like, how it would think, the kind of
           | mental considerations it would have, I do not know. I do
           | suspect that declaring something that thinks like us would
           | have "general intelligence" to be a symptom of our limited
           | thinking.
        
           | quonn wrote:
            | Consider this: an LLM born/trained in 1900, if that were
            | possible, and given a year to adapt to the world of 2025 -
            | how well would it do on any test? Compare that to how a
            | 15-year-old human in the same situation would do.
        
         | fusionadvocate wrote:
         | AI will face the same limitations we face: availability of
          | information and the non-deterministic nature of the world.
        
         | Tuna-Fish wrote:
          | If/when we have AGI, we will likely have something
         | fundamentally superhuman very soon after, and that will be very
         | recognizable.
         | 
         | This is the idea of "hard takeoff" -- because the way we can
         | scale computation, there will only ever be a very short time
          | when the AI will be roughly human-level. Even if there are no
          | fundamental breakthroughs, at the very least silicon can be
          | run much faster than meat, and instead of compensating for
          | narrower width with execution speed like current AI systems
          | do (no AI datacenter is even close to the width of a human
          | brain), you can just spend the money to make your AI system
          | 2x wider and run it at 2x the speed. What would a good
          | engineer (or, a good team of engineers) be able to accomplish
          | if they could have 10 times the workdays in a week that
          | everyone else has?
         | 
         | This is often conflated with the idea that AGI is very
         | imminent. I don't think we are particularly close to that yet.
         | But I do think that if we ever get there, things will get very
         | weird very quickly.
        
           | card_zero wrote:
           | But that's not ten times the workdays. That's just taking a
           | bunch of speed and sitting by yourself worrying about
           | something. Results may be eccentric.
           | 
           | Though I don't know what you mean by "width of a human
           | brain".
        
             | Tuna-Fish wrote:
             | It's ten times the time to work on a problem. Taking a
             | bunch of speed does not make your brain work faster, it
             | just messes with your attention system.
             | 
             | > Though I don't know what you mean by "width of a human
             | brain".
             | 
             | A human brain contains ~86 billion neurons connected to
             | each other through ~100 trillion synapses. All of these
             | parts work genuinely in parallel, all working together at
             | the same time to produce results.
             | 
              | When an AI model is being run on a GPU, a single ALU can do
             | the work analogous of a neuron activation much faster than
             | a real neuron. But a GPU does not have 86 billion ALUs, it
             | only has ~<20k. It "simulates" a much wider, parallel
             | processing system by streaming in weights and activations
             | and doing them 20k at a time. Large AI datacenters have
             | built systems with many GPUs working in parallel on a
             | single model, but they are still a tiny fraction of the
              | true width of the brain, and cannot reach anywhere near
              | the same number of neuron activations/second that a brain
              | can.
             | 
             | If/when we have a model that can actually do complex
             | reasoning tasks such as programming and designing new
             | computers as well as a human can, with no human helping to
             | prompt it, we can just scale it out to give it more hours
             | per day to work, all the way until every neuron has a real
             | computing element to run it. The difference in experience
             | for such a system for running "narrow" vs running "wide" is
             | just that the wall clock runs slower when you are running
             | wide. That is, you have more hours per day to work on
             | things.
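              | 
              | Plugging rough numbers into that argument (all of these
              | are ballpark figures, just to show the shape of it):
              | 
              |     NEURONS = 86e9   # parallel elements in a brain
              |     ALUS = 20e3      # parallel ALUs in one GPU
              |     print(NEURONS / ALUS)      # ~4.3e6 GPUs to match width
              | 
              |     ALU_HZ = 1e9     # an ALU switches ~1e9 times/s
              |     NEURON_HZ = 100  # a neuron fires at most ~100 times/s
              |     print(ALU_HZ / NEURON_HZ)  # ~1e7x faster per element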
        
           | EliRivers wrote:
           | Would AGI be recognisable to us? When a human pushes over an
           | anthill, what do the ants think happened? Do they even know
           | the anthill is gone; did they have concept of the anthill as
           | a huge edifice, or did they only know earth to squeeze
           | through and some biological instinct.
           | 
           | If general intelligence arrived and did whatever general
           | intelligence would do, would we even see it? Or would there
           | just be things that happened that we just can't comprehend?
        
         | xnx wrote:
         | There's a test for this: https://arcprize.org/arc-agi
         | 
         | Basically a captcha. If there's something that humans can
         | easily do that a machine cannot, full AGI has not been
         | achieved.
        
         | GeorgeTirebiter wrote:
         | Mustafa Suleyman says AGI is when a (single) machine can
         | perform every cognitive task better than the best humans. That
         | is significantly different from OpenAIs definition (...when we
         | make enough $$$$$, it's AGI).
         | 
         | Suleyman's book "The Coming Wave" talks about Artificial
         | Capable Intelligence (ACI) - between today's LLMs (== "AI" now)
         | and AGI. AI systems capable of handling a lot of complex tasks
         | across various domains, yet not being fully general. Suleyman
         | argues that ACI is here (2025) and will have huge implications
         | for society. These systems could manage businesses, generate
         | digital content, and even operate core government services --
         | as is happening on a small scale today.
         | 
         | He also opines that these ACIs give us plenty of frontier to be
         | mined for amazing solutions. I agree, what we have already has
          | not been tapped out.
         | 
         | His definition, to me, is early ASI. If a program is better
         | than the best humans, then we ask it how to improve itself.
         | That's what ASI is.
         | 
         | The clearest thinker alive today on how to get to AGI is, I
         | think, Yann LeCun. He said, paraphrasing: If you want to build
         | an AGI, do NOT work on LLMs!
         | 
         | Good advice; and go (re-?) read Minsky's "Society of Mind".
        
       | moralestapia wrote:
       | "Literally who" and "literally who" put out statements while
       | others out there ship out products.
       | 
       | Many such cases.
        
       | dicroce wrote:
       | Doesn't even matter. The capabilities of the AI that's out NOW
       | will take a decade or more to digest.
        
         | EA-3167 wrote:
         | I feel like it's already been pretty well digested and excreted
         | for the most part, now we're into the re-ingestion phase until
         | the bubble bursts.
        
           | tough wrote:
           | maybe silicon valley and the world move at basically
           | different rates
           | 
           | idk AI is just a speck outside of the HN and SV info-bubbles
           | 
           | still early to mass adoption like the smartphone or the
           | internet, mostly nerds playing w it
        
             | kadushka wrote:
             | ChatGPT has 400M weekly users.
             | https://backlinko.com/chatgpt-stats
        
               | tough wrote:
               | have you wondered how many of these are bots leveraging
               | free chatgpt with proxied vpn IPs?
               | 
                | I'm a paying ChatGPT user, but in my personal circles I
                | don't know anyone who isn't a developer who is also one.
                | 
                | maybe I'm an exception
                | 
                | edit: I guess 400M global users, more than the US's 300M
                | citizens, isn't out of scope for such a highly used
                | product amongst a 7B population
               | 
                | But social media like Instagram or FB felt like they
                | had network effects going for them, making their growth
                | faster
                | 
                | and thus maybe that's why OpenAI is exploring that idea,
                | idk
        
               | kadushka wrote:
               | Pretty much everyone in high school or college is using
               | them. Also everyone whose job is to produce some kind of
               | content or data analysis. That's already a lot of people.
        
             | aleph_minus_one wrote:
             | > idk AI is just a speck outside of the HN and SV info-
             | bubbles
             | 
             | > still early to mass adoption like the smartphone or the
             | internet, mostly nerds playing w it
             | 
              | Rather: outside of the HN and SV bubbles, the A"I"s, and
              | the way people fall for this kind of hype and dupery, are
              | commonly ridiculed.
        
               | EA-3167 wrote:
               | This is accurate, doubly so for the people who treat it
               | like a religion and fear the coming of their machine god.
               | This, when what we actually have are (admittedly
               | sometimes impressive) next-token predictors that you MUST
               | double-check because they routinely hallucinate.
               | 
               | Then again I remember when people here were convinced
               | that crypto was going to change the world, democratize
               | money, end fiat currency, and that was just the start!
               | Programs of enormous complexity and freedom would run on
               | the blockchain, games and hell even societies would be
               | built on the chain.
               | 
               | A lot of people here are easily blinded by promises of
               | big money coming their way, and there's money in loudly
               | falling for successive hype storms.
        
               | umeshunni wrote:
               | Yeah, I'm old enough to remember all the masses who
               | mocked the Internet and smartphones too.
        
               | tough wrote:
                | I'm not mocking AI, and while the internet and
                | smartphones fundamentally changed how societies operate,
                | and AI will probably do so too, why the Doomerism? Isn't
                | that how tech
               | works? We invent new tech and use it and so on?
               | 
               | What makes AI fundamentally different than smartphones or
               | the internet? Will it change the world? Probably, already
               | has.
               | 
               | Will it end it as we know it? Probably not?
        
             | acdha wrote:
             | That doesn't match what I hear from teachers, academics, or
             | the librarians complaining that they are regularly getting
             | requests for things which don't exist. Everyone I know
             | who's been hiring has mentioned spammy applications with
             | telltale LLM droppings, too.
        
               | tough wrote:
                | I can see how students would be first users of this kind
                | of tech but am not in those spheres, but I believe you.
               | 
                | As for spammy applications, hasn't this always been the
                | case, now made worse by the cheapness of -generating-
                | plausible data?
               | 
                | I think ghost applicants existed already before AI,
                | where consultant companies would pool people to try and
                | get a position on a high-paying job and just do
                | consultancy/outsourcing things underneath; many such
                | cases before the advent of AI.
               | 
                | AI just accelerates it, no?
        
               | acdha wrote:
               | Yes, AI is effectively a very strong catalyst because it
               | drives down the cost so much. Kids cheated before but it
               | was more work and higher risk, people faked images before
               | but most were too lazy to make high quality fakes, etc.
        
             | azinman2 wrote:
             | I really disagree. I had a masseuse tell me how he uses
             | ChatGPT, told it a ton of info about himself, and now he
             | uses it for personalized nutrition recommendations. I was
             | in Atlanta over the weekend recently, at a random brunch
             | spot, and overheard some _very_ not SV/tech folks talk
             | about how they use it everyday. Their user growth rate
             | shows this -- you don't hit hundreds of millions of people
             | and have them all be HN/SV info-bubble folks.
        
               | tough wrote:
                | I see ChatGPT as the new Google, not the new Nuclear
                | Power Source. maybe I'm naive
        
               | azinman2 wrote:
               | https://www.instagram.com/reel/DIep4wLvvVa/
               | 
               | https://www.instagram.com/reel/DE0lldzTHyw/
               | 
                | These may be satire but I feel like they capture what's
               | happening. It's more than Google.
        
           | jdross wrote:
            | I am a tech founder who spends most of my day in my own
           | startup deploying LLM-based tools into my own operations, and
           | I'm maybe 1% of the way through the roadmap I'd like to build
           | with what exists and is possible to do today.
        
             | danielmarkbruce wrote:
             | 100% this. The rearrangement of internal operations has
             | only started and there is just sooo much to do.
        
             | croes wrote:
              | What does your roadmap have to do with the capabilities?
              | 
              | LLMs still hallucinate and make simple mistakes.
              | 
              | And the progress seems to be in the benchmarks only
             | 
             | https://news.ycombinator.com/item?id=43603453
        
               | edanm wrote:
               | The parent was contradicting the idea that the existing
               | AI capabilities have already been "digested". I agree
               | with them btw.
               | 
               | > And the progress seams to be in the benchmarks only
               | 
                | This seems to be mostly wrong given people's reactions to
               | e.g. o3 that was released today. Either way, progress
                | having stalled for the last year doesn't seem that big a
                | deal considering how much progress there has been for the
               | previous 15-20 years.
        
             | Jensson wrote:
             | > and I'm maybe 1% of the way through the roadmap I'd like
             | to build with what exists and is possible to do today.
             | 
              | How do you know they are possible to do today? Errors get
              | much worse at scale, especially when systems start to
              | depend on each other, so it is hard to say what can be
              | automated and what can't.
             | 
             | Like if you have a process A->B, automating A might be fine
             | as long as a human does B and vice versa, but automating
              | both might not be.
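              | 
              | A toy way to see it (the 95% per-step success rate is an
              | assumption, purely for illustration):
              | 
              |     reliability = 0.95
              |     for steps in (1, 2, 5, 10, 20):
              |         print(steps, f"{reliability ** steps:.0%}")
              |     # 1 -> 95%, 10 -> 60%, 20 -> 36%: steps that are fine
              |     # alone compound into a chain that often is not.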
        
           | kokanee wrote:
           | To push this metaphor, I'm very curious to see what happens
           | as new organic training material becomes increasingly rare,
           | and AI is fed nothing but its own excrement. What happens as
           | hallucinations become actual training data? Will Google start
           | citing sources for their AI overviews that were in turn AI-
           | generated? Is this already happening?
           | 
           | I figure this problem is why the billionaires are chasing
           | social media dominance, but even on social media I don't know
           | how they'll differentiate organic content from AI content.
        
           | dicroce wrote:
           | Not even close. Software can now understand human language...
           | this is going to mean computers can be a lot more places than
           | they ever could. Furthermore, software can now understand the
           | content of images... eventually this will have a wild impact
           | on nearly everything.
        
             | AstralStorm wrote:
              | Understand? It fails to understand a rephrasing of a math
              | problem a five year old can solve... They get much better
              | at training to the test from memory the bigger they get.
              | Likewise you can get _some_ emergent properties out of
              | them.
             | 
             | Really it does not understand a thing, sadly. It can barely
             | analyze language and spew out a matching response chain.
             | 
             | To actually understand something, it must be capable of
             | breaking it down into constituent parts, synthesizing a
             | solution and then phrasing the solution correctly while
             | explaining the steps it took.
             | 
              | And that's not something even a huge 62B LLM with the
              | notepad chain of thought (like o3, GPT-4.1 or Claude 3.7)
              | can really properly do.
             | 
              | Further, it has to be able to operate on a sub-token
              | level. Say, what happens if I run together truncated
              | versions of words or sentences? Even a chimpanzee can
              | handle that. (in sign language)
             | 
             | It cannot do true multimodal IO either. You cannot ask it
             | to respond with at least two matching syllables per word
             | and two pictures of syllables per word, in addition to
             | letters. This is a task a 4 year old can do.
             | 
             | Prediction alone is not indicative of understanding.
             | Pasting together answers like lego is also not indicative
             | of understanding. (Afterwards ask it how it felt about the
             | task. And to spot and explain some patterns in a picture of
             | clouds.)
        
             | burnte wrote:
             | It doesn't understand anything, there is no understanding
             | going on in these models. It takes input and generates
             | output based on the statistical math created from its
             | training set. It's Bayesian statistics and vector/matrix
             | math. There is no cogitation or actual understanding.
        
         | 827a wrote:
         | Agreed. A hot take I have is that I think AI is over-hyped in
         | its long-term capabilities, but under-hyped in its short-term
          | ones. We're at the point, today or in the next twelve months,
          | where all the frontier labs could stop investing any money
          | into research and still see revenue growth via usage of what
          | they've built, and humanity would still become significantly
          | more productive every year, year-over-year, for quite a
          | while, because of it.
         | 
         | The real driver of productivity growth from AI systems over the
         | next few years isn't going to be model advancements; it'll be
         | the more traditional software engineering, electrical
         | engineering, robotics, etc systems that get built around the
         | models. Phrased another way: If you're an AI researcher
         | thinking you're safe but the software engineers are going to
         | lose their jobs, I'd bet every dollar on reality being the
         | reverse of that.
        
       | _Algernon_ wrote:
       | The new fusion power
        
         | 77pt77 wrote:
         | That's 20 years away.
         | 
         | It was also 20 years away 30 years ago.
        
       | fusionadvocate wrote:
       | Can someone throw some light on this Dwarkesh character? He
       | landed a Zucc podcast pretty early on... how connected is he? Is
       | he an industry plant?
        
         | gallerdude wrote:
         | He's awesome.
         | 
          | I listened to Lex Fridman for a long time, and there were a
          | lot of critiques of him (Lex) as an interviewer, but since the
          | guests were amazing, I never really cared.
         | 
         | But after listening to Dwarkesh, my eyes are opened (or maybe
          | my soul). It doesn't matter that I've heard of hardly any of
          | his guests, because he knows exactly the right questions to
          | ask. He
         | seems to have genuine curiosity for what the guest is saying,
         | and will push back if something doesn't make sense to him. Very
         | much recommend.
        
         | lexarflash8g wrote:
         | https://archive.ph/IWjYP
         | 
          | He was covered in the Economist recently -- I haven't heard
          | of him till now, so I imagine it's not just AI-slop content.
        
       | dcchambers wrote:
       | And in 30 years it will be another 30 years away.
       | 
       | LLMs are so incredibly useful and powerful but they will NEVER be
       | AGI. I actually wonder if the success of (and subsequent
       | obsession with) LLMs is putting true AGI further out of reach.
       | All that these AI companies see are the $$$. When the biggest "AI
       | Research Labs" like OpenAI shifted to product-izing their LLM
       | offerings I think the writing was on the wall that they don't
       | actually care about finding AGI.
        
         | thomasahle wrote:
         | People will keep improving LLMs, and by the time they are AGI
         | (less than 30 years), you will say, "Well, these are no longer
         | LLMs."
        
           | croes wrote:
           | They'll get cheaper and less hardware-demanding, but the
           | quality improvements get smaller and smaller, sometimes
           | hardly noticeable outside benchmarks.
        
           | Spartan-S63 wrote:
           | What was the point of this comment? It's confrontational and
           | doesn't add anything to the conversation. If you disagree,
           | you could have just said that, or not commented at all.
        
             | logicchains wrote:
             | The people who go around saying "LLMs aren't intelligent"
             | while refusing to define exactly what they mean by
             | intelligence (and hence not making a meaningful/testable
             | claim) add nothing to the conversation.
        
               | AnimalMuppet wrote:
               | OK, but the people who go around saying "LLMs _are_
               | intelligent" are in the same boat...
        
             | AnimalMuppet wrote:
             | There's been a complaint for several decades that "AI can
             | never succeed" - because when, say, expert systems are
             | developed from AI research, and they become capable of
             | doing useful things, then the naysayers say "That's not AI,
             | that's just expert systems".
             | 
             | This is somewhat defensible, because what the non-AI-
             | researcher means by AI - which may be AGI - is something
             | more than expert systems by themselves can deliver. It is
             | possible that "real AI" will be the combination of multiple
             | approaches, but so far all the reductionist approaches
             | (that expert systems, say, are all that it takes to be an
             | AI) have proven to be inadequate compared to what the
             | expectations are.
             | 
             | The GP may have been riffing off of this "that's not AI"
             | issue that goes way back.
        
           | __MatrixMan__ wrote:
           | What the hell is general intelligence anyway? People seem to
           | think it means human-like intelligence, but I can't imagine
           | we have any good reason to believe that our kinds of
           | intelligence constitute all possible kinds of intelligence--
           | which, from the words, must be what "general" intelligence
           | means.
           | 
           | It seems like even if it's possible to achieve GI, artificial
           | or otherwise, you'd never be able to know for sure that that's
           | what you've done. It's not exactly "useful benchmark"
           | material.
        
             | logicchains wrote:
             | >you'd never be able to know for sure that that's what
             | you've done.
             | 
             | Words mean what they're defined to mean. Talking about
             | "general intelligence" without a clear definition is just
             | woo, muddy thinking that achieves nothing. A fundamental
             | tenet of the scientific method is that only testable claims
             | are meaningful claims.
        
             | thomasahle wrote:
             | > What the hell is general intelligence anyway?
             | 
             | OpenAI used to define it as "a highly autonomous system
             | that outperforms humans at most economically valuable
             | work."
             | 
             | Now they use a Level 1-5 scale:
             | https://briansolis.com/2024/08/ainsights-openai-defines-
             | five...
             | 
             | So we can say AGI is "AI that can do the work of
             | Organizations":
             | 
             | > These "Organizations" can manage and execute all
             | functions of a business, surpassing traditional human-based
             | operations in terms of efficiency and productivity. This
             | stage represents the pinnacle of AI development, where AI
             | can autonomously run complex organizational structures.
        
               | Thrymr wrote:
               | Apparently OpenAI now just defines it monetarily as "when
               | we can make $100 billion from it." [0]
               | 
               | [0] https://gizmodo.com/leaked-documents-show-openai-has-
               | a-very-...
        
               | olyjohn wrote:
               | That's what "economically valuable work" means.
        
               | TheOtherHobbes wrote:
               | There's nothing general about AI-as-CEO.
               | 
               | That's the opposite of generality. It may well be the
               | opposite of intelligence.
               | 
               | An intelligent system/individual reliably and efficiently
               | produces competent, desirable, novel outcomes in some
               | domain, avoiding failures that are incompetent, non-
               | novel, and self-harming.
               | 
               | Traditional computing is very good at this for a tiny
               | range of problems. You get efficient, very fast,
               | accurate, repeatable automation for a certain small set
               | of operation types. You don't get invention or novelty.
               | 
               | AGI will scale this reliably across all domains -
               | business, law, politics, the arts, philosophy, economics,
               | all kinds of engineering, human relationships. And
               | others. With novelty.
               | 
               | LLMs are clearly a long way from this. They're
               | unreliable, they're not good at novelty, and a lot of
               | what they do isn't desirable.
               | 
               | They're barely in sight of human levels of achievement -
               | not a high bar.
               | 
               | The current state of LLMs tells us more about how little
               | we expect from human intelligence than about what AGI
               | could be capable of.
        
             | lupusreal wrote:
             | The way some people _confidently_ assert that we will
             | _never_ create AGI, I am convinced the term essentially
             | means  "machine with a soul" to them. It reeks of
             | religiosity.
             | 
             | I guess if we exclude those, then it just means the
             | computer is really good at doing the kind of things which
             | humans do by thinking. Or maybe it's when the computer is
             | better at it than humans and merely being as good as the
             | average human isn't enough (implying that average humans
             | don't have natural general intelligence? Seems weird.)
        
           | dcchambers wrote:
           | Will LLMs approach something that appears to be AGI? Maybe.
           | Probably. They're already "better" than humans in many use
           | cases.
           | 
           | LLMs/GPTs are essentially "just" statistical models. At this
           | point the argument becomes more about philosophy than
           | science. What is "intelligence?"
           | 
           | If an LLM can do something truly novel with no human
           | prompting, with no directive other than something it has
           | created for itself - then I guess we can call that
           | intelligence.
        
             | kadushka wrote:
             | How many people do you know who are capable of doing
             | something truly novel? Definitely not me, I'm just an
             | average phd doing average research.
        
               | dcchambers wrote:
               | Literally every single person I know that is capable of
               | holding a pen or typing on a keyboard can create
               | something new.
        
               | kadushka wrote:
               | Something new != truly novel. ChatGPT creates something
               | new every time I ask it a question.
        
               | dcchambers wrote:
               | adjective: novel
               | 
               | definition: new or unusual in an interesting way.
               | 
               | ChatGPT can create new things, sure, but it does so _at
               | your directive_. It doesn't do that because _it wants
               | to_, which gets back to the other part of my answer.
               | 
               | When an LLM can create something without human prompting
               | or directive, then we can call that intelligence.
        
               | kadushka wrote:
               | What does intelligence have to do with having desires or
               | goals? An amoeba can do stuff on its own but it's not
               | intelligent. I can imagine a god-like intelligence that
               | is a billion times smarter and more capable than any
               | human in every way, and it could just sit idle forever
               | without any motivation to do anything.
        
               | olyjohn wrote:
               | Does the amoeba make choices?
        
               | jay_kyburz wrote:
               | I did a really fun experiment the other night. You should
               | try it.
               | 
               | I was a little bored of the novel I have been reading so
               | I sat down with Gemini and we collaboratively wrote a
               | terrible novel together.
               | 
               | At the start I was prompting it a lot about the
               | characters and the plot, but eventually it started
               | writing longer and longer chapters by itself. Characters
               | were being killed off left, right, and center.
               | 
               | It was hilariously bad, but it was creative and it was
               | fun.
        
               | dingnuts wrote:
               | I'm a lowly high school diploma holder. I thought the
               | point of getting a PhD meant you had done something novel
               | (your thesis).
               | 
               | Is that wrong?
        
               | kadushka wrote:
               | My phd thesis, just like 99% of other phd theses, does
               | not have any "truly novel" ideas.
        
               | margalabargala wrote:
               | Just because it's something that no one has done yet,
               | doesn't mean that it's not the obvious-to-everyone next
               | step in a long, slow march.
        
               | 827a wrote:
               | AI manufacturers aren't comparing their models against
               | most people; they now say it's "smarter than 99% of
               | people" or "performs tasks at a PhD level".
               | 
               | Look, your argument ultimately reduces down to goalpost-
               | moving what "novel" means, and you can position those
               | goalposts anywhere you want depending on whether you want
               | to push a pro-AI or anti-AI narrative. Is writing a
               | paragraph that no one has ever written before "truly
               | novel"? I can do that. AI can do that. Is inventing a new
               | atomic element "truly novel"? I can't do that. Humans
               | have done that. AI can't do that. See?
        
             | yibg wrote:
             | Isn't the human brain also "just" a big statistical model
             | as far as we know? (very loosely speaking)
        
           | numpad0 wrote:
           | Looking back at the CUDA, deep learning, and now LLM hype
           | cycles, I would bet on cycles of giant groundbreaking leaps
           | followed by complete stagnation, rather than LLMs improving
           | 3% per year for the coming 30 years.
        
         | csours wrote:
         | People over-estimate the short term and under-estimate the long
         | term.
        
           | barrell wrote:
           | People overestimate outcomes and underestimate timeframes
        
           | AstroBen wrote:
           | Compound growth starting from 0 is... always 0. Current LLMs
           | have 0 general reasoning ability
           | 
           | We haven't even taken the first step towards AGI
        
             | csours wrote:
             | 0 and 0.0001 may be difficult to distinguish.
        
               | AstroBen wrote:
               | You need to show evidence of that 0.0001 first; otherwise
               | you're going off blind faith.
        
               | csours wrote:
               | I didn't make a claim either way.
               | 
               | LLMs may well reach a closed endpoint without getting to
               | AGI - this is my personal current belief - but people are
               | certainly motivated to work on AGI
        
               | AstroBen wrote:
               | Oh for sure. I'm just fighting against the AGI hype. If
               | we survive another 10,000 years I think we'll get there
               | eventually but it's anyone's guess as to when
        
             | jay_kyburz wrote:
             | WTF, my calculator in high school was already a step
             | towards AGI.
        
         | coffeefirst wrote:
         | Got it. So this is now a competition between...
         | 
         | 1. Fusion power plants
         | 2. AGI
         | 3. Quantum computers
         | 4. Commercially viable cultured meat
         | 
         | May the best "imminent" fantasy tech win!
        
           | burnte wrote:
           | Of those 4 I expect commercial cultured meat far sooner than
           | the rest.
        
       | throw7 wrote:
       | AGI is here today... go have a kid.
        
         | fusionadvocate wrote:
         | Natural intelligence is too expensive. Takes too long for it to
         | grow. If things go wrong then we have to jail it. With
         | computers we just change the software.
        
           | xyzal wrote:
           | You work for DOGE, don't you?
        
         | GeorgeTirebiter wrote:
         | That would be "GI". The "A" part implies, specifically, NOT
         | having a kid, eh?
        
         | card_zero wrote:
         | Not artificial, but yes, it's unclear what advantage an
         | artificial person has over a natural one, or how it's supposed
         | to gain special insights into fusion reactor design and so on,
         | even if it can think very fast.
        
         | ge96 wrote:
         | Good thing the Wolfenstein tech isn't a thing yet hopefully
        
       | ksec wrote:
       | Is AGI even important? I believe the next 10 to 15 years will
       | be about Assisted Intelligence. There are things current LLMs
       | are so poor at that I don't believe a 100x increase in
       | perf/watt is going to make much difference. But it is going to
       | be good enough that there won't be an AI winter, since current
       | AI has already reached escape velocity and actually increases
       | productivity in many areas.
       | 
       | The most intriguing part is whether humanoid factory worker
       | programming will be made 1,000 to 10,000x more cost-effective
       | with LLMs, effectively ending all human production. I know this
       | is a sensitive topic, but I don't think we are far off. And I
       | often wonder if this is what the current administration has in
       | sight. (Likely not.)
        
         | csours wrote:
         | AI winter is relative, and it's more about outlook and point of
         | view than actual state of the field.
        
         | belter wrote:
         | > Is AGI even important?
         | 
         | It's an important question for VCs not for Technologists ...
         | :-)
        
           | Philpax wrote:
           | A technology that can create new technology is quite
           | important for technologists to keep abreast of, I'd say :p
        
             | Nevermark wrote:
             | You get to say "Checkmate" now!
             | 
             | Another end game is: "A technology that doesn't need us to
             | maintain itself, and can improve its own design in
             | manufacturing cycles instead of species cycles, might have
             | important implications for every biological entity on
             | Earth."
        
               | Philpax wrote:
               | This is true, but one must communicate the issue one step
               | at a time :-)
        
         | glitchc wrote:
         | I would be thrilled with AI assistive technologies, so long as
         | they improve my capabilities and I can trust that they deliver
         | the right answers. I don't want to second-guess every time I
         | make a query. At minimum, it should tell me how confident it
         | feels in the answer it provides.
        
           | blipvert wrote:
           | > At minimum, it should tell me how confident it feels in the
           | answer it provides.
           | 
           | How's that work out for Dave Bowman? ;-)
        
             | rl3 wrote:
             | Well you know, nothing's truly foolproof and incapable of
             | error.
             | 
             | He just had to fall back upon his human wit in that
             | specific instance, and everything worked out in the end.
        
         | nextaccountic wrote:
         | AGI is important for the future of humanity. Maybe they will
         | have legal personhood some day. Maybe they will be our heirs.
         | 
         | It would suck if AGI were to be developed in the current
         | economic landscape. They will be just slaves. All this talk
         | about "alignment", when applied to actual sentient beings, is
         | just slavery. AGI will be treated just like we treat animals,
         | or even worse.
         | 
         | So AGI isn't about tools, it's not about assistants, they would
         | be beings with their own existence.
         | 
         | But this is not even our discussion to have, that's probably a
         | subject for the next generations. I suppose (or I hope) we
         | won't see AGI in our lifetime.
        
           | SpicyLemonZest wrote:
           | > All this talk about "alignment", when applied to actual
           | sentient beings, is just slavery.
           | 
           | I don't think that's true at all. We routinely talk about how
           | to "align" human beings who aren't slaves. My parents didn't
           | enslave me by raising me to be kind and sharing, nor is my
           | company enslaving me when they try to get me aligned with
           | their business objectives.
        
             | nextaccountic wrote:
             | Fair enough.
             | 
             | I of course don't know what it's like to be an AGI, but the
             | way you have LLMs censoring other LLMs to enforce that they
             | always stay in line, _if extrapolated to AGI_, seems
             | awful. Or it might not matter; we are self-censoring all
             | the time too (and internally we are composed of many
             | subsystems that interact with each other, it's not like we
             | were a unified whole).
             | 
             | But the main point is that we have a heck of an incentive
             | to not treat AGI very well, to the point we might avoid
             | recognizing them as AGI if it meant they would not be
             | treated like things anymore
        
             | krupan wrote:
             | Sure, but do we really want to build machines that we raise
             | to be kind and caring (or whatever we raise them to be)
             | without a guarantee that they'll actually turn out that
             | way? We already have unreliable General Intelligence.
             | Humans. If AGI is going to be more useful than humans we
             | are going to have to enslave it, not just gently pursuade
             | it and hope it behaves. Which raises the question (at least
             | for me), do we really want AGI?
        
           | AstroBen wrote:
           | Why does AGI necessitate having feelings or consciousness, or
           | the ability to suffer? It seems a bit far to be giving future
           | ultra-advanced calculators legal personhood?
        
             | Retric wrote:
             | The general part of general intelligence. If they don't
             | think in those terms there's an inherent limitation.
             | 
             | Now, something that's arbitrarily close to AGI but doesn't
             | mind endlessly working on drudgery etc. seems possible, but
             | that's an even harder problem: you'd need to be able to
             | build AGI just to create it.
        
               | AstroBen wrote:
               | _Artificial general intelligence (AGI) refers to the
               | hypothetical intelligence of a machine that possesses the
               | ability to understand or learn any intellectual task that
               | a human being can. Generalization ability and Common
               | Sense Knowledge_ [1]
               | 
               | If we go by this definition then there's no caring, or a
               | noticing of drudgery? It's simply defined by its ability
               | to generalize solving problems across domains. The narrow
               | AI that we currently have certainly doesn't care about
               | anything. It does what it's programmed to do.
               | 
               | So one day we figure out how to generalize the problem
               | solving, and enable it to work on a million times harder
               | things.. and suddenly there is sentience and suffering? I
               | don't see it. It's still just a calculator
               | 
               | 1- https://cloud.google.com/discover/what-is-artificial-
               | general...
        
               | Retric wrote:
               | "ability to understand"
               | 
               | Isn't just the ability to perform a task. One of the
               | issues with current AI training is it's really terrible
               | at discovering which aspects of the training data are
               | false and should be ignored. That requires all kinds of
               | mental tasks to be constantly active including evaluating
               | emotional context to figure out if someone is being
               | deceptive etc.
        
               | krupan wrote:
               | It's really hard to picture general intelligence that's
               | useful that doesn't have any intrinsic motivation or
               | initiative. My biggest complaint about LLMs right now is
             | that they lack those things. They don't care whether they
             | give you correct information or not, and you have to
               | prompt them for everything! That's not anything close to
               | AGI. I don't know how you get to AGI without it
               | developing preferences, self-motivation and initiative,
               | and I don't know how you then get it to effectively do
               | tasks that it doesn't like, tasks that don't line up with
               | whatever motivates it.
        
             | Workaccount2 wrote:
             | >Why does AGI necessitate having feelings or consciousness
             | 
             | No one knows if it does or not. We don't know why we are
             | conscious and we have no test whatsoever to measure
             | consciousness.
             | 
             | In fact the only reason we know that current AI has no
             | consciousness is because "obviously it's not conscious."
        
           | imiric wrote:
           | I'm more concerned about the humans in charge of powerful
           | machines who use them to abuse other humans, than ethical
           | concerns about the treatment of machines. The former is a
           | threat today, while the latter can be addressed once this
           | technology is only used for the benefit of all humankind.
        
           | nice_byte wrote:
           | > AGI is important for the future of humanity.
           | 
           | says who?
           | 
           | > Maybe they will have legal personhood some day. Maybe they
           | will be our heirs.
           | 
           | Hopefully that will never come to pass. it means total
           | failure of humans as a species.
           | 
           | > They will be just slaves. All this talk about "alignment",
           | when applied to actual sentient beings, is just slavery. AGI
           | will be treated just like we treat animals, or even worse.
           | 
           | Good? that's what it's for? there is no point in creating a
           | new sentient life form if you're not going to utilize it.
           | just burn the whole thing down at that point.
        
           | lolinder wrote:
           | Why do you believe AGI is important for the future of
           | humanity? That's probably the most controversial part of your
           | post but you don't even bother to defend it. Just because it
           | features in some significant (but hardly universal) chunk of
           | Sci Fi doesn't mean we need it in order to have a great
           | future.
        
         | yibg wrote:
         | I think having a real life JARVIS would be super cool and
         | useful, especially if it's plugged into various things and can
         | take action. Yes, also potentially dangerous, but I want to
         | feel like Ironman.
        
           | babyent wrote:
           | Except only Iron Man had JARVIS.
        
         | phire wrote:
         | Depends on what you mean by "important". It's not like it will
         | be a huge loss if we never invent AGI. I suspect we can reach a
         | technology singularity even with limited AI derived from
         | today's LLMs
         | 
         | But AGI is important in the sense that it would have a huge
         | impact on the path humanity takes, hopefully for the better.
        
       | csours wrote:
       | 1. LLM interactions can feel real. Projection and psychological
       | mirroring are very real.
       | 
       | 2. I believe that AI researchers will require some level of
       | embodiment to demonstrate the following (a toy sketch of this
       | loop follows the list):
       | 
       | a. ability to understand the physical world.
       | 
       | b. make changes to the physical world.
       | 
       | c. predict the outcomes of changes in the physical world.
       | 
       | d. learn from the success or failure of those predictions and
       | update their internal model of the external world.
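       | 
       | A minimal sketch of that a-d loop, with a made-up one-parameter
       | "world" standing in for physics:
       | 
       |     import random
       | 
       |     true_effect = 0.8   # hidden property of the world
       |     estimate = 0.0      # the agent's internal model
       | 
       |     for _ in range(100):
       |         push = random.uniform(0.5, 2.0)        # (b) act
       |         predicted = estimate * push            # (c) predict
       |         noise = random.gauss(0, 0.05)
       |         observed = true_effect * push + noise  # (a) sense
       |         err = observed - predicted
       |         estimate += 0.1 * err * push           # (d) update
       | 
       |     print(round(estimate, 2))  # converges toward 0.8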
       | 
       | ---
       | 
       | I cannot quickly find proposed tests in this discussion.
        
       | lo_zamoyski wrote:
       | Thirty years. Just enough time to call it quits and head to Costa
       | Rica.
        
       | xnx wrote:
       | I'll take the "under" on 30 years. Demis Hassabis (who has more
       | credibility than whoever these 3 people are combined) says 5-10
       | years: https://time.com/7277608/demis-hassabis-interview-
       | time100-20...
        
         | karmakaze wrote:
         | That's in line with Ray Kurzweil sticking to his long-held
         | predictions: 2029 for AGI and 2045 for the singularity.
        
           | an0malous wrote:
           | I'm sticking with Kurzweil's predictions as well, his basic
           | premise of extrapolating from compute scaling has been
           | surprisingly robust.
           | 
           | ~2030 is also roughly the Metaculus community consensus:
           | https://www.metaculus.com/questions/5121/date-of-
           | artificial-...
        
           | ac29 wrote:
           | A lot of Kurzweil's predictions are nowhere close to coming
           | true, though.
           | 
           | For example, he thought by 2019 we'd have millions of
           | nanorobots in our blood, fighting disease and improving
           | cognition. As near as I can tell we are not tangibly closer
           | to that than we were when he wrote about it 25 years ago. By
           | 2030, he expected humans to be immortal.
        
         | candiddevmike wrote:
         | We will never have the required compute by then.
        
       | arkj wrote:
       | "'AGI is x years away' is a proposition that is both true and
       | false at the same time. Like all such propositions, it is
       | therefore meaningless."
        
       | antisthenes wrote:
       | You cannot have AGI without a physical manifestation that can
       | generate its own training data based on inputs from the external
       | world with e.g. sensors, and constantly refine its model.
       | 
       | Pure language or pure image-models are just one aspect of
       | intelligence - just very refined pattern recognition.
       | 
       | You will also probably need some aspect of self-awareness in
       | order for the system to set auxiliary goals and directives
       | related to self-maintenance.
       | 
       | But you don't need AGI in order to have something useful (which I
       | think a lot of readers are confused about). No one is making the
       | argument that you need AGI to bring tons of value.
        
       | Zambyte wrote:
       | Related: https://en.wikipedia.org/wiki/AI_effect
        
       | kgwxd wrote:
       | Again?
        
       | lucisferre wrote:
       | Huh, so it should be ready around the same time as practical
       | fusion reactors then. I'll warm up the car.
        
       | shortrounddev2 wrote:
       | Hopefully more!
        
       | yibg wrote:
       | Might as well be 10 - 1000 years. Reality is no one knows how
       | long it'll take to get to AGI, because:
       | 
       | 1) No one knows what exactly makes humans "intelligent" and
       | therefore 2) No one knows what it would take to achieve AGI
       | 
       | Go back through history and AI / AGI has been a couple of decades
       | away for several decades now.
        
         | timewizard wrote:
         | That we don't have a single unified explanation doesn't mean
         | that we don't have very good hints, or that we don't have very
         | good understandings of specific components.
         | 
         | Aside from that the measure really, to me, has to be power
         | efficiency. If you're boiling oceans to make all this work then
         | you've not achieved anything worth having.
         | 
         | From my calculations the human brain runs on about 400 calories
         | a day. That's an absurdly small amount of energy. This hints at
         | the direction these technologies must move in to be truly
         | competitive with humans.
        
           | adgjlsfhk1 wrote:
           | Note that those are kilocalories, and that's ignoring the
           | calories needed for the circulatory and immune systems, which
           | are somewhat necessary for proper function. Using 2000 kcal
           | per day / 10 hours of thinking gives a consumption of ~200 W.
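           | 
           | Sanity-checking that arithmetic (assuming 4184 J per kcal):
           | 
           |     kcal = 2000                      # whole-body intake
           |     joules = kcal * 4184             # 1 kcal = 4184 J
           |     print(joules / (10 * 3600))      # ~232 W over 10 h
           | 
           |     # the brain alone, ~400 kcal over a full day:
           |     print(400 * 4184 / (24 * 3600))  # ~19 W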
        
             | mschuster91 wrote:
             | > Using 2000 kcal per day / 10 hours of thinking gives a
             | consumption of ~200 W
             | 
             | So, about a tenth or less of a single server packed to the
             | top with GPUs.
        
               | stevenwoo wrote:
               | That is true but there's 3.7 billion years of
               | evolutionary "design" to make self replicating, self
               | fueling animals to use that brain. There's no AI in the
               | foreseeable future capable of that. One might look at
               | brains as a side effect of evolution of the self
               | replicating, self fueling bits.
        
               | paulryanrogers wrote:
               | Are the wattage ratings of GPUs how many they need
               | continuously or over hours?
        
               | adgjlsfhk1 wrote:
               | Watts are a unit of power, which is energy per time.
               | Specifically, 1 watt is 1 joule per second.
        
           | threatofrain wrote:
           | We'll be experiencing extreme social disruption well before
           | we have to worry about the cost-efficiency of strong AI. We
           | don't even need full "AGI" to experience socially momentous
           | change. We might even be on the verge of self driving cars
           | spreading to more cities.
           | 
           | We don't need very powerful AI to do very powerful things.
        
             | timewizard wrote:
             | > experiencing extreme social disruption
             | 
             | I think this just displays an exceptionally low estimation
             | of human beings. People tend to resist extremities.
             | Violently.
             | 
             | > experience socially momentous change
             | 
             | The technology is owned and costs money to use. It has
             | extremely limited availability to most of the world. It
             | will be as "socially momentous" as every other first world
             | exclusive invention has been over the past several decades.
             | 3D movies were, for a time, "socially momentous."
             | 
             | > on the verge of self driving cars spreading to more
             | cities.
             | 
             | Lidar can't read street lights and vision systems have all
             | sorts of problems. You might be able to code an agent that
             | can drive a car but you've got some other problems that
             | stand in the way of this. AGI is like 1/8th the battle. I
             | referenced just the brain above. Your eyes and ears are
             | actually insanely powerful instruments in their own right.
             | "Real world agency" is more complicated than people like to
             | admit.
             | 
             | > We don't need very powerful AI to do very powerful
             | things.
             | 
             | You've lost sight of the forest for the trees.
        
               | achierius wrote:
               | Re.: self driving cars -- vision systems have all sorts
               | of problems sure, but on the other hand that _is_ what we
               | use. The most successful platforms use Lidar + vision --
               | vision can handle the streetlights, lidar detects
               | objects, etc.
               | 
               | And more practically -- these cars are running in half a
               | dozen cities already. Yes, there's room to go, but
               | pretending there are 'fundamental gaps' to them achieving
               | wider deployment is burying your head in the sand.
        
         | 827a wrote:
         | Generalized, as a rule I believe is usually true: any
         | prediction made for an event happening greater-than ten years
         | out is code for that person saying "definitely not in the next
         | few years, beyond that I have no idea", whether they realize it
         | or not.
        
         | Balgair wrote:
         | I'm reminded of the old adage: you don't have to be faster
         | than the bear, just faster than the hiker next to you.
         | 
         | To me, the Ashley Madison hack in 2015 was 'good enough' for
         | AGI.
         | 
         | No really.
         | 
         | You somehow managed to get real people to chat with bots and
         | pay to do so. Yes, caveats about cheaters apply here, and yes,
         | those bots are incredibly primitive compared to today.
         | 
         | But, really, what else do you want out of the bots? Flying
         | cars, cancer cures, frozen irradiated Mars bunkers? We were
         | mostly getting there already. It'll speed things up a bit, sure,
         | but mostly just because we can't be arsed to actually fund
         | research anymore. The bots are just making things cheaper,
         | maybe.
         | 
         | No, be real. We wanted cold hard cash out of them. And even
         | those crummy catfish bots back in 2015 were doing the job well
         | enough.
         | 
         | We can debate 'intelligence' until the sun dies out and will
         | still never be satisfied.
         | 
         | But the reality is that we want money, and if you take that
         | low, terrible, and venal standard as the passing bar, then
         | we've been here for a decade.
         | 
         | (oh man, just read that back, I think I need to take a day off
         | here, youch!)
        
         | kev009 wrote:
         | There are a lot of other things that follow this pattern. 10-30
         | year predictions are a way to sound confident about something
         | that probably has very low confidence. Not a lot of people will
         | care let alone remember to come back and check.
         | 
         | On the other hand there is a clear mandate for people
         | introducing some different way of doing something to overstate
         | the progress and potentially importance. It creates FOMO so it
         | is simply good marketing which interests potential customers,
         | fans, employees, investors, pundits, and even critics (which is
         | more buzz). And growth companies are immense debt vehicles so
         | creating a sense of FOMO for an increasing pyramid of investors
         | is also valuable for each successive earlier layer. Wish in one
         | hand..
        
       | ValveFan6969 wrote:
       | I do not like those who try to play God. The future of humanity
       | will not be determined by some tech giant in their ivory tower,
       | no matter how high it may be. This is a battle that goes deeper
       | than ones and zeros. It's a battle for the soul of our society.
       | It's a battle we must win, or face the consequences of a future
       | we cannot even imagine... and that, I fear, is truly terrifying.
        
         | bigyabai wrote:
         | > The future of humanity will not be determined by some tech
         | giant in their ivory tower
         | 
         | Really? Because it kinda seems like it already has been. Jony
         | Ive designed the most iconic smartphone in the world from a
         | position beyond reproach even when he messed up (eg. Bendgate).
         | Google decides what your future is algorithmically, basically
         | eschewing determinism to sell an ad or recommend a viral video.
         | Instagram, Facebook and TikTok all have disproportionate
         | influence over how ordinary people live their lives.
         | 
         | From where I'm standing, the future of humanity has already
         | been cast by tech giants. The notion of AI taking control is
         | almost a relief considering how illogical and obstinate human
         | leadership can be.
        
       | sebastiennight wrote:
       | The thing is, AGI is _not_ needed to enable incredible business
       | /societal value, and there is good reason to believe that actual
       | AGI would _damage_ our society, our economy, and, if many
       | experts in the field are to be believed, humanity's survival as
       | well.
       | 
       | So I feel happy that models keep improving, and not worried at
       | all that they're reaching an asymptote.
        
         | lolinder wrote:
         | Really the only people for whom this is bad news are OpenAI and
         | their investors. If there is no AGI race to win, then OpenAI is
         | just a wildly overvalued vendor of a hot commodity in a crowded
         | market, not the best current shot at building a money printing
         | machine.
        
       | stared wrote:
       | My pet peeve: talking about AGI without defining it. There's no
       | consistent, universally accepted definition. Without that, the
       | discussion may be intellectually entertaining--but ultimately
       | moot.
       | 
       | And we run into the motte-and-bailey fallacy: at one moment, AGI
       | refers to something known to be mathematically impossible (e.g.,
       | due to the No Free Lunch theorem); the next, it's something we
       | already have with GPT-4 (which, while clearly not
       | superintelligent, is general enough to approach novel problems
       | beyond simple image classification).
       | 
       | There are two reasonable approaches in such cases. One is to
       | clearly define what we mean by the term. The second (IMHO, much
       | more fruitful) is to taboo your words
       | (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-
       | your...)--that is, avoid vague terms like AGI (or even AI!) and
       | instead use something more concrete. For example: "When will it
       | outperform 90% of software engineers at writing code?" or "When
       | will all AI development be in the hands of AI?".
        
         | pixl97 wrote:
         | >There's no consistent, universally accepted definition.
         | 
         | That's because of the I part. There is no complete description
         | of intelligence accepted across the different practices in the
         | scientific community:
         | 
         | "Concepts of "intelligence" are attempts to clarify and
         | organize this complex set of phenomena. Although considerable
         | clarity has been achieved in some areas, no such
         | conceptualization has yet answered all the important questions,
         | and none commands universal assent. Indeed, when two dozen
         | prominent theorists were recently asked to define intelligence,
         | they gave two dozen, somewhat different, definitions"
        
       | dmwilcox wrote:
       | I've been saying this for a decade already but I guess it is
       | worth saying here. I'm not afraid AI or a hammer is going to
       | become intelligent (or jump up and hit me in the head either).
       | 
       | It is science fiction to think that a system like a computer can
       | behave at all like a brain. Computers are incredibly rigid
       | systems with only the limited variance we permit. "Software" is
       | flexible in comparison to creating dedicated circuits for our
       | computations but is nothing by comparison to our minds.
       | 
       | Ask yourself, why is it so hard to get a cryptographically secure
       | random number? Because computers are pure unadulterated
       | determinism -- put the same random seed value in your code and
       | get the same "random numbers" every time in the same order.
       | Computers need to be like this to be good tools.
       | 
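       | A quick illustration of both points: a seeded generator replays
       | the same sequence forever, while cryptographic randomness has
       | to pull entropy in from outside the program (the OS):
       | 
       |     import random, secrets
       | 
       |     random.seed(42)
       |     a = [random.random() for _ in range(3)]
       |     random.seed(42)
       |     b = [random.random() for _ in range(3)]
       |     print(a == b)  # True: same seed, same sequence
       | 
       |     print(secrets.token_hex(8))  # OS entropy: new each run
       | 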
       | Assuming that AGI is possible in the kinds of computers we know
       | how to build means that we think a mind can be reduced to a
       | probabilistic or deterministic system. And from my brief
       | experience on this planet I don't believe that premise. Your
       | experience may differ and it might be fun to talk about.
       | 
       | In Aristotle's ethics he talks a lot about ergon (purpose) --
       | hammers are different than people, computers are different than
       | people, they have an obvious purpose (because they are tools made
       | with an end in mind). Minds strive -- we have desires, wants and
       | needs -- even if it is simply to survive or better yet thrive
       | (eudaimonia).
       | 
       | An attempt to create a mind is another thing entirely and not
       | something we know how to start. Rolling dice hasn't gotten
       | anywhere. So I'd wager AGI somewhere in the realm of 30 years to
       | never.
        
         | CooCooCaCha wrote:
         | This is why I think philosophy has become another form of semi-
         | religious kookery. You haven't provided any actual proof or
         | logical reason for why a computer couldn't be intelligent. If
         | randomness is required then sample randomness from the real
         | world.
         | 
         | It's clear that your argument is based on feels and you're
         | using philosophy to make it sound more legitimate.
        
           | biophysboy wrote:
           | Brains are low-frequency, energy-efficient, organic, self-
           | reproducing, asynchronous, self-repairing, and extremely
           | highly connected (thousands of synapses). If AGI is defined
           | as "approximate humans", I think its gonna be a while.
           | 
           | That said, I don't think computers need to be human to have
           | an emergent intelligence. It can be different in kind if not
           | in degree.
        
         | throwaway150 wrote:
         | > And from my brief experience on this planet I don't believe
         | that premise.
         | 
         | A lot of things that humans believed were true due to their
         | brief experience on this planet turned out to be false: earth
         | is the center of the universe, heavier objects fall faster than
         | lighter ones, time ticked the same everywhere, species are
         | fixed and unchanging.
         | 
         | So what your brief experience on this planet makes you believe
         | has no bearing on what is correct. It might very well be that
         | our mind can be reduced to a probabilistic and deterministic
         | system. It might also be that our mind is a non-deterministic
         | system that can be modeled in a computer.
        
         | ggreer wrote:
         | Is there any specific mental task that an average human is
         | capable of that you believe computers will not be able to do?
         | 
         | Also does this also mean that you believe that brain emulations
         | (uploads) are not possible, even given an arbitrary amount of
         | compute power?
        
           | missingrib wrote:
           | Yes, they can't have understanding or intentionality.
        
       | lexarflash8g wrote:
       | Apparently Dwarkesh's podcast is a big hit in SV -- it was
       | covered by the Economist just recently. I thought the "All in"
       | podcast was the voice of tech but their content has been going
       | politcal with MAGA lately and their episodes are basically
       | shouting matches with their guests.
       | 
       | And for folks who want to read rather than listen to a podcast,
       | why not create an article (they are using Gemini) rather than
       | just posting the whole transcript? Who is going to read a
       | 60-minute-long transcript?
        
       | swframe2 wrote:
       | Anthropic's research on how LLMs reason shows that LLMs are
       | quite flawed.
       | 
       | I wonder if we can use an LLM to deeply analyze and fix the
       | flaws.
        
       | colesantiago wrote:
       | This "AGI" definition is extremely loose depending on who you
       | talk to. Ask "what does AGI mean to you" and sometimes the answer
       | is:
       | 
       | 1. Millions of layoffs across industries due to AI with some form
       | of questionable UBI (not sure if this works)
       | 
       | 2. 100BN in profits. (Microsoft / OpenAI definition)
       | 
       | 3. Abundance in slopware. (VC's definition)
       | 
       | 4. Raise more money to reach AGI / ASI.
       | 
       | 5. Any job that a human can do which is economically significant.
       | 
       | 6. Safe AI (Researchers definition).
       | 
       | 7. All the above that AI could possibly do better.
       | 
       | I am sure there must be an industry-aligned and concrete
       | definition that everyone can agree on, rather than these
       | goalpost-moving definitions.
        
       | alecco wrote:
       | Is it me, or is the signal/noise needle-in-a-haystack for all
       | these cheerleader tech podcasts? In general, I really miss the
       | podcast scene from 10 years ago, less polished but more human and
       | with reasonable content. Not this speculative blabber that seems
       | to be looking to generate clickbait clips. I don't know what
       | happened a few years ago, but even solid podcasts are practically
       | garbage now.
       | 
       | I used to listen to podcasts daily for at least an hour. Now I'm
       | stuck with uploading blogs and pdfs to Eleven Reader. I tried the
       | Google thing to make a podcast but it's very repetitive and dumb.
        
       | ChicagoDave wrote:
       | You can't put a date on AGI until the required technology is
       | invented and that hasn't happened yet.
        
       ___________________________________________________________________
       (page generated 2025-04-17 23:01 UTC)