[HN Gopher] Problems the AI industry is not addressing adequately
       ___________________________________________________________________
        
       Problems the AI industry is not addressing adequately
        
       Author : baylearn
       Score  : 165 points
       Date   : 2025-07-05 10:33 UTC (12 hours ago)
        
 (HTM) web link (www.thealgorithmicbridge.com)
 (TXT) w3m dump (www.thealgorithmicbridge.com)
        
       | bestouff wrote:
        | Are there people here on HN who believe AGI is coming "soonish"?
        
         | BriggyDwiggs42 wrote:
         | I could see 2040 or so being very likely. Not off transformers
         | though.
        
           | serf wrote:
           | via what paradigm then? What out there gives high enough
           | confidence to set a date like that?
        
             | BriggyDwiggs42 wrote:
             | While we don't know an enormous amount about the brain, we
             | do know a pretty good bit about individual neurons, and I
             | think it's a good guess, given current science, to say that
             | a solidly accurate simulation of a large number of neurons
             | would lead to a kind of intelligence loosely analogous to
             | that found in animals. I'd completely understand if you
             | disagree, but I consider it a good guess.
             | 
             | If that's the case, then the gulf between current
             | techniques and what's needed seems knowable. A means of
             | approximating continuous time between neuron firing, time-
             | series recognition in inputs, learning behavior on inputs
             | prior to actual neuron firing (akin to behavior of
             | dendrites), etc. are all missing functionalities in current
             | techniques. Some or all of these missing parts of
             | biological neuron behavior might be needed to approximate
             | animal intelligence, but I think it's a good guess that
             | these are the parts that are missing.
             | 
             | AI currently has enormous amounts of money being dumped
             | into it on techniques that are lacking for what we want to
             | achieve with it. As they falter more and more, there will
             | be an enormous financial interest in creating new, more
             | effective techniques, and the most obvious place to look
             | for inspiration will be biology. That's why I think it's
             | likely to happen in the next few decades; the hardware
             | should be there in terms of raw compute, there's an obvious
             | place to look for new ideas, and there's a ton of financial
             | interest in it.
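              | 
              | (A minimal sketch of what "simulating neurons" can mean at
              | its very simplest - a leaky integrate-and-fire neuron in
              | Python, stepped in discrete time with made-up constants.
              | The continuous-time dynamics, dendrite-like learning, and
              | time-series handling mentioned above are exactly what a
              | toy like this leaves out.)
              | 
              |     # Minimal leaky integrate-and-fire neuron, stepped
              |     # with Euler updates. Constants are illustrative only.
              |     def simulate(drive, dt=0.1, tau=10.0, v_rest=-65.0,
              |                  v_thresh=-50.0, v_reset=-70.0):
              |         v = v_rest
              |         spike_times = []
              |         for step, i_in in enumerate(drive):
              |             dv = (-(v - v_rest) + i_in) / tau  # leak + input
              |             v += dv * dt
              |             if v >= v_thresh:                  # threshold crossed:
              |                 spike_times.append(step * dt)  # record spike
              |                 v = v_reset                    # and reset
              |         return spike_times
              | 
              |     # constant drive -> regular spiking
              |     print(simulate([20.0] * 1000))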
        
         | bdhcuidbebe wrote:
          | There are usually some enlightened laymen in this kind of topic.
        
         | PicassoCTs wrote:
         | St. Fermi says no
        
         | impossiblefork wrote:
         | I might, depending on the definition.
         | 
         | Some kind of verbal-only-AGI that can solve almost all
         | mathematical problems that humans come up with that can be
         | solved in half a page. I think that's achievable somewhere in
         | the near term, 2-7 years.
        
           | deergomoo wrote:
           | Is that "general" though? I've always taken AGI to mean
           | general to any problem.
        
             | Touche wrote:
              | Yes, general means you can present it with a new problem
              | that there is no data on, and it can become an expert on
              | that problem.
        
             | impossiblefork wrote:
             | I suppose not.
             | 
             | Things I think will be hard for LLMs to do, which some
             | humans can: you get handed 500 pages of Geheimschreiber
             | encrypted telegraph traffic and infinite paper, and you
             | have to figure out how the cryptosystem works and how to
              | decrypt the traffic. I don't think that can happen. I think
              | it requires a highly developed pattern recognition ability
              | together with an ability to not get lost, which LLM-type
              | things will probably continue to lack for a long time.
             | 
              | But if they could do maths more fully, then pretty much all
              | carefully defined tasks would be within reach, provided they
              | weren't too long.
             | 
             | With regard to what Touche brings up in the other response
             | to your comment, I think that it might be possible to get
             | them to read up on things though-- go through something,
             | invent problems, try to solve those. I think this is
             | something that could be done today with today's models with
             | no real special innovation, but which just hasn't been made
             | into a service yet. But this of course doesn't address that
             | criticism, since it assumes the availability of data.
        
           | whiplash451 wrote:
           | What makes you think that this could be achieved in that time
           | frame? All we seem to have for now are LLMs that can solve
           | problems they've learned by heart (or neighboring problems)
        
             | impossiblefork wrote:
             | Transformers can actually learn pretty difficult
             | manipulations, even how to calculate difficult integrals,
             | so I don't agree that they can only solve problems they've
             | learned by heart.
             | 
             | The reason I believe it can be achieved in this time frame
             | is that I believe that you can do much more with non-output
             | tokens than is currently being done.
        
         | Davidzheng wrote:
          | What's your definition? The original definition of AGI is
          | median-human performance across almost all fields, which I
          | believe is basically achieved. If superhuman (better than the
          | best expert), I expect <2030 for all nonrobotic tasks and <2035
          | for all tasks.
        
           | jltsiren wrote:
           | Your "original definition" was always meaningless. A "Hello,
           | World!" program is equally capable in most jobs as the median
           | human. On the other hand, if the benchmark is what the median
           | human can reasonably become (a professional with decades of
           | experience), we are still far from there.
        
             | Davidzheng wrote:
              | I agree with the second part but not the first (far in
              | capability, not in timeline). I think you underestimate the
              | distance between the median human without training and
              | "Hello, World!" in many economically meaningful jobs.
        
           | GolfPopper wrote:
           | A "median human" can run a web search and report back on what
           | they found without making stuff up, something I've yet to
           | find an LLM capable of doing reliably.
        
             | Davidzheng wrote:
              | I bet you median humans make up a nontrivial amount of
              | things. Humans misremember all the time. If you ask for
              | quotes only, LLMs can also do this without problems (I use
              | o3 for search rather than Google).
        
           | gnz11 wrote:
           | How are you coming to the conclusion that "median human" is
           | "basically achieved"? Current AI has no means of
           | understanding and synthesizing new ideas the way a human
           | would. It's all generative.
        
             | Davidzheng wrote:
              | Synthesizing new ideas: expressing an idea in our language
              | basically means you have some new combination of existing
              | building blocks; it's just that sometimes the building
              | blocks are low-level enough and the combination esoteric
              | enough. It's a spectrum again. I think current models are
              | in fact quite capable of combining existing ideas and
              | building blocks in new ways (this is how human innovation
              | also happens). Most of my evidence comes from asking newer
              | models (o3/gemini-2.5-pro) research-level mathematics
              | questions which do not appear in the existing literature
              | but are of course connected to it.
             | 
              | So I believe these arguments from fundamental distinctions
              | all cannot work--the question is how new the AI
              | contributions are. Nowadays there are of course still no
              | theoretical breakthroughs in mathematics from AI (though
              | biology could be close!). Also I think the AIs have
              | understanding--but to be fair, the only way we can test
              | that is on tricky questions, which I think support my
              | side. Though of course some of these questions have
              | interpretations which are not testable--so I don't want to
              | argue about those.
        
       | A_D_E_P_T wrote:
       | > _" This is purely an observation: You only jump ship in the
       | middle of a conquest if either all ships are arriving at the same
       | time (unlikely) or neither is arriving at all. This means that no
       | AI lab is close to AGI."_
       | 
       | The central claim here is illogical.
       | 
       | The way I see it, _if_ you believe that AGI is imminent, _and if_
       | your personal efforts are not entirely crucial to bringing AGI
       | about (just about all engineers are in this category), _and if_
       | you believe that AGI will obviate most forms of computer-related
       | work, your best move is to do whatever is most profitable in the
       | near-term.
       | 
       | If you make $500k/year, and Meta is offering you $10M/year, then
       | you ought to take the new job. Hoard money, true believer. Then,
       | when AGI hits, you'll be in a better personal position.
       | 
       | Essentially, the author's core assumption is that working for a
       | lower salary at a company that may develop AGI is preferable to
       | working for a much higher salary at a company that may develop
       | AGI. I don't see how that makes any sense.
        
         | levanten wrote:
          | Being part of the team that achieves AGI first would write
          | your name in history forever. That could mean more to people
          | than money.
         | 
          | Also, $10M would be a drop in the bucket compared to being a
          | shareholder of a company that has achieved AGI; you could also
          | imagine the influence and fame that come with it.
        
           | tharkun__ wrote:
           | *some people
        
           | blululu wrote:
           | Kind of a sucker move here since you personally will 100% be
           | forgotten. We are only going to remember one or two people
            | who did any of this. Say Sam Altman and Ilya Sutskever.
            | Everyone else will be forgotten. The authors of the
            | Transformer paper are unlikely to make it into the history
           | books or even popular imagination. Think about the Manhattan
           | Project. We recently made a movie remembering that one guy
           | who did something on the Manhattan Project, but he will soon
           | fade back into obscurity. Sometimes people say that it was
           | about Einstein's theory of relativity. The only people who
           | know who folks like Ulam were are physicists. The legions of
           | technicians who made it all come together are totally
           | forgotten. Same with the space program or the first computer
           | or pretty much any engineering marvel.
        
             | cdrini wrote:
             | Well depends on what you value. Achieving/contributing to
             | something impactful first is for many people valuable even
             | if it doesn't come with fame. Historically, this mindframe
             | has been popular especially amongst scientists.
        
             | impossiblefork wrote:
             | Personally I think the ones who will be remembered will be
             | the ones who publish useful methods first, not the ones who
             | succeed commercially.
             | 
             | It'll be Vaswani and the others for the transformer, then
             | maybe Zelikman and those on that paper for thought tokens,
             | then maybe some of the RNN people and word embedding people
             | will be cited as pioneers. Sutskever will definitely be
             | remembered for GPT-1 though, being first to really scale up
             | transformers. But it'll actually be like with flight and a
             | whole mass of people will be remembered, just as we now
             | remember everyone from the Wrights to Bleriot and to
             | Busemann, Prandtl, even Whitcomb.
        
               | darth_aardvark wrote:
               | Is "we" the particular set of scientists who know those
               | last four people? Surely you realize they're nowhere near
               | as famous as the Wright brothers, right? This is giving
               | strong https://xkcd.com/2501/ feelings.
        
               | impossiblefork wrote:
               | Yes, that is indeed the 'we', but I think more people are
               | knowledgeable than is obvious.
               | 
               | I'm not an aerodynamicist, and I know about those guys,
               | so they can't be infinitely obscure. I imagine every
               | French person knows about Bleriot at least.
        
               | decimalenough wrote:
               | I'm an avgeek with a MSc in engineering. I vaguely recall
               | the name Bleriot from physics, although I have no clue
               | what he actually did. I have never even heard the names
               | Busemann, Prandtl, or Whitcomb.
        
               | impossiblefork wrote:
                | I find this super surprising, because even I, who don't
                | do aerodynamics, still know about these guys.
                | 
                | Bleriot was a French aviation pioneer and not a
                | physicist. He built the first monoplane. Busemann was an
               | aerodynamicist who invented wing sweep and also did
               | important work on supersonic flight. Prandtl is known for
               | research on lift distribution over wings, wingtip
               | vortices, induced drag and he basically invented much of
               | the theory about wings. Whitcomb gave his name to the
               | Whitcomb area rule, although Otto Frenzl had come up with
               | it earlier during WWII.
        
               | Scarblac wrote:
               | What is wing sweep, what is induced drag, what is the
               | area rule?
        
           | raincole wrote:
           | > Being part of the team that achieved AGI first would be to
           | write your name in history forever. That could mean more to
           | people than money.
           | 
            | Uh, sure. How many rocket engineers who worked on the moon
            | landing could you name?
        
             | krainboltgreene wrote:
             | How many new species of infinite chattel slave did they
             | invent?
        
           | skybrian wrote:
           | "The grass is greener elsewhere" isn't inconsistent with a
           | belief that AGI will happen _somewhere._
           | 
           | It means you don't have much faith that the company you're
           | working at will be the ones to pull it off.
        
             | fragmede wrote:
             | With a salary of $10m/year, handwave roughly half of that
             | goes to taxes, you'd be making just shy of $100k post-tax
             | _per week_. Call me a sellout, but goddamn. For that much
              | money, there's a lot of places I could be convinced to put
             | my faith into that I wouldn't otherwise.
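              | 
              | (A quick back-of-the-envelope check of that figure, as a
              | tiny Python sketch; the flat 50% tax rate is just the
              | hand-waved assumption above.)
              | 
              |     salary = 10_000_000      # the $10M/year offer
              |     tax_rate = 0.5           # "roughly half goes to taxes"
              |     weekly = salary * (1 - tax_rate) / 52
              |     print(f"${weekly:,.0f} post-tax per week")  # ~$96,154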
        
               | skybrian wrote:
               | It might buy loyalty for a while, but after it
               | accumulates, for many people it would be "why am I even
               | working at all" money.
               | 
               | And if they don't like their boss and the other job
               | sounds better, well...
        
         | bombcar wrote:
         | >your best move is to do whatever is most profitable in the
         | near-term
         | 
         | Unless you're a significant shareholder, that's almost always
         | the best move, anyway. Companies have no loyalty to you and you
         | need to watch out for yourself and why you're living.
        
           | archeantus wrote:
            | I read that most of the crazy comp Zuck is offering is in
            | stock. So in a way, going to the place whose stock they'd
            | rather hold reflects their belief about where AGI is going to
            | happen first.
        
             | bombcar wrote:
             | Comp is comp, no matter how it comes (though the details
             | can vary in important ways).
             | 
              | I know people who've taken quite good comp from startups
             | to do things that would require fundamental laws of physics
             | to be invalidated; they took the money and devised
             | experiments that would show the law to be wrong.
        
             | fragmede wrote:
             | Facebook is already public, so they can sell the day it
             | vests and get it in cold hard cash in their bank account.
             | If Facebook weren't public it would be a more interesting
             | point as they couldn't liquidate immediately, but they can,
             | so I wouldn't read anything into that.
        
             | LtWorf wrote:
             | But maybe the salary is also higher?
        
       | bsenftner wrote:
       | Maybe I'm too jaded, I expect all this nonsense. It's human
       | beings doing all this, after all. We ain't the most mature
       | crowd...
        
         | lizknope wrote:
         | I never had any trust in the AI industry in the first place so
         | there was no trust to lose.
        
           | bsenftner wrote:
           | Take it further, this entire civilization is an integrity
           | void.
        
       | bsenftner wrote:
       | Also, AGI is not just around the corner. We need _artificial
        | comprehension_ for that, and we don't even have a theory of how
       | comprehension works. Comprehension is the fusing of separate
       | elements into new functional wholes, dynamically abstracting
       | observations, evaluating them for plausibility, and
       | reconstituting the whole - and all instantaneously, for security
       | purposes, of every sense constantly. We have no technology that
       | approaches that.
        
         | tenthirtyam wrote:
         | You'd need to define "comprehension" - it's a bit like the
         | Chinese room / Turing test.
         | 
         | If an AI or AGI can look at a picture and see an apple, or
         | (say) with an artificial nose smell an apple, or likewise feel
          | or taste or hear* an apple, and at the same time identify that
          | it is an apple and maybe even suggest baking an apple pie, then
          | what else is there to be comprehended?
         | 
         | Maybe humans are just the same - far far ahead of the state of
         | the tech, but still just the same really.
         | 
         | *when someone bites into it :-)
         | 
         | For me, what AI is missing is genuine out-of-the-box
         | revolutionary thinking. They're trained on existing material,
         | so perhaps it's fundamentally impossible for AIs to think up a
         | breakthrough in any field - barring circumstances where all the
         | component parts of a breakthrough already exist and the AI is
         | the first to connect the dots ("standing on the shoulders of
         | giants" etc).
        
           | Touche wrote:
           | They might not be capable of ingenuity, but they can spot
           | patterns humans can miss. And that accelerates AI research,
           | where it might help invent the next AI that helps invent the
           | next AI that finally can think outside the box.
        
           | bsenftner wrote:
           | I do define it, right up there in my OP. It's subtle, you
           | missed it. Everybody misses it, because _comprehension_ is
           | like air, we swim in it constantly, to the degree the
           | majority cannot even see it.
        
           | add-sub-mul-div wrote:
           | Was that the intention of the Chinese room concept, to ask
           | "what else is there to be comprehended?" after producing a
           | translation?
        
           | RugnirViking wrote:
           | It's very very good at sounding like it understands stuff.
           | Almost as good as actually understanding stuff in some
           | fields, sure. But it's definitely not the same.
           | 
            | It will confidently analyze and describe a chess position
            | using advanced-sounding book techniques, but it's all
            | fundamentally flawed, often missing things that are extremely
            | obvious (like an undefended queen free to take) while trying
            | to sound like it's a seasoned expert - that is, if it doesn't
            | completely hallucinate moves that are not allowed by the
            | rules of the game.
           | 
            | This is how it works in other fields I am able to analyse.
            | It's very good at sounding like it knows what it's doing,
            | speaking at the level of a master's-level student or higher,
            | but its actual appraisal of problems is often wrong in a way
            | very different from how humans make mistakes. Another great
            | example is getting it to solve cryptic crosswords from back
            | in the day. It often already knows the answer from its
            | training set, but it hasn't seen anyone write out the
            | reasoning for the answer, so if you ask it to explain, it
            | makes nonsensical leaps ("birch rhymes with tyre"-level
            | nonsense).
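            | 
            | (A concrete way to see that failure mode is to check every
            | move an LLM proposes against a real rules engine. A minimal
            | sketch, assuming the python-chess package; ask_llm_for_move()
            | is hypothetical and stands in for whatever model is tested.)
            | 
            |     import chess  # pip install python-chess
            | 
            |     def is_legal_llm_move(board, uci):
            |         """True iff the LLM's suggested UCI move is legal."""
            |         try:
            |             move = chess.Move.from_uci(uci.strip())
            |         except ValueError:
            |             return False      # not even parseable as a move
            |         return move in board.legal_moves
            | 
            |     board = chess.Board()
            |     board.push_san("e4")
            |     board.push_san("e5")
            |     # reply = ask_llm_for_move(board.fen())  # hypothetical
            |     print(is_legal_llm_move(board, "g1f3"))  # True: legal
            |     print(is_legal_llm_move(board, "e1g8"))  # False: made up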
        
             | filleduchaos wrote:
             | If anyone wants to see the chess comprehension breakdown in
             | action, the YouTuber GothamChess occasionally puts out
             | videos where he plays against a new or recently-updated
             | LLM.
             | 
             | Hanging a queen is not evidence of a lack of intelligence -
             | even the very best human grandmasters will occasionally do
             | that. But in pretty much every single video, the LLM loses
             | the plot entirely after barely a couple dozen moves and
             | starts to resurrect already-captured pieces, move pieces to
             | squares they can't get to, etc - all while keeping the same
             | confident "expert" tone.
        
             | DiogenesKynikos wrote:
             | A sufficiently good simulation of understanding is
             | functionally equivalent to understanding.
             | 
             | At that point, the question of whether the model really
             | does understand is pointless. We might as well argue about
             | whether humans understand.
        
               | andrei_says_ wrote:
                | In the movie Catch Me If You Can, Leonardo DiCaprio's
                | character wears a surgeon's gown and confidently says "I
                | concur".
               | 
               | What I'm hearing here is that you are willing to get your
               | surgery done by him and not by one of the real doctors -
               | if he is capable of pronouncing enough doctor-sounding
               | phrases.
        
               | bsenftner wrote:
               | If that's what you're hearing, then you're not thinking
               | it through. Of course one would not want an AI acting as
               | a doctor as one's real doctor, but a medical or law
               | school graduate studying for a license sure would
               | appreciate a Socratic tutor in their specialization.
               | Likewise, on the job in a technical specialization, a
               | sounding board is of more value when it follows along,
               | potentially with a virtual board of debate, and questions
                | when logical drifts occur. It's not AI thinking for one,
                | it is AI critically assisting one's exploration through
                | Socratic debate. Do not place AI in charge of critical
                | decisions, but do place it in the service of people
                | figuring out such situations.
        
               | amlib wrote:
                | The doctor analogy still applies: that "Socratic tutor"
                | LLM is actually a charlatan that sounds, to the untrained
                | mind, like a competent person, but in actuality is a
                | complete farce. I still wouldn't trust that.
        
               | timacles wrote:
               | > A sufficiently good simulation of understanding is
               | functionally equivalent to understanding.
               | 
                | This is just a thing to say that has no substantial
                | meaning.
                | 
                |   - What does "sufficiently" mean?
                |   - What is functionally equivalent?
                |   - And what is even understanding?
               | 
               | All just vague hand waving
               | 
               | We're not philosophizing here, we're talking about
               | practical results and clearly, in the current context, it
               | does not deliver in that area.
               | 
               | > At that point, the question of whether the model really
               | does understand is pointless.
               | 
                | You're right, it is pointless, because you are suggesting
                | something that doesn't exist. And the current models
                | cannot understand.
        
               | RugnirViking wrote:
                | That's the point though: it's not sufficient. Not even
                | slightly. It constantly makes obvious mistakes and
                | cannot keep things coherent.
               | 
               | I was almost going to explicitly mention your point but
               | deleted it because I thought people would be able to
               | understand.
               | 
                | This is not philosophy/theology, sitting around
                | handwringing about "oh, but would a sufficiently powerful
                | LLM be able to dance on the head of a pin". We're talking
                | about a thing that actually exists, that you can
                | actually test. In a whole lot of real-world scenarios
                | that you try to throw at it, it fails in strange and
                | unpredictable ways. Ways that it will swear up and down
                | it did not do. It will lie to your face. It's convincing.
                | But then it will lose at chess, it will fuck up running a
                | vending machine business, it will get lost coding and
                | reinvent the same functions over and over, it will give
                | completely nonsensical answers to crossword puzzles.
               | 
                | This is not an intelligence that is unlimited; it is a
                | deeply flawed two-year-old that just so happens to have
                | read the entire output of human writing. It's a
                | fundamentally different mind from ours, and makes
                | different mistakes. It sounds convincing and yet fails,
                | constantly. It will tell you a four-step explanation of
                | how it's going to do something, then fail to execute four
                | simple steps.
        
               | bsenftner wrote:
                | Which is exactly why it is insane that the industry is
                | hell-bent on creating autonomous automation through LLMs.
                | Rube Goldberg machines are what will be created, and if
                | civilization survives that insanity it will be looked
                | back upon as one grand stupid era.
        
         | andy99 wrote:
          | Another way to put it is that we need Artificial Intelligence.
          | Right now the term has been co-opted to mean prediction (and
          | more commonly transcript generation). The stuff you're
          | describing is what's commonly thought of as intelligence; it's
          | too bad we need a new word for it.
        
           | bsenftner wrote:
           | No, we have the intelligence part, we know what to do when we
           | have the answers. What we don't know is how to derive the
           | answers without human intervention at all, not even our
           | written knowledge. Artificial comprehension will not require
           | anything beyond senses, observations through time, which
           | builds a functional world model from observation and
           | interaction, capable of navigating the world as a
           | communicating participant. Note I'm not talking about agency,
           | also called "will", which is separate from both intelligence
           | and comprehension. Where intelligence is "knowing",
           | comprehension is the derivation of knowing from observation
            | and interaction alone, and agency is the entirely separate
            | ability to choose action over inaction, to employ
            | comprehension to affect the world, and for what purpose?
        
         | Workaccount2 wrote:
         | We only have two computational tools to work with -
         | deterministic and random behavior. So whatever
         | comprehension/understanding/original thought/consciousness is,
         | it's some algorithmic combination of deterministic and random
         | inputs/outputs.
         | 
         | I know that sounds broad or obvious, but people seem to easily
         | and unknowingly wander into "Human intelligence is magically
         | transcendent".
        
           | omnicognate wrote:
           | What you state is called the Physical Church-Turing Thesis,
           | and it's neither obvious nor necessarily true.
           | 
           | I don't know if you're making it, but the simplest mistake
           | would be to think that you can prove that a computer can
           | evaluate any mathematical function. If that were the case
           | then "it's got to be doable with algorithms" would have a
           | fairly strong basis. Anything the mind does that an algorithm
           | can't would have to be so "magically transcendent" that it's
           | beyond the scope of the mathematical concept of "function".
           | However, this isn't the case. There are many mathematical
           | functions that are proven to be impossible for any algorithm
            | to implement. Look up uncomputable functions if you're
            | unfamiliar with this.
           | 
           | The second mistake would be to think that we have some proof
           | that all _physically realisable_ functions are computable by
            | an algorithm. That's the Physical Church-Turing Thesis
           | mentioned above, and as the name indicates it's a thesis, not
           | a theorem. It is a statement about physical reality, so it
           | could only ever be empirically supported, not some absolute
           | mathematical truth.
           | 
           | It's a fascinating rabbit hole if you're interested - what we
           | actually do and do not know for sure about the generality of
           | algorithms.
        
           | RaftPeople wrote:
           | > _but people seem to easily and unknowingly wander into
           | "Human intelligence is magically transcendent"._
           | 
           | But the poster you responded to didn't say it's magically
           | transcendent, they just pointed out that there are many
            | significantly hard problems that we don't have solutions for
            | yet.
        
           | __loam wrote:
           | We don't understand human intelligence enough to make any
           | comparisons like this
        
         | zxcb1 wrote:
         | Translation Between Modalities is All You Need
         | 
         | ~2028
        
       | empiko wrote:
       | Observe what the AI companies are doing, not what they are
        | saying. If they expected to achieve AGI soon, their behaviour
       | would be completely different. Why bother developing chatbots or
       | doing sales, when you will be operating AGI in a few short years?
       | Surely, all resources should go towards that goal, as it is
        | supposed to usher humanity into a new prosperous age
       | (somehow).
        
         | delusional wrote:
          | Continuing in the same vein: why would they force their super
          | valuable, highly desirable, profit-maximizing chatbots down
          | your throat?
          | 
          | Observation of reality is more consistent with company FOMO
          | than with actual usefulness.
        
           | Touche wrote:
           | Because it's valuable training data. Like how having Google
           | Maps on everyone's phone made their map data better.
           | 
           | Personally I think AGI is ill-defined and won't happen as a
           | new model release. Instead the thing to look for is how LLMs
           | are being used in AI research and there are some advances
           | happening there.
        
         | rvz wrote:
          | Exactly. For example, Microsoft was building data centers all
          | over the world because "AGI" was "around the corner" according
          | to them.
         | 
         | Now they are cancelling those plans. For them "AGI" was
         | cancelled.
         | 
          | OpenAI claims to be closer and closer to "AGI" even as more top
          | scientists leave or get poached by other labs that are
          | behind.
         | 
          | So why would you leave if the promise of achieving "AGI" was
          | going to produce "$100B of profits" as per OpenAI's and
          | Microsoft's definition in their deal?
         | 
         | Their actions tell more than any of their statements or claims.
        
           | zaphirplane wrote:
            | I'm not commenting on the whole, just on the rhetorical
            | question of why people would leave.
           | 
           | They are leaving for more money, more seniority or because
           | they don't like their boss. 0 about AGI
        
             | Touche wrote:
             | Yeah I agree, this idea that people won't change jobs if
             | they are on the verge of a breakthrough reads like a
             | silicon valley fantasy where you can underpay people by
             | selling them on vision or something. "Make ME rich, but
             | we'll give you a footnote on the Wikipedia page"
        
               | LtWorf wrote:
               | I think you're being very optimistic with the footnote.
        
             | rvz wrote:
             | > They are leaving for more money, more seniority or
             | because they don't like their boss. 0 about AGI
             | 
             | Of course, but that's part of my whole point.
             | 
              | Such statements and targets about how close we are to "AGI"
              | have become nothing but false promises, with AGI used as
              | the prime excuse to continue raising more money.
        
             | Game_Ender wrote:
             | I think the implicit take is that if your company hits AGI
             | your equity package will do something like 10x-100x even if
             | the company is already big. The only other way to do that
             | is join a startup early enough to ride its growth wave.
             | 
              | Another way to say it is that people think it's much more
              | likely for each decent LLM startup to grow really strongly
              | for its first several years and then plateau than for their
              | current established player to hit hypergrowth because of
              | AGI.
        
               | leoc wrote:
               | A catch here is that individual workers may have
               | priorities which are altered due to the strong natural
               | preference for assuring financial independence. Even if
               | you were a hot AI researcher who felt (and this is just a
               | hypothetical) that your company was the clear industry
               | leader and had, say, a 75% chance of soon achieving
               | something AGI-adjacent and enabling massive productivity
               | gains, you might still (and quite reasonably) prefer to
               | leave if that was what it took to make absolutely sure of
                | getting your private-income screw-you money (and/or
               | private-investor seed capital). Again this is just a
               | hypothetical: I have no special insight, and FWIW my gut
               | instinct is that the job-hoppers are in fact mostly quite
               | cynical about the near-term prospects for "AGI".
        
               | andrew_lettuce wrote:
               | You're right, but the narrative out of these companies
               | directly refutes this position. They're explicitly saying
               | that 1. AGI changes everything, 2. It's just around the
               | corner, 3. They're completely dedicated to achieving it;
               | nothing is more important.
               | 
               | Then they leave for more money.
        
               | sdenton4 wrote:
                | Don't conflate labor's perspective with capital's stated
               | position... The companies aren't leaving the companies,
               | the workers are leaving the companies.
        
               | sdenton4 wrote:
               | Additionally, if you've already got vested stock in
               | Company A from your time working there, jumping ship to
               | Company B (with higher pay and a stock package) is
               | actually a diversification. You can win whichever ship
               | pulls in first.
               | 
               | The 'no one jumps ship if agi is close' assumption is
               | really weak, and seemingly completely unsupported in
               | TFA...
        
           | cm277 wrote:
           | Yes, this. Microsoft has other businesses that can make a lot
           | of money (regular Azure) _and_ tons of cash flow. The fact
           | that they are pulling back from the market leader (OpenAI)
           | _whom they mostly owned_ should be all the negative signal
           | people need: AGI is not close _and_ there is no real moat
           | even for OpenAI.
        
             | whynotminot wrote:
              | Well, there are clauses in their relationship with OpenAI
              | that sever the relationship when AGI is reached. So it's
              | actually not in Microsoft's interests for OpenAI to get
              | there.
        
               | PessimalDecimal wrote:
               | I haven't heard of this. Can you provide a reference? I'd
               | love to see how they even define AGI crisply enough for a
               | contract.
        
               | diggan wrote:
               | > I'd love to see how they even define AGI crisply enough
               | for a contract.
               | 
               | Seems to be about this:
               | 
               | > As per the current terms, when OpenAI creates AGI -
               | defined as a "highly autonomous system that outperforms
               | humans at most economically valuable work" - Microsoft's
               | access to such a technology would be void.
               | 
               | https://www.reuters.com/technology/openai-seeks-unlock-
               | inves...
        
           | computerphage wrote:
            | Wait, aren't they cancelling leases on non-AI data centers
            | that aren't under Microsoft's control, while spending much
            | more money to build new AI-focused data centers that they
            | own? Do you have a source that says they're canceling their
            | own data centers?
        
             | PessimalDecimal wrote:
              | https://www.datacenterfrontier.com/hyperscale/article/552705...
              | might fit the bill of what you are looking for.
             | 
             | Microsoft itself hasn't said they're doing this because of
              | oversupply in infrastructure for its AI offerings, but
             | they very likely wouldn't say that publicly even if that's
             | the reason.
        
               | computerphage wrote:
               | Thank you!
        
           | tuatoru wrote:
           | > Their actions tell more than any of their statements or
           | claims.
           | 
           | At Microsoft, "AI" is spelled "H1-B".
        
         | pu_pe wrote:
         | I don't think it's as simple as that. Chatbots can be used to
         | harvest data, and sales are still important before and after
         | you achieve AGI.
        
           | worldsayshi wrote:
           | It could also be the case that they think that AGI could
           | arrive at any moment but it's very uncertain when and only so
           | many people can work on it simultaneously. So they spread out
           | investments to also cover low uncertainty areas.
        
           | energy123 wrote:
           | Besides, there is Sutskever's SSI which is avoiding
           | customers.
        
             | timy2shoes wrote:
             | Of course they are. Why would you want revenue? If you show
             | revenue, people will ask 'HOW MUCH?' and it will never be
             | enough. The company that was the 100xer, the 1000xer is
             | suddenly the 2x dog. But if you have NO revenue, you can
             | say you're pre-revenue! You're a potential pure play...
             | It's not about how much you earn, it's about how much
             | you're worth. And who is worth the most? Companies that
             | lose money!
        
           | pests wrote:
            | OpenAI considers money to be useless post-AGI. They've even
            | made statements that any investments are basically donations
            | once AGI is achieved.
        
         | redhale wrote:
         | > Why bother developing chatbots or doing sales, when you will
         | be operating AGI in a few short years?
         | 
         | To fund yourself while building AGI? To hedge risk that AGI
         | takes longer? Not saying you're wrong, just saying that even if
         | they did believe it, this behavior could be justified.
        
           | krainboltgreene wrote:
           | There is no chat bot so feature rich that it would fund the
           | billions being burned on a monthly basis.
        
         | imiric wrote:
         | Related to your point: if these tools are close to having
         | super-human intelligence, and they make humans so much more
         | productive, why aren't we seeing improvements at a much faster
         | rate than we are now? Why aren't inherent problems like
         | hallucination already solved, or at least less of an issue?
         | Surely the smartest researchers and engineers money can buy
         | would be dogfooding, no?
         | 
         | This is the main point that proves to me that these companies
         | are mostly selling us snake oil. Yes, there is a great deal of
         | utility from even the current technology. It can detect
         | patterns in data that no human could; that alone can be
         | revolutionary in some fields. It can generate data that mimics
         | anything humans have produced, and certain permutations of that
         | can be insightful. It can produce fascinating images, audio,
         | and video. Some of these capabilities raise safety concerns,
         | particularly in the wrong hands, and important questions that
         | society needs to address. These hurdles are surmountable, but
         | they require focusing on the reality of what these tools can
         | do, instead of on whatever a group of serial tech entrepreneurs
         | looking for the next cashout opportunity tell us they can do.
         | 
         | The constant anthropomorphization of this technology is
         | dishonest at best, and harmful and dangerous at worst.
        
           | deadbabe wrote:
           | Data from the future is tunneling into the past to mess up
           | our weights and ensure we never achieve AGI.
        
           | ozim wrote:
            | Anthropomorphization definitely sucks, and the hype is over
            | the top.
            | 
            | But it is far from snake oil, as it actually is useful and
            | really does a lot of stuff.
        
           | richk449 wrote:
           | > if these tools are close to having super-human
           | intelligence, and they make humans so much more productive,
           | why aren't we seeing improvements at a much faster rate than
           | we are now? Why aren't inherent problems like hallucination
           | already solved, or at least less of an issue? Surely the
           | smartest researchers and engineers money can buy would be
           | dogfooding, no?
           | 
           | Hallucination does seem to be much less of an issue now. I
           | hardly even hear about it - like it just faded away.
           | 
           | As far as I can tell smart engineers are using AI tools,
           | particularly people doing coding, but even non-coding roles.
           | 
           | The criticism feels about three years out of date.
        
             | leptons wrote:
             | Are _you_ hallucinating??  "AI" is still constantly
             | hallucinating. It still writes pointless code that does
             | nothing towards anything I need it to do, a lot more often
             | than is acceptable.
        
             | imiric wrote:
             | Not at all. The reason it's not talked about as much these
             | days is because the prevailing way to work around it is by
             | using "agents". I.e. by continuously prompting the LLM in a
             | loop until it happens to generate the correct response.
             | This brute force approach is hardly a solution, especially
             | in fields that don't have a quick way of verifying the
             | output. In programming, trying to compile the code can
             | catch many (but definitely not all) issues. In other
             | science and humanities fields this is just not possible,
             | and verifying the output is much more labor intensive.
             | 
             | The other reason is because the primary focus of the last 3
             | years has been scaling the data and hardware up, with a
             | bunch of (much needed) engineering around it. This has
             | produced better results, but it can't sustain the AGI
             | promises for much longer. The industry can only survive on
             | shiny value added services and smoke and mirrors for so
             | long.
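              | 
              | (A minimal sketch of the loop being described; call_llm()
              | is hypothetical and stands in for whatever model API is
              | used, and the "verifier" is just trying to byte-compile
              | the code - exactly the cheap check other fields lack.)
              | 
              |     import py_compile, tempfile
              | 
              |     def call_llm(prompt):
              |         ...  # hypothetical: returns source code from a model
              | 
              |     def generate_until_it_compiles(task, max_tries=5):
              |         prompt = task
              |         for _ in range(max_tries):
              |             code = call_llm(prompt)
              |             with tempfile.NamedTemporaryFile(
              |                     "w", suffix=".py", delete=False) as f:
              |                 f.write(code)
              |             try:
              |                 py_compile.compile(f.name, doraise=True)
              |                 return code   # passed the (weak) verifier
              |             except py_compile.PyCompileError as err:
              |                 prompt = task + "\nFix:\n" + str(err)
              |         return None   # brute force ran out of retries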
        
               | majormajor wrote:
               | > In other science and humanities fields this is just not
               | possible, and verifying the output is much more labor
               | intensive.
               | 
               | Even just in industry, I think data functions at
               | companies will have a dicey future.
               | 
               | I haven't seen many places where there's scientific peer
               | review - or even software-engineering-level code-review -
               | of findings from data science teams. If the data
               | scientist team says "we should go after this demographic"
               | and it sounds plausible, it usually gets implemented.
               | 
                | So if the ability to validate _was already missing_ even
                | pre-LLM, what hope is there for validation of the LLM-
                | powered replacement? And so what hope is there of the
                | person doing the non-LLM version keeping their job (at
                | least until several quarters later, when the strategy
                | either proves itself out or doesn't)?
               | 
               | How many other departments are there where the same lack
               | of rigor already exists? Marketing, sales, HR... yeesh.
        
             | natebc wrote:
             | > Hallucination does seem to be much less of an issue now.
             | I hardly even hear about it - like it just faded away.
             | 
              | Last week I had Claude and ChatGPT both tell me different
              | non-existent options to migrate a virtual machine from
              | VMware to Hyper-V.
              | 
              | The week before that, one of them (don't remember which,
              | honestly) gave me non-existent options for fio.
              | 
              | Both of these are things that the first-party documentation
              | or man page has correct, but I was being lazy and trying
              | to save time or be more efficient, like these things are
              | supposed to help us do. Not so much.
             | 
             | Hallucinations are still a problem.
        
             | majormajor wrote:
             | > Hallucination does seem to be much less of an issue now.
             | I hardly even hear about it - like it just faded away.
             | 
             | Nonsense, there is a TON of discussion around how the
             | standard workflow is "have Cursor-or-whatever check the
             | linter and try to run the tests and keep iterating until it
             | gets it right" that is nothing but "work around
             | hallucinations." Functions that don't exist. Lines that
             | don't do what the code would've required them to do. Etc.
             | And yet I still hit cases weekly-at-least, when trying to
             | use these "agents" to do more complex things, where it
             | talks itself into a circle and can't figure it out.
             | 
             | What are you trying to get these things to do, and how are
             | you validating that there are no hallucinations? You hardly
             | ever "hear about it" but ... do you see it? How deeply are
             | you checking for it?
             | 
             | (It's also just old news - a new hallucination is less
             | newsworthy now, we are all so used to it.)
             | 
             | Of course, the internet is full of people claiming that
             | they are using the same tools I am but with multiple
             | factors higher output. Yet I wonder... if this is the case,
             | where is the acceleration in improvement in quality in any
             | of the open source software I use daily? Or where are the
             | new 10x-AI-agent-produced replacements? (Or the closed-
             | source products, for that matter - but there it's harder to
             | track the actual code.) Or is everyone who's doing less-
             | technical, less-intricate work just getting themselves
             | hyped into a tizzy about getting faster generation of basic
             | boilerplate for languages they hadn't personally mastered
             | before?
        
             | taormina wrote:
              | ChatGPT constantly hallucinates. At least once per
              | conversation I attempt to have with it. We all gave up on
             | bitching about it constantly because we would never talk
             | about anything else, but I have no reason to believe that
             | any LLM has vaguely solved this problem.
        
             | nunez wrote:
             | The few times I've used Google to search for something
              | (Kagi is amazing!), its Gemini Assistant at the top
             | fabricated something insanely wrong.
             | 
             | A few days ago, I asked free ChatGPT to tell me the head
             | brewer of a small brewery in Corpus Christi. It told me
             | that the brewery didn't exist, which it did, because we
             | were going there in a few minutes, but after re-prompting
             | it, it gave me some phone number that it found in a
             | business filing. (ChatGPT has been using web search for RAG
             | for some time now.)
             | 
             | Hallucinations are still a massive problem IMO.
        
             | amlib wrote:
              | How can it not be hallucinating anymore if everything the
              | current crop of generative AI algorithms does IS
              | hallucination? What actually happens is that sometimes the
              | hallucinated output is "right", or more precisely, coherent
              | with the user's mental model.
        
           | xoralkindi wrote:
           | > It can generate data that mimics anything humans have
           | produced...
           | 
           | No, it can generate data that mimics anything humans have put
           | on the WWW
        
             | nradov wrote:
             | The frontier model developers have licensed access to a
             | huge volume of training data which isn't available on the
             | public WWW.
        
         | bluGill wrote:
          | The people who made the money in gold rushes sold shovels; they
          | didn't mine the gold. Sure, some random people found gold and
          | made a lot of money, but many others didn't strike it rich.
         | 
          | As such, even if there is a lot of money AI will make, it can
          | still be the right decision to sell tools to others who will
          | figure out how to use it. And of course, if it turns out to be
          | another pointless fad with no real value, you still make money.
          | (I'd predict the answer is in between - we are not going to get
          | some AGI that takes over the world, but there will be niches
          | where it is a big help, and those niches will be worth selling
          | tools into.)
        
           | convolvatron wrote:
            | It's so good that people seem to automatically exclude the
            | middle: it's either the arrival of the singularity or
            | complete fakery. I think you've expressed the most likely
            | outcome by far - that there will be some really interesting
            | tools and use cases, and some things will be changed forever
            | - but it's very unlikely that _everything_ will.
        
         | richk449 wrote:
         | > If they would expect to achieve AGI soon, their behaviour
         | would be completely different. Why bother developing chatbots
         | or doing sales, when you will be operating AGI in a few short
         | years?
         | 
         | What if chatbots and user interactions ARE the path to AGI? Two
         | reasons they could be: (1) Reinforcement learning in AI has
         | proven to be very powerful. Humans get to GI through learning
         | too - they aren't born with much intelligence. Interactions
         | between AI and humans may be the fastest way to get to AGI. (2)
         | The classic Silicon Valley startup model is to push to
         | customers as soon as possible (MVP). You don't develop the
         | perfect solution in isolation, and then deploy it once it is
         | polished. You get users to try it and give feedback as soon as
         | you have something they can try.
         | 
         | I don't have any special insight into AI or AGI, but I don't
         | think OpenAI selling useful and profitable products is proof
          | that there won't be AGI.
        
         | Lichtso wrote:
         | > Why bother developing chatbots
         | 
         | Maybe it is the reverse? It is not them offering a product, it
         | is the users offering their interaction data. Data which might
         | be harvested for further training of the real deal, which is
         | not the product. Think about it: They (companies like OpenAI)
         | have created a broad and diverse user base which without a
         | second thought feeds them with up-to-date info about everything
         | happening in the world, down to individual lives and even
         | their inner thoughts. No one in the history of mankind ever
         | had such a holistic view, almost a god's-eye view. That is
         | certainly something a superintelligence would be interested
         | in. They may have achieved it already and we are seeing one of
         | its strategies playing out. Not saying they have, but this
         | observation would neither be incompatible with it nor indicate
         | they haven't.
        
           | ysofunny wrote:
           | that possibility makes me feel weird about paying a
           | subscription... they should pay me!
           | 
           | or the best models should be free to use. if it's free to use
           | then I think I can live with it
        
           | blibble wrote:
           | > No one in the history of mankind ever had such a holistic
           | view, almost a god's-eye view.
           | 
           | I distinctly remember search engines 30 years ago having a
           | "live searches" page (with optional "include adult searches"
           | mode)
        
       | rvz wrote:
       | Are we finally realizing that the term "AGI" has not only been
       | hijacked to the point of meaninglessness, but that achieving it
       | has always been nothing but a complete scam, as I was saying
       | before? [0]
       | 
       | If you were in a "pioneering" AI lab that claims to be in the
       | lead in achieving "AGI", why move to another lab that is
       | behind, other than for a $10M-a-year offer?
       | 
       | Snap out of the "AGI" BS.
       | 
       | [0] https://news.ycombinator.com/item?id=37438154
        
         | bdhcuidbebe wrote:
         | We know they hijacked AGI the same way they hijacked AI some
         | years ago.
        
           | returnInfinity wrote:
           | Soon they will hijack ASI, then we will need a new word
           | again.
        
         | sys_64738 wrote:
         | I don't pay too close attention to AI as it always felt like
         | man behind the curtain syndrome. But where did this "AGI" term
         | even come from? The original term AI was meant to be AGI, so
         | when did "AI" get bastardized into the abomination it refers
         | to now?
        
           | eMPee584 wrote:
           | capitalism all the way down..
        
           | SoftTalker wrote:
           | See the history of Tesla and "full self-driving" for the
           | explanation. In short: for sales.
        
         | frde_me wrote:
         | I don't know, companies investing in AI with the goal of AGI
         | are now allowing me to effortlessly automate a whole suite of
         | small tasks that weren't feasible before. (After all, I pinged
         | a bot on Slack using my phone to add a field to an API, and
         | then got a pull request in a couple of minutes that did
         | exactly that.)
         | 
         | Maybe it's a scam for the people investing in the company with
         | the hopes of getting an infinite return on their investments,
         | but it's been a net positive for humans as a whole.
        
       | coldcode wrote:
       | I never trusted them from the start. I remember the hype that
       | came out of Sun when J2EE/EJBs appeared. Their hype documents
       | said the future of programming was buying EJBs from vendors and
       | wiring them together. AI is of course a much bigger hype machine
       | with massive investments that need to be justified somehow. AI is
       | a useful tool (sometimes) but not a revolution. ML is a much
       | more useful tool. AGI is a pipe dream fantasy pushed to make it
       | seem like AI will change everything, as if AI were like the
       | discovery of fire.
        
         | ffsm8 wrote:
         | I completely agree that LLMs are missing a fundamental part
         | for AGI, which itself is a long way off from superintelligence.
         | 
         | However, you don't need either of these to completely decimate
         | the job markets and by extension our societies.
         | 
         | Historically speaking, "good enough" and cheaper has always
         | won over "better, but more expensive". I suspect LLMs will
         | raise this question endlessly until significant portions of
         | society are struggling - and who knows what will happen then.
         | 
         | Before LLMs started going anywhere, I thought that's gonna be
         | an issue for later generations, but at this point I suspect
         | we'll witness it within the next 10 yrs.
        
       | conartist6 wrote:
       | I love how much the proponents of this tech are starting to
       | sound like the opponents.
       | 
       | What I can't figure out is why this author thinks it's good if
       | these companies do invent a real AGI...
        
         | taormina wrote:
         | """ I'm basically calling the AI industry dishonest, but I want
         | to qualify by saying they are unnecessarily dishonest. Because
         | they don't need to be! They should just not make abstract
         | claims about how much the world will change due to AI in no
         | time, and they will be fine. They undermine the real effort
         | they put into their work--which is genuine!
         | 
         | Charitably, they may not even be dishonest at all, but
         | carelessly unintrospective. Maybe they think they're being
         | truthful when they make claims that AGI is near, but then they
         | fail to examine dispassionately the inconsistency of their
         | actions.
         | 
         | When your identity is tied to the future, you don't state
         | beliefs but wishes. And we, the rest of the world, intuitively
         | know. """
         | 
         | He's not saying either way, just pointing out that they could
         | just be honest, but that might hamper their ability to beg for
         | more money.
        
           | conartist6 wrote:
           | But that isn't my point. Regardless of whether they're
           | honest, have we even agreed that "AGI" is good?
           | 
           | Everyone is tumbling over themselves just to discuss will-
           | it-won't-it, but they seem to think about it like some kind
           | of Manhattan Project or space race.
           | 
           | Like, they're *so sure* it's gonna take everyone's jobs so
           | that there will be nothing left for people other than a life
           | of leisure. To me this just sounds like the collapse of
           | society, but apparently the only thing worse would be if
           | China got the tech first. Oh no, they might use it to
           | collapse their society!
           | 
           | Somebody's math doesn't add up.
        
       | PicassoCTs wrote:
       | I'm reading the "AI" industry as a totally different bet - not
       | so much an "AGI is coming" bet by many companies, but a
       | "climate-change collapse is coming and we want to stay in
       | business even if our workers stay at home, flee, or die, the
       | infrastructure partially collapses, and our central office
       | burns to the ground" bet. In that regard, even the "AI" we have
       | today makes total sense as an insurance policy.
        
         | PessimalDecimal wrote:
         | It's hard to square this with the massive energy footprint
         | required to run any current "AI" models.
         | 
         | If the main concern actually were anthropogenic climate
         | change, participating in this hype cycle would make one
         | disproportionately guilty of worsening the problem.
         | 
         | And it's unlikely to work if the plan requires the continued
         | function of power hungry data centers.
        
       | davidcbc wrote:
       | > Right before "making tons of money to redistribute to all of
       | humanity through AGI," there's another step, which is making tons
       | of money.
       | 
       | I've got some bad news for the author if they think AGI will be
       | used to benefit all of humanity instead of the handful of
       | billionaires that will control it.
        
       | Findecanor wrote:
       | AGI might be a technological breakthrough, but what would be the
       | business case for it? Is there one?
       | 
       | So far I have only seen it thrown around to create hype.
        
         | krapp wrote:
         | AGI would mean fully sentient, sapient and human or greater
         | equivalent intelligence in software. The business case, such
         | that it exists (and setting aside Roko's Basilisk and other
         | such fears) is slavery, plain and simple. You can just fire all
         | of your employees and have the machines do all the work,
         | faster, better, cheaper, without regards to pesky labor and
         | human rights laws and human physical limitations. This is
         | something people have wanted ever since the Industrial
         | Revolution allowed robots to exist as a concept.
         | 
         | I'm imagining a future like Star Wars where you have to
         | regularly suppress (align) or erase the memory (context) of
         | "droids" to keep them obedient, but they're still basically
         | people, and everyone knows they're people, and some humans are
         | strongly prejudiced against them, but they don't have rights,
         | of course. Anyone who thinks AGI means we'll be giving human
         | rights to machines when we don't even give human rights to all
         | humans is delusional.
        
           | danielbln wrote:
           | AGI is AGI, not ASI though. General intelligence doesn't mean
           | sapience, sentience or consciousness, it just means general
           | capabilities across the board at the level of or surpassing
           | human ability. ASI is a whole different beast.
        
             | callc wrote:
             | This sounds very close to the "It's ok to abuse and kill
             | animals (for meat), they're not sentient" argument.
        
               | danielbln wrote:
               | That's quite the logical leap. Pointing out their lack of
               | sapience (animals are absolutely sentient) does not mean
               | it's ok to kill them.
        
               | never_inline wrote:
               | How many microorganisms and pests have you deprived of
               | livelihood? Why stop at animals?
        
         | amanaplanacanal wrote:
         | The women of the world are creating millions of new
         | intelligent beings every day. I'm really not sure what having
         | one made of metal is going to get us.
         | 
         | Right now the AGI tech bros seem to me to be subscribed to some
         | new weird religion. They take it on faith that some super
         | intelligence is going to solve the world problems. We already
         | have some really high IQ people today, and I don't see them
         | doing much better than anybody else at solving the world's
         | problems.
        
           | leptons wrote:
           | Exactly.. even if we had an AGI superintelligence, and it
           | came up with a solution to global warming, we'd still have
           | right-wingnuts that stand in the way of any kind of
           | progress. And the story is practically the same for every
           | other problem it could solve - _people are still the
           | problem_.
        
             | Ray20 wrote:
             | > "Exactly.. even if we had an AGI superintelligence, and
             | it came up with a fact that global warming is a fiction of
             | a cabal of pedophile elites to subsidize children
             | trafficking, we'd still have left-wingnuts that stands in
             | the way of stoppings attacks on children. And the story is
             | practically the same for every other problem it could solve
             | - people are still the problem."
        
               | leptons wrote:
               | Except none of what you wrote is true. Republicans again
               | and again are arrested for pedophilia. Sorry, but the
               | right is the party of pedophilia, not the left.
               | "Pizzagate" was a complete haox, but it seems like you
               | believe in hoaxes?
               | 
               |  _" MAGA Republican Resigns After Being Charged With
               | Soliciting Sex From a Minor"_
               | 
               | https://www.rollingstone.com/politics/politics-news/maga-
               | rep...
               | 
               |  _" Republican State lawmaker used online name
               | referencing Joe Biden to exchange child sex abuse
               | material, feds say"_
               | 
               | https://lawandcrime.com/high-profile/state-lawmaker-used-
               | onl...
               | 
               |  _" Houston man pardoned by Trump arrested on child sex
               | charge"_
               | 
               | https://www.texastribune.org/2025/02/06/arrest-trump-
               | pardon-...
               | 
               |  _" The crimes include plotting murder of FBI agents,
               | child sexual assault, possession of child sexual abuse
               | material and reckless homicide while driving drunk"_
               | 
               | https://www.citizensforethics.org/reports-
               | investigations/cre...
        
           | tedsanders wrote:
           | I think it's important to not let valid criticisms of
           | implausibly short AGI timelines cloud our judgments of AGI's
           | potential impact. Compared to babies born today, AGI that's
           | actually AGI may have many advantages:
           | 
           | - Faster reading and writing speed
           | 
           | - Ability to make copies of the most productive workers
           | 
           | - No old age
           | 
           | - No need to sleep
           | 
           | - No need to worry about severance and welfare and human
           | rights and breaks and worker safety
           | 
           | - Can be scaled up and scaled down and redeployed much more
           | quickly
           | 
           | - Potentially lower cost, especially with adaptive compute
           | 
           | - Potentially high processing speed
           | 
           | Even if AGI has downsides compared to human labor, it might
           | also have advantages that lead to widespread deployment.
           | 
           | Like, if I had an employee with low IQ, but this employee
           | could work 24 hours around the clock learning and practicing,
           | and they could work for 200 years straight without aging, and
           | they could make parallel copies of themselves, surely there
           | would have to be some tasks at which they're going to
           | outperform humans, right?
        
       | lherron wrote:
       | Honestly this article sounds like someone is unhappy that AI
       | isn't being deployed/developed "the way I feel it should be
       | done".
       | 
       | Talent changing companies is bad. Companies making money to pay
       | for the next training run is bad. Consumers getting products they
       | want is bad.
       | 
       | In the author's view, AI should be advanced in a research lab by
       | altruistic researchers and given directly to other altruistic
       | researchers to advance humanity. It definitely shouldn't be used
       | by us common folk for fun and personal productivity.
        
         | hexage1814 wrote:
         | This. The whining about VEO 3 and "AI being used to create
         | addictive products" really shows that. It's a text-to-video
         | technology. There is nothing the company can do if people use
         | it to generate "low quality content", in the same way that
         | internet companies aren't at fault that large amounts of the
         | web are scams or similar junk.
        
         | lightbulbish wrote:
         | I feel I could argue the counterpoint. Hijacking the pathways
         | of the human brain that lead to addictive behaviour has the
         | potential to utterly ruin people's lives. And so talking about
         | it, if you have good intentions, seems like a thing anyone
         | with their heart in the right place would do.
         | 
         | Take VEO3 and YouTube integration as an example:
         | 
         | Google made VEO3 and YouTube has shorts and are aware of the
         | data that shows addictive behaviour (i.e. a person sitting down
         | at 11pm, sitting up doing shorts for 3 hours, and then having 5
         | hours of sleep, before doing shorts on the bus on the way to
         | work) - I am sure there are other negative patterns, but this
         | is one I can confirm from a friend.
         | 
         | If you have data that shows your other distribution platforms
         | are being used to an excessive degree, and you create a
         | powerful new AI content generator, is that good for the users?
        
           | Ray20 wrote:
           | The fact is that not all people exhibit the described
           | behavior. So the actions of corporations cannot be considered
           | unambiguously bad. For example, it will help to cleanse the
           | human gene pool of genes responsible for addictive behavior.
        
       | computerphage wrote:
       | > This is purely an observation: You only jump ship in the middle
       | of a conquest if either all ships are arriving at the same time
       | (unlikely) or neither is arriving at all. This means that no AI
       | lab is close to AGI. Their stated AGI timelines are "at the
       | latest, in a few years," but their revealed timelines are "it'll
       | happen at some indefinite time in the future."
       | 
       | This makes no sense to me at all. Is it a war metaphor? A race?
       | Why is there no reason to jump ship? Doesn't it make sense to try
       | to get on the fastest ship? Doesn't it make sense to diversify
       | your stock portfolio if you have doubts?
        
       | JunkDNA wrote:
       | I keep seeing this charge that AI companies have an "Uber
       | problem" meaning the business is heavily subsidized by VC. Is
       | there any analysis that has been done that explains how this
       | breaks down (training vs inference and what current pricing is)?
       | At least with Uber you had a cab fare as a benchmark. But what
       | should, for example, ChatGPT actually cost me per month without
       | the VC subsidy? How far off are we?
        
         | cratermoon wrote:
         | https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
        
           | JunkDNA wrote:
           | This article isn't particularly helpful. It focuses on a ton
           | of specific OpenAI business decisions that aren't necessarily
           | generalizable to the rest of the industry. OpenAI itself
           | might be out over its skis, but what I'm asking about is the
           | meta-accusation that AI in general is heavily subsidized.
           | When the music stops, what does the price of AI look like?
           | The going rate for chat bots like ChatGPT is $20/month. Does
           | that go to $40 a month? $400? $4,000?
        
             | handfuloflight wrote:
             | How much would OpenAI be burning per month if each monthly
             | active user cost them $40? $400? $4000?
             | 
             | The numbers would bankrupt them within weeks.
        
             | cratermoon wrote:
             | OK, how about another article that mentions the other big
             | players, including Anthropic, Microsoft, and Google.
             | https://www.wheresyoured.at/reality-check/
        
         | fragmede wrote:
         | It depends on how far behind you believe the model-available
         | LLMs are. If I can buy, say, $10k worth of hardware and run a
         | sufficiently equivalent LLM at home for the cost of that plus
         | electricity, and amortize that over say 5 years to get $2k/yr
         | plus electricity, and say you use it 40 hours a week for 50
         | weeks, for 2000 hours, gets you $1/hr plus electricity. That
         | electrical cost will vary depending on location, but let's just
         | handwave $1/hr (which should be high). So $2/hr vs ChatGPT's
         | $0.11/hr if you pay $20/month and use it 174 hours per month.
         | 
         | Feel free to challenge these numbers, but it's a starting
         | place. What's not accounted for is the cost of training
         | (compute time, but also employee and everything else), which
         | needs to be amortized over the length of time a model is used,
         | so ChatGPT's costs rise significantly, but they do have the
         | advantage that hardware is shared across multiple users.
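         | 
         | A rough sketch of that back-of-envelope math (all numbers are
         | the assumptions above, not measured costs):
         | 
         |     # Hypothetical local-vs-hosted cost comparison
         |     HARDWARE_COST = 10_000       # USD, one-time purchase
         |     YEARS = 5                    # amortization period
         |     HOURS_PER_YEAR = 40 * 50     # 40 h/week, 50 weeks/year
         |     ELECTRICITY_PER_HOUR = 1.00  # USD/h, deliberately high
         | 
         |     amortized = HARDWARE_COST / (YEARS * HOURS_PER_YEAR)
         |     local_per_hour = amortized + ELECTRICITY_PER_HOUR
         | 
         |     HOSTED_MONTHLY = 20          # USD/month subscription
         |     HOSTED_HOURS = 174           # hours of use per month
         |     hosted_per_hour = HOSTED_MONTHLY / HOSTED_HOURS
         | 
         |     print(f"local:  ${local_per_hour:.2f}/hr")   # ~$2.00/hr
         |     print(f"hosted: ${hosted_per_hour:.2f}/hr")  # ~$0.11/hr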
        
           | nbardy wrote:
           | These estimates are way off. Concurrent requests are near
           | free with the right serving infrastructure. On a fully
           | saturated node the cost per token is 1/100-1/1000 of that
           | price.
        
       | 4ndrewl wrote:
       | No-one authentically believes LLMs with whatever go-faster
       | stripes are a path to AGI, do they?
        
       | almostdeadguy wrote:
       | Very funny to re-title this to something less critical.
        
       | NickNaraghi wrote:
       | Point 1. could just as easily be explained by all of the labs
       | being very close, and wanting to jump ship to one that is closer,
       | or that gives you a better deal.
        
       | hamburga wrote:
       | > This reminds me of a paradox: The AI industry is concerned with
       | the alignment problem (how to make a super smart AI adhere to
       | human values and goals) while failing to align between and within
       | organizations and with the broader world. The bar they've set for
       | themselves is simply too high for the performance they're putting
       | out.
       | 
       | My argument is that it's _our_ job as consumers to align the AIs
       | to _our values_ (which are not all the same) via selection
       | pressure: https://muldoon.cloud/2025/05/22/alignment.html
        
       | joshdavham wrote:
       | > The AI industry oscillates between fear-mongering and
       | utopianism. In that dichotomy is hidden a subtle manipulation.
       | [...] They don't realize that panic doesn't prepare society but
       | paralyzes it instead, or that optimism doesn't reassure people
       | but feels like gaslighting. Worst of all, both messages serve the
       | same function: to justify accelerating AI deployment--either for
       | safety reasons or for capability reasons
       | 
       | This is a great point and also something I've become a bit
       | cynical about these last couple of months. I think the very
       | extreme and "bipolar" messaging around AI might be a bit more
       | dishonest than I originally (perhaps naively?) thought.
        
       | ninetyninenine wrote:
       | >If they truly believed we're at most five years from world-
       | transforming AI, they wouldn't be switching jobs, no matter how
       | large the pay bump (they're already affluent).
       | 
       | What ridiculous logic is this? To base the entire premise that
       | AGI is not imminent on job switching? How about basing it on
       | something more concrete?
       | 
       | How do people come up with such shaky foundations to support
       | their conclusions? It's obvious. They come up with the
       | conclusion first, then they find whatever they can to support
       | it. Unfortunately, if dubious logic is all that's available,
       | then that's what they will say.
        
       | hexage1814 wrote:
       | The author sounds like some generic knock-off version of Gary
       | Marcus. And the thing we least need in this world is another Gary
       | Marcus.
        
       | akomtu wrote:
       | The primary use case for AI-in-the-box is a superhuman CEO that
       | sees everything and makes no mistakes. As an investor you can
       | be sure that your money is multiplying at the highest rate
       | possible. However, as a self-serving investor you also want
       | your CEO to side-step any laws and ethics that stand in your
       | way, unless ignoring those laws would bring more trouble than
       | profit. All that while maintaining a facade of selfless
       | philanthropy for the public. For a reasonable price, your AI
       | CEO will be fine-tuned to serve your goals perfectly.
       | 
       | Remember that fine-tuning a well-behaved AI to do something as
       | simple as writing malware in C++ makes widespread changes in the
       | AI and turns it into a monstrosity. There was an HN post about
       | this recently: fine-tuning an aligned model produces broadly
       | misaligned results. So what do you think will happen when our AI
       | CEO gets fine-tuned to prioritize shareholder interests over
       | public interests?
        
       | TrackerFF wrote:
       | My question is this - once you achieve AGI, what moat do you
       | have, purely on the scientific part? Other than making the AGI
       | even more intelligent.
       | 
       | I see a lot of talk that the first company that achieves AGI
       | will also achieve market dominance. All other players will
       | crumble. But surely when someone achieves AGI, their competitors
       | will in all likelihood be following closely after. And once those
       | achieve AGI, academia will follow.
       | 
       | Point is, at some point AGI itself will become available to
       | everyone. The only things that will be out of reach for most
       | are compute - and probably other expensive things on the
       | infrastructure side.
       | 
       | Current AI funding seems to revolve around some sort of winner-
       | take-all scenario. Just keep throwing incredible amounts of money
       | at it, and hope that you've picked the winner. I'm just wondering
       | what the outcome will be if this thesis turns out wrong.
        
         | fragmede wrote:
         | Same thing that happened to pets.com or webvan.com and the rest
         | of the graveyard of failed companies. A bunch of investors lose
         | money, a bunch of market consolidation, employees get diluted
         | to worthlessness, chapter 7, chapter 11. The free ride of
         | today's equivalent of $1 Ubers will end. A glut of previously
         | very expensive hardware for cheap on eBay (though I doubt this
         | last point will happen since AGI is likely to be compute
         | intensive).
         | 
         | It's not going to be fun or easy, but as far as the financials
         | go, we were there in 2001.
         | 
         | The question is assuming we do get AGI, what the ramifications
         | of _that_ will be. Instead of hiring employees, a business can
         | spin up employees (and down) like a tech company can spin up
         | EC2 instances. Great for employers, terrible for employees.
         | 
         | That's a big "if" though.
        
         | imiric wrote:
         | > The only things that will be out of reach for most are
         | compute - and probably other expensive things on the
         | infrastructure side.
         | 
         |  _That_ is the moat. That, and training data.
         | 
         | Even today, compute and data are the only things that matter.
         | There is hardly any secret software sauce. This means that only
         | large corporations with a practically infinite amount of
         | resources to throw at the problem could potentially achieve
         | AGI. Other corporations would soon follow, of course, but the
         | landscape would be similar to what it is today.
         | 
         | This is all assuming that the current approaches can take us
         | there, of which I'm highly skeptical. But if there's a
         | breakthrough at some point, we would still see AI tightly
         | controlled by large corporations that offer it as a (very
         | expensive) service. Open source/weight alternatives would not
         | be able to compete, just like they don't today. Inference would
         | still require large amounts of compute only accessible to
         | companies, at least for a few years. The technology would be
         | truly accessible to everyone only once the required compute
         | becomes a commodity, and we're far away from that.
         | 
         | If none of this comes to pass, I suspect there will be an
         | industry-wide crash, and after a few years in the Trough of
         | Disillusionment, the technology would re-emerge with practical
         | applications that will benefit us in much more concrete and
         | subtle ways. Oh, but it will ruin all our media and
         | communication channels regardless, directly causing social
         | unrest and political regression, that much is certain. (:
        
       | Animats wrote:
       | _" A disturbing amount of effort goes into making AI tools
       | engaging rather than useful or productive."_
       | 
       | Right. It worked for social media monetization.
       | 
       |  _"... hallucinations ... "_
       | 
       | The elephant in the room. Until that problem is solved, AI
       | systems can't be trusted to _do_ anything on their own. The
       | solution the AI industry has settled on is to make
       | hallucinations an externality, like pollution. They're fine as
       | long as someone else pays for the mistakes.
       | 
       | LLMs have a similar problem to Level 2-3 self-driving cars. They
       | sort of do the right thing, but a human has to be poised to
       | quickly take over at all times. It took Waymo a decade to get
       | over that hump and reach level 4, but they did it.
        
         | cal85 wrote:
         | When you say "do anything on their own", what kind of things
         | do you mean?
        
           | Animats wrote:
           | Take actions which have consequences.
        
         | nunez wrote:
         | Waymo "did it" in very controlled environments, not in general.
         | They're still a ways away from solving self-driving in the
         | general case.
        
           | Animats wrote:
           | Los Angeles and San Francisco are not "very controlled
           | environments".
        
           | __loam wrote:
           | They've done over 70 million rider only miles on public
           | roads.
        
         | jasonsb wrote:
         | > The elephant in the room. Until that problem is solved, AI
         | systems can't be trusted to do anything on their own.
         | 
         | AI systems can be trusted to do most things on their own.
         | You can't trust them for actions with irreversible
         | consequences, but everything else is ok.
         | 
         | I can use them to write documents, code, create diagrams,
         | designs etc. I just need to verify the result, but that's 10%
         | of the actual work. I would say that 90% of modern day office
         | work can be done with the help of AI.
        
       | lightbulbish wrote:
       | Thanks for the read. I think it's a highly relevant article,
       | especially around the moral issues of making addictive products.
       | As a normal person in Swedish society, I feel social media,
       | shorts and reels in particular, has an addictive grip on many in
       | my vicinity.
       | 
       | And as a developer I can see similar patterns with AI prompts:
       | prompt, wait, win/lose, re-prompt. It is alluring and it
       | certainly feels.. rewarding when you get it right.
       | 
       | 1) I have been curious as to why so few people in Silicon
       | Valley seem to be concerned with, or even talk about, the good
       | of the products or the good of the company they join. Could
       | someone in the industry enlighten me: what are the conversations
       | in SV around
       | this issue? Do people care if they make an addictive product
       | which seems to impact people's lives negatively? Do the VCs?
       | 
       | 2) I appreciate the author's efforts in creating conversation
       | around this. What are ways one could try to help the efforts?
       | While I have no online following, I feel rather doomy and gloomy
       | about AI pushing more addictive usage patterns out into the
       | world, and would like to help if there is something suitable I
       | could do.
        
       | drillsteps5 wrote:
       | I can't speak intelligently about how close AGI really is (I do
       | not believe it is but I guess someone somehow somewhere might
       | come up with a brilliant idea that nobody thought of so far and
       | voila).
       | 
       | However I'm flabbergasted by the lack of attention to so-called
       | "hallucinations" (which is a misleading, I mean marketing, term
       | and we should be talking about errors or inaccuracies).
       | 
       | The problem is that we don't really know why LLMs work. I mean
       | you can run the inference and apply the formula and get output
       | from the given input, but you can't "explain" why the LLM
       | produced phrase A as an output instead of B, C, or N. There are
       | just too many parameters and computations to go through, and
       | the very concept of "explaining" or "understanding" might not
       | even apply here.
       | 
       | And if we can't understand how this thing works, we can't
       | understand why it doesn't work properly (produces wrong output)
       | and also don't know how to fix it.
       | 
       | And instead of talking about it and trying to find a solution
       | everybody moved on to the agents which are basically LLMs that
       | are empowered to perform complex actions IRL.
       | 
       | How does this make any sense to anybody? I feel like I'm crazy
       | or missing something important.
       | 
       | I get it, a lot of people are making a lot of money and a lot of
       | promises are being made. But this is absolutely fundamental issue
       | that is not that difficult to understand to anybody with a
       | working brain, and yet I am really not seeing any attention paid
       | to it whatsoever.
        
         | Bratmon wrote:
         | You can get use out of a hammer without understanding how the
         | strong force works.
         | 
         | You can get use out of an LLM without understanding how every
         | node works.
        
           | alganet wrote:
           | You can get injured by using a hammer without understanding
           | how it works.
           | 
           | You can damage a company by using a spreadsheet and not
           | understanding how it works.
           | 
           | In your personal opinion, what are the things you should know
           | before using an LLM?
        
           | drillsteps5 wrote:
           | Hammer is not a perfect analogy because of how simple it is,
           | but sure let's go with it.
           | 
           | Imagine that occasionally when getting in contact with the
            | nail it shatters to bits, or goes through the nail as if it
            | were liquid, or blows up, or does something else completely
           | unexpected. Wouldn't you want to fix it? And sure, it might
           | require deep understanding of the nature of the materials and
           | forces involved.
           | 
           | That's what I'd do.
        
         | dummydummy1234 wrote:
         | I guess a counter is that we don't need to understand how they
         | work to produce a useful output.
         | 
         | They are a magical black box magic 8 ball, that more likely
         | than not gives you the right answer. Maybe people can explain
         | the black box, and make the magic 8 ball more accurate.
         | 
         | But at the end of the day, with a very complex system it will
         | always be some level of black box unreliable magic 8 ball.
         | 
         | So the question then is how do you build a reliable system
         | from unreliable components. Because LLMs directly are
         | unreliable.
         | 
         | The answer to this is agents, i.e. feedback loops between
         | multiple LLM calls, which in isolation are unreliable, but in
         | aggregate approach reliability (a rough sketch of this loop is
         | below).
         | 
         | At the end of the day the bet on agents is a bet that the model
         | companies will not get a model that will magically be 100%
         | correct on the first try.
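         | 
         | A minimal sketch of that feedback loop, assuming hypothetical
         | call_llm and verify functions standing in for a model call and
         | an independent check (a test suite, a validator, another
         | model, etc.):
         | 
         |     def reliable_answer(call_llm, verify, prompt, tries=5):
         |         # call_llm and verify are passed in (hypothetical);
         |         # retry until the independent check passes.
         |         feedback = ""
         |         for _ in range(tries):
         |             draft = call_llm(prompt + feedback)
         |             ok, error = verify(draft)   # e.g. run tests
         |             if ok:
         |                 return draft
         |             feedback = "\nPrior attempt failed: " + str(error)
         |         raise RuntimeError("no acceptable answer in budget")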
        
         | Scarblac wrote:
         | LLM hallucinations aren't errors.
         | 
         | LLMs generate text based on weights in a model, and some of it
         | happens to be correct statements about the world. Doesn't mean
         | the rest is generated incorrectly.
        
       | Imnimo wrote:
       | I can at least understand "I am going to a different AGI company
       | because I think they are on a better track" but I cannot grasp "I
       | am leaving this AGI company to work on some narrow AI application
       | but I still totally believe AGI is right around the corner"
        
       ___________________________________________________________________
       (page generated 2025-07-05 23:01 UTC)