_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
(HTM) Visit Hacker News on the Web
COMMENT PAGE FOR:
(HTM) Arm AGI CPU
ThomIves wrote 11 hours 2 min ago:
Thanks for this article. Very important news.
kylehotchkiss wrote 16 hours 7 min ago:
"The I is for IPO" :D
maxekman wrote 22 hours 17 min ago:
What is "agentic AI cloud era" referring to? I honestly don't
know what this buzz-speak is targeting. Running models locally on the
server, for cloud workloads? Agentic, that is just an LLM pattern.
nananana9 wrote 21 hours 49 min ago:
Don't overthink it. Shut up and buy some ARM stock.
wang_pp8 wrote 23 hours 55 min ago:
If rich people are this stupid then they deserve to be parted with
their cash
pjmlp wrote 1 day ago:
For those wanting to know more about the software stack:
> Arm is actively collaborating with leading Linux distributions from
Canonical, Red Hat, and SUSE to ensure certified support for the
production systems.
Taken from
(HTM) [1]: https://developer.arm.com/community/arm-community-blogs/b/serv...
snvzz wrote 1 day ago:
ARM must be feeling the heat from all those RISC-V AI startups.
int0x29 wrote 1 day ago:
This looks like an existing, pre-planned product hastily rebranded as AI
KnuthIsGod wrote 1 day ago:
Today everything is AGI.
Yesterday everything was Agentic.
Everything was AI last week.
Waiting for AGI Agentic AI Crypto toilet paper to be on the supermarket
shelves, next to the superseded Object-oriented UML Rational Rose
tuna.
mattfrommars wrote 1 day ago:
Hmm, all my experience with using AI has been mostly about VRAM. I haven't
experienced any bottleneck on the CPU side. What does this chip offer
over Intel or Apple Silicon? Any experts here know what it is?
arrty88 wrote 1 day ago:
The Arm family of chips (Apple A series, M series, and Qualcomm
Snapdragon) is better on energy usage (thus battery life),
performance, and design compared to many x86-style chips (Intel, AMD).
Time will tell if Arm's own CPU is on par with or better than Apple's
Arm-based chips.
nektro wrote 1 day ago:
arm, what we want is an arm chip that can rival the M-series, not this
wmf wrote 1 day ago:
We have C1 Ultra at home.
rapatel0 wrote 1 day ago:
RISC-V will start making more waves now
mghackerlady wrote 1 day ago:
Yep, smart people will jump ship since having a competitor control
your product is not an amazing idea
creantum wrote 1 day ago:
Well that explains it, the guy in charge is a wad.
creantum wrote 1 day ago:
Agl? @gi? Heck, if we can't compete we'll confuse!
giancarlostoro wrote 1 day ago:
AGI will just become the new "Smart Phone" or "Smart Car" losing all
meaning.
bhewes wrote 1 day ago:
Yeah, dumb name, but we will still use these; we have been using Ampere
in our office.
rsynnott wrote 1 day ago:
I feel like this is one of the things that people will look back on as
marking the peak of the bubble.
Like, c'mon, this is ridiculous.
checker659 wrote 1 day ago:
Why?
varenc wrote 1 day ago:
"AGI" continues to lose all meaning.
zackmorris wrote 1 day ago:
It only took a quarter century, but I'm glad that somebody is finally
adding a little multicore competition since Moore's law began failing
in the mid-2000s.
I looked around a bit, and the going rate appears to be about $10,000
per 64 cores, or around $150 per core. Here is an Intel Xeon Platinum
8592+ 64 Core Processor with 61 billion transistors: [1] So that's
about 500 million transistors per dollar, or 1 billion transistors for
$2.
It looks like Arm's 136 core Neoverse V3 has between 150 and 200
billion transistors, so it should cost around $400. Each blade has 2 of
those chips, so should be around $800-1000 for compute. It doesn't say
how much memory the blades come with, but that's a secondary concern.
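A minimal sketch of that estimate chain in Python, taking the comment's figures (the $10,000 street price, the 500 million transistors-per-dollar rate, and the 150-200 billion transistor guess) as given rather than as official numbers:

    # Back-of-the-envelope cost chain using only the figures quoted above.
    street_price, cores = 10_000, 64
    per_core = street_price / cores             # ~$156 per core
    transistors_per_dollar = 500e6              # the comment's working rate
    v3_transistors = 200e9                      # upper end of the 150-200B guess
    chip_cost = v3_transistors / transistors_per_dollar
    blade_cost = 2 * chip_cost                  # two chips per blade
    print(per_core, chip_cost, blade_cost)      # 156.25 400.0 800.0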
Note that this is way too many cores for 1 bus, since by Amdahl's law,
more than about 4-8 cores per bus typically results in the remaining
cores getting wasted. Real-world performance will be bandwidth-limited,
so I would expect a blade to perform about the same as a 16-64 core
computer. But that depends on mesh topology, so maybe I'm wrong (AI
thinks I might be):
Intel Xeon Scalable: Switched from a Ring to a Mesh Architecture
starting with Skylake-SP to handle higher core counts.
Arm Neoverse V3 / AGI: Uses the Arm CMN-700 (Coherent Mesh Network),
which is a high-bandwidth 2D mesh designed specifically to link over
100 cores and multiple memory controllers.
I find all of this to be somewhat exhausting. We're long overdue for
modular transputers. I'm envisioning small boards with 4-16 cores
between 1-4 GHz and 1-16 GB of memory approaching $100 or less with
economies of scale. They would be stackable horizontally and
vertically, to easily create clusters with as many cores as one
desires. The cluster could appear to the user as an array of separate
computers, a single multicore computer running in a unified address
space, or various custom configurations. Then libraries could provide
APIs to run existing 3D, AI, tensor and similar SIMD code, since it's
trivial to run SIMD on MIMD but very challenging to run MIMD on SIMD.
This is similar to how we often see Lisp runtimes written in C/C++, but
never C/C++ runtimes written in Lisp.
It would have been unthinkable to design such a thing even a year ago,
but with the arrival of AI, that seems straightforward, even
pedestrian. If this design ever manifests, I do wonder how hard it
would be to get into a fab. It's a chicken and egg problem, because
people can't imagine a world that isn't compute-bound, just like they
couldn't imagine a world after the arrival of AI.
Edit: [2] has Arm AGI specs. Looks like it has DDR5-8800 (12x DDR5
channels) so that's just under 12 cores per bus, which actually aligns
well with Amdahl's law. Maybe Arm is building the transputer I always
wanted. I just wish prices were an order of magnitude lower so that we
could actually play around with this stuff.
(HTM) [1]: https://www.itcreations.com/product/144410
(HTM) [2]: https://news.ycombinator.com/item?id=47506641
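For the bandwidth half of that edit, a quick sanity check in Python (the 8-byte-per-transfer channel width is the standard DDR5 figure and an assumption here; the channel and core counts are from the linked specs):

    # DDR5-8800 across 12 channels, split evenly over 136 cores.
    channels, transfers_per_s, bytes_per_transfer = 12, 8800e6, 8
    aggregate = channels * transfers_per_s * bytes_per_transfer  # bytes/s
    per_core = aggregate / 136
    print(aggregate / 1e9, per_core / 1e9)   # ~844.8 GB/s total, ~6.2 GB/s per core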
pixelpoet wrote 1 day ago:
Amdahl's law is about the maximum speedup obtainable from
parallelism, not balancing memory bandwidth with compute.
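For reference, a minimal sketch of what Amdahl's law does bound, the speedup from the parallel fraction p of a workload, with no memory-bandwidth term anywhere in it:

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    # where p is the parallelizable fraction of the workload.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (8, 64, 136):
        print(n, round(speedup(0.95, n), 1))    # 5.9, 15.4, 17.5
    # Even 95%-parallel code can never exceed 1/(1-p) = 20x, however many cores.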
HeyMeco wrote 1 day ago:
The non-marketing-fluff version of the press release can be found here:
(HTM) [1]: https://news.ycombinator.com/item?id=47506641
JSR_FDED wrote 1 day ago:
This can't come fast enough, I'll finally be able to use CSS.
wewewedxfgdf wrote 1 day ago:
Seems like hubris to use this name.
moritzwarhier wrote 1 day ago:
I miss the all-capitals ARM spelling.
Seeing "Arm AGI" spelled out on a page with an "arm" logo looks
slightly cheesy.
But maybe it's actually a good fit for the societal revolution driven
by AGI, comparable to the one driven by the DOT.com RevoLut.Ion. (dot
com).
Anyways, it sounds like an A.R.M. branded version of the Apple Silicon
revolution?
But maybe that's just my shallow categorization.
lostmsu wrote 17 hours 59 min ago:
On a related note, I miss the LLaMA spelling.
als0 wrote 1 day ago:
I also miss the all-capitals ARM spelling. I think they've never been
the same since they changed it; around the same time, their
business strategy went from sensible to nonsense.
pixelpoet wrote 1 day ago:
It's an acronym (like Nasa), not an initialism (like the NSA). I
think it might be a British English thing.
MathMonkeyMan wrote 23 hours 34 min ago:
I'd say all acronyms are initialisms, but not the other way
around, and that fancy writing capitalizes them.
einpoklum wrote 1 day ago:
If I try to cut through the hype, it seems the main features of this
processor, or rather processor + memory controller + system
architecture, are < 100 ns for accessing anything in system memory and 6
GB/sec for each of a large-ish number of cores, so a (much?) higher
overall bandwidth than what we would see in a comparable Intel x86_64
machine.
Am I right or am I misunderstanding?
wmf wrote 1 day ago:
It's the same memory bandwidth as Intel and moderately higher than
AMD.
einpoklum wrote 1 day ago:
Even if you get the 136 cores or whatever?
adrian_b wrote 16 hours 11 min ago:
AMD's old CPUs (to be replaced by the end of the year) have 192
cores per socket, where each core is significantly faster than
Neoverse V3.
The latest Intel server CPU, Clearwater Forest, uses Darkmont
cores that have approximately the same performance, cost and
power consumption as Neoverse V3, but Intel provides 288 cores
per socket and 576 cores per board.
Even supposing that Intel Xeons would be used in relatively big
2U servers, that still provides at least 50% more cores per rack
than these new Arm AGI CPUs.
The claim of Arm that they provide better performance per rack is
false. They must have compared their new CPUs with some antique
Intel Granite Rapids Xeon CPUs, instead of comparing with
state-of-the-art Intel and AMD CPUs, which offer much more
performance per rack than the new Arm AGI.
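A rough sketch of the cores-per-rack arithmetic behind that claim (the 42U rack and dual-socket 2U chassis are illustrative assumptions, not vendor configurations):

    # Cores per 42U rack, assuming dual-socket 2U servers throughout.
    servers = 42 // 2                           # 21 servers per rack
    for name, cores_per_socket in [("Intel Clearwater Forest", 288),
                                   ("AMD 192c/socket", 192),
                                   ("Arm Neoverse V3", 136)]:
        print(name, servers * 2 * cores_per_socket)
    # -> 12096, 8064, 5712: consistent with "at least 50% more cores per rack".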
oxag3n wrote 1 day ago:
Why not ASI? They aim too low.
vsgherzi wrote 1 day ago:
is this a cpu that's meant for AI training or is it more for serving
inference? I don't quite get why I would want to buy an arm CPU over an
nvidia GPU for ai applications.
Azantys wrote 1 day ago:
It is for orchestrating inference/creating Firecracker instances for
agents etc. It doesn't have anything to do with actual AI usage.
vsgherzi wrote 1 day ago:
Interesting thanks
tombert wrote 1 day ago:
The name of this CPU is bordering on securities fraud. When people see
the term "AGI" now, they are assuming "Artificial General
Intelligence", not "Agentic AI Infrastructure".
Of course people don't realize that, and people will buy ARM stock
thinking they've cracked AGI. The people running Arm absolutely know
this, so this name is what we in the industry call a "lie".
Zopieux wrote 12 hours 57 min ago:
On the contrary, I love that companies are semantically overloading
this stupid concept (purposefully or not) which is 100% hype
marketing.
I don't understand why this label is still a thing in the current
discourse, and I hope such moves will finally help people and the
industry move on.
rayiner wrote 19 hours 5 min ago:
Can you imagine being an engineer and working hard to create
something new and cool, and some jackass in marketing slaps the name
"AGI CPU" on it?
fidotron wrote 19 hours 15 min ago:
An unappreciated aspect of Arm is they really were the Robin Saxby
show. [1] Whichever ISA had him selling it was going to win.
While AArch64 represents the technical revolution they needed, their
business compass has just been gone ever since he stepped down. This
grimy stuff, and as others noted competing with your own customers,
were no-gos in the earlier era.
(HTM) [1]: https://en.wikipedia.org/wiki/Robin_Saxby
andsoitis wrote 20 hours 8 min ago:
> The name of this CPU is bordering on securities fraud.
No. For it to be securities fraud, Arm would need to make a
materially false statement of fact that misleads investors. Naming
the CPU in this way doesn't clear the bar because:
a) the name is clearly a product brand (similar to how macOS Lion, or
Microsoft Windows, or Ford Mustang, or Yves Saint Laurent Black Opium
don't mean literally what they say)
b) Arm explicitly defines it as silicon "designed to power the next
generation of AI infrastructure", with the technical specs fully
disclosed
c) sophisticated investors, the relevant standard for securities
fraud, can read a spec sheet
d) Arm's EVP said "We think that the CPU is going to be fundamental
to ultimately achieving AGI", framing it as a contribution towards AGI,
not AGI itself
croon wrote 20 hours 2 min ago:
I was on board with A through C, but then with D it's either
clearly a lie or stupidity. I guess it's not a lie technically if
they believe it though, so the latter then. But I also don't want
to assume someone in their position to be stupid, so then I'm back
to the former.
andsoitis wrote 19 hours 59 min ago:
So D undermines A - C in your mind? That doesn't make sense.
croon wrote 18 hours 37 min ago:
Huge IANAL disclaimer to start, but your post started off with:
> No. For it to be securities fraud, Arm would need to make a
materially false statement of fact that misleads investors.
Naming the CPU in this way doesn't clear the bar because:...
The EVP statement doesn't say "our CPU does AGI", sure, but is
it unfair to suggest it makes some form of AGI claim, which
isn't there from the naming alone?
It's no longer your point A) "clearly a product brand" if the
established usage of the term "AGI" comes out of the EVP's
mouth.
And yes, their (albeit very vague) claim is clearly wrong IMHO.
nixass wrote 20 hours 49 min ago:
I'm "people" and AGI means nothing to me
_3u10 wrote 1 day ago:
I thought they were adding support for AGI slots
bluegatty wrote 1 day ago:
People buying these kinds of chips will know. AGI is barely a popular
concept. Nobody in my family knows what it means.
boxedemp wrote 1 day ago:
It's HD and AI and 5G and all that
jasonvorhe wrote 1 day ago:
If people can't do the most basic due diligence, as in reading up on
the stuff they invest in using Wikipedia or a search engine, best of
luck to them.
dakolli wrote 1 day ago:
If this headline led you to believe that ARM has somehow cracked
AGI, you deserve to lose your money.
imtringued wrote 1 day ago:
ARM has cracked Agentic AI infrastructure. What are you on about?
AGI is a solved problem. The next generation models will have AGI
capabilities.
dakolli wrote 19 hours 0 min ago:
I really hope this is satire. If not, please see a psychiatrist
usrusr wrote 1 day ago:
Do you think that we should live in a world where investors who buy
on a comical misinterpretation of an acronym are protected from their
naivety?
Why isn't there a minority shareholder lawsuit on the news because
someone bought MSFT not realizing that Copilot isn't actually
certified to fly an airliner? A certain type of people would likely
just buy MSFT on a massive lever and then, if the bet fails to work
out, sue while pretending that they did not understand.
tombert wrote 1 day ago:
You're being purposefully obtuse.
People have been hearing for the last three years about how a
specific acronym, "AGI", is the final frontier of artificial
intelligence and how it's going to change the entire economy around
it. They've been hearing about this quasi-theoretical, very
specific thing, and a lot of them don't even know what the "G"
stands for.
People haven't been hearing for years about a mythical "copilot",
and as such I think people are much more likely to think it's not
anything more than a cute nickname.
Are you suggesting that this is just a coincidence? The acronym AGI
doesn't even make sense for Agentic AI Infrastructure, which should
be AAII; they're clearly calling it AGI to mislead people. I refuse
to think that the people running Arm are so stupid that they didn't
even Google the acronym before releasing the chip.
You think it's a "comical misinterpretation", but I don't think it
is. When I saw the article, I thought "shit; did they manage to
crack AGI?", and I clicked the article and was disappointed. I
suspect a lot of people aren't even going to read the press
release.
giancarlostoro wrote 1 day ago:
It's just going the way of "Smartphone" and "Smart Car"; they'll
market it as such to get people riled up about it. Consumers will eat
it up. I'm sure Scam Altman is ready to show us "AGI" next too. If
ARM is making AGI's meaning shift to a CPU descriptor, anyone can
call their tech "AGI" by just using their chips.
groby_b wrote 1 day ago:
Honestly: The people who buy stock because a product says "AGI" in
the name deserve to lose their shirt.
And no, it's not "a lie", because only an utter idiot would consider
a product name an actual fact. It's a name. The Hopper GPUs also
didn't ship with a lifesize cutout of Grace Hopper.
tombert wrote 1 day ago:
No, it's actually a lie, and it's different than the Hopper GPU you
mentioned.
People have been seeing every big AI company talk about how AGI is
the holy grail of AI, and how they're all trying to reach it. Arm
naming a chip AGI is clearly meant to make casual observers think
they cracked AGI.
The Hopper GPU isn't the same, because Nvidia isn't actively trying
to make people think that it includes a lifesize cutout of Grace
Hopper. Not a dig on her, but most people don't know who Grace
Hopper is; people haven't been hearing on the news for the last
several years about how having a Grace Hopper is going to make
every job irrelevant.
juleiie wrote 1 day ago:
If rich people are this stupid then they deserve to be parted with
their cash.
If you invest money so mindlessly that you don't even check what
you buy, then no legislation in the world will manage to protect you
from your own mind
tombert wrote 1 day ago:
It's not just rich people though. Most people (at least in the
US) have their retirements and the like in things like 401ks, tied
to some kind of index like the S&P 500. A company doing bullshit
to manipulate the stock affects pretty much anyone who uses an
index fund or ETF, which is pretty much everyone in the US.
elictronic wrote 1 day ago:
You invest in index funds and ETFs so your money averages out and
you don't get impacted by a single company's stupidity.
tombert wrote 17 hours 29 min ago:
No, the impact is lessened, but there can still be an impact
from an individual company's stupidity.
rootbear wrote 1 day ago:
This sort of thing really bugs me! Marketing departments appropriate
an existing term and use it in some new, often deceptive way. This
goes all the way back to when IBM released "The IBM Personal
Computer", at a time when "personal computer" was a category
name. Then Microsoft released Windows, when "windows" was a
generic term for windowing systems. Intel did it with their
"core" architecture. The list goes on.
(Disclosure: I am a casual investor in ARM.)
imglorp wrote 1 day ago:
The marketers did this for 5G also, calling their product 5G before
it was actually deployed, only because theirs came after 4G and they
wanted to ride the upcoming 5G buzz.
It seems marketing /depends/ on conflating terms and misleading
consumers. Shakespeare might have gotten it wrong with his quip about
lawyers.
(HTM) [1]: https://www.pbs.org/newshour/economy/att-to-drop-misleading-...
guerrilla wrote 1 day ago:
Yes, my wireless router has "5G WiFi" but only does 4G. I didn't
have a choice about using it since it comes from the provider, but
still stupid.
estimator7292 wrote 16 hours 21 min ago:
5G and 4G are not terms applied to WiFi. We have
802.11a/b/g/n/ac/ax and WiFi6/7
WiFi operates in the 2.4, 5, 6GHz bands, but those frequency
bands are not used to differentiate WiFi standards because you
can mix and match WiFi 6/7 on all three bands.
There are also more WiFi bands below 2.4 and above 6GHz, but
they're not common worldwide.
catlifeonmars wrote 21 hours 25 min ago:
What is 5G WiFi? Do you mean 5GHz WiFi?
sneak wrote 1 day ago:
Bill Hicks had some thoughts, too:
(HTM) [1]: https://youtube.com/watch?v=GaD8y-CGhMw
hnlmorg wrote 23 hours 29 min ago:
It's been a long, long time since I've heard that name come up
in conversation.
Thanks for the trip down memory lane.
mort96 wrote 1 day ago:
There was soooo much intentional disinformation around 5G. Everyone
who wanted to sell anything intentionally confused the >1Gbps
millimeter wave line-of-sight kind of 5G with the "4G but with some
changes to handle more devices connected to one tower" kind of 5G.
I wonder how many bought a "5G phone" expecting millimeter wave but
only got the slightly improved 4G.
catlifeonmars wrote 21 hours 27 min ago:
Wait til you search the term "6g".
bee_rider wrote 1 day ago:
This is mostly the standard's fault, right? Putting more
conventional wavelengths and the mm stuff together in one
standard was... a choice.
phire wrote 1 day ago:
From a standards design perspective, there is nothing wrong
with it. It's the same protocol running on two very different
frequency bands. They co-exist and support each other.
The problem is how marketing interacted with it.
0x3f wrote 1 day ago:
> Of course people don't realize that, and people will buy ARM stock
thinking they've cracked AGI.
Doesn't seem like a very credible assertion. Picking stocks in this
way would remove you from the market pretty quickly.
wiml wrote 1 day ago:
Yes, that's how fraud works a lot of the time. It removes you from
the market but not until after it's removed your money. And there's
an endless supply of new people ready to make the same mistake
after you've learned your lesson.
tombert wrote 1 day ago:
I didn't say it would be a wise decision to pick stocks that way,
but this has already happened: [1] .
Does an iced tea company changing their name to Long Blockchain
make any sense? No, not really, it's pretty stupid actually, but it
managed to bump the stock by apparently 380%.
The stock market can be pretty dumb sometimes. Let's not forget
the weird GME bubble.
(HTM) [1]: https://en.wikipedia.org/wiki/Long_Blockchain_Corp
PessimalDecimal wrote 1 day ago:
Didn't random companies add blockchain to their names only just a
few years ago and get 30+% jumps in stock price immediately?
bee_rider wrote 1 day ago:
That's quite different; BlockChain was a buzzword label for
existing tech. AGI is a label for something we famously haven't
achieved, and which would be revolutionary if we had.
This seems more like calling your spaceship company, I dunno,
"Interplanetary Passengers" or something.
serf wrote 1 day ago:
AGI is a buzzword too; it's just differently applied.
In this case it's a word that means the thing we're all
developing towards, apparently, but that no one actually knows
how to get or even how to measure whether or not we've already
gotten it, and no one really knows what will happen when it's
achieved, if it hasn't already been.
It's a bit like an even wackier more-corporate version of The
Quest for the Holy Grail.
And the honest one true test for "is it a buzzword?": Did a
corporate group brand a flagship with it?
"RISC architecture is going to change everything!"
0x3f wrote 1 day ago:
> Just because the stock goes up doesn't mean anyone was tricked.
People invest in sentiment, in momentum, in all kinds of second
order effects.
sincerely wrote 1 day ago:
Wouldn't those second order effects be downstream of the
first order effect of people being tricked?
elictronic wrote 1 day ago:
Run a trading bot looking for news feeds with specific terms.
Buy stocks based on this. Understand your fellow humans are
lazy and stupid. If you can't read past the first word of
a news article, maybe that person shouldn't be allowed to
trade stocks.
kergonath wrote 1 day ago:
AGI is a poorly-defined concept anyway. It's just vibes, nothing
descriptive.
chromoblob wrote 22 hours 47 min ago:
AGI is the automation of self-regulation of language
source: 100% personal certainty
LeifCarrotson wrote 1 day ago:
Those in the industry don't call it a lie, they call it "marketing".
It's those out of the industry who call them lies.
tombert wrote 1 day ago:
Touché. I guess I should have said "I call it a lie".
bhouston wrote 1 day ago:
If you showed someone 5 years ago what our computers can do with the
latest LLMs now, they would probably say it sure looks a lot like
AGI.
We have to keep defining AGI upwards or nitpick it to show that we
haven't achieved it.
I would argue that LLMs are actually smarter than the majority of
humans right now. LLMs do not have quite the agency that humans
have, but their intelligence is pretty decent.
We don't have clear ASI yet, but we definitely are in an AGI era.
I think we are missing ego/motivations in the AGI and them having
self-sufficiency independent of us, but that is just a bit of
engineering that would actually make them more dangerous; it isn't
really a significant scientific hurdle.
jltsiren wrote 1 day ago:
The problem with definitions is that they are all wrong when you
try to apply them outside mathematical models. Descriptive terms
are more useful than normative ones when you are dealing with the
real world. Their meaning naturally evolves when people understand
the topic better.
General intelligence, as a description, covers many aspects of
intelligence. I would say that the current AIs are almost but not
quite generally intelligent. They still have severe deficiencies in
learning and long-term memory. As a consequence, they tend to get
worse rather than better with experience. To work around those
deficiencies, people routinely discard the context and start over
with a fresh instance.
chromacity wrote 1 day ago:
> If you showed someone 5 years ago what our computers can do with
the latest LLMs now, they would probably say it sure looks a lot
like AGI.
But this is a CPU! It's not a GPU / TPU. Even if you think we've
achieved AGI, this is not where the matrix multiplication magic
happens. It's pure marketing hype.
IanCal wrote 1 day ago:
I did AI back before it was cool and I think we have AGI. IMO the
whole distinction was from extremely narrow AI to general
intelligence. A classifier for engine failure can only do that - a
route planner can only do that...
Now we have things I can ask a pretty arbitrary question and they
can answer it. Translate, understand nuance (the multitude of ways
of parsing sentences; getting sarcasm was an unsolved problem),
write code, go and read and find answers elsewhere, use tools...
these aren't one-trick ponies.
There are finer points to this where the level of autonomy or
learning over time may be important parts to you, but to me it was
the generality that was the important part. And I think we're
clearly there.
AGI doesn't have to be human level, and it doesn't have to be
equal to experts in every field all at once.
usrusr wrote 1 day ago:
An interesting perspective: general, absolutely, just nowhere
near superhuman in all kinds of tasks. Not even close to human in
many. But intelligent? No doubt, far beyond all not entirely
unrealistic expectations.
But that seems almost like an unavoidable trade-off. Fiction
about the old "AI means logic!" type of AI is full of thought
experiments where the logic imposes a limitation and those
fictional challenges appear to be just what the AI we have excels
at.
singpolyma3 wrote 1 day ago:
It doesn't look anything like AGI and no one who knows what that
means would be confused in any era.
Is it useful? Yes. Is it as smart as a person? Not even remotely.
It can't even remember things it already was told 5 minutes ago.
Sometimes even if they are still in the context window,
uncompacted!
IanCal wrote 1 day ago:
It doesn't need to be human level, and if I walk into a room
and forget why I went in, am I no longer a general intelligence?
singpolyma3 wrote 1 day ago:
If it doesn't need to be human level then what are we even
talking about? AGI means human level. Everything else is AI
IanCal wrote 22 hours 11 min ago:
No, the big thing with AGI was that it was general. AI things
we made were extremely narrow, identify things out of a set
of classes or route planning or something similarly specific.
We couldn't just hand the systems a new kind of task, often
even extremely similar ones. We've been making superhuman
level narrow AI things for many years, but for a long time
even extremely basic and restricted worlds still were beyond
what more general systems could do.
If LLMs are your first foray into what AI means and you were
used to the term ML for everything else I could see how you'd
think that, but AI for decades has referred to even very
simple systems.
singpolyma3 wrote 20 hours 27 min ago:
If AGI doesn't mean human level then what does? As you say,
every application of A* is in some way "AI", so we had this
idea of "AGI" for something "actually intelligent", but
maybe I'm wrong and AGI never meant that. What term does mean
that?
jen20 wrote 1 day ago:
> I would argue that LLMs are actually smarter than the majority of
humans right now
This (surprisingly common) view betrays a wild misunderstanding of
how LLMs work.
nananana9 wrote 1 day ago:
My definition of AGI hasn't changed - it's something that can
perform, or learn to perform, any intellectual task that a human
can.
5 years ago we thought that language was the be-all and end-all of
intelligence and treated it as the most impressive thing humans do.
We were wrong. We now have these models that are very good at
language, but still very bad at tasks that we wrongly considered
prerequisites for language.
Majromax wrote 1 day ago:
> My definition of AGI hasn't changed - it's something that can
perform, or learn to perform, any intellectual task that a human
can.
Wait, could you make your qualifiers specific here? Is your
definition of AGI that it be able to perform/learn any
intellectual task that is achievable by every human, or by any
human?
Those are almost incomparably different standards. For the
first, a nascent AGI would only need to perform a bit better than
a "profound intellectual disability" level. For the second, AGI
would need to be a real "Renaissance AGI," capable of advancing
the frontiers of thought in every discipline, but at the same
time every human would likely fail that bar.
svachalek wrote 1 day ago:
Your true average human is someone like your barista at
Starbucks. Try giving them a good math problem, or logic
puzzle, or leetcode problem if you need some reminding of the
standard reasoning capabilities of our species. LLMs cannot
beat the best humans at practically anything, but average
humans? Average humans are a much softer target than this
thread seems to think.
alpaca128 wrote 1 day ago:
And yet if you asked that barista if you should walk to the
car wash or take your car there, they would never respond
with "you should take a walk, it's healthier than driving"
like almost every LLM did in a test I saw.
That is as basic as everyday reasoning gets and any human in
modern society solves hundreds of problems like that every
day without even thinking about it, but with LLMs it's a
diceroll. Testing them with leetcode problems or logic
puzzles is not going to prove much unless you first made sure
none of those were in the training data to prevent pure
memorization.
jacquesm wrote 1 day ago:
I think it would be fairly easy to prove or disprove that 'AI
as it is today knows more about any subject than 99% of HN'.
But knowledge alone does not translate into intelligence and
that's the problem: we don't have a really hard definition of
what intelligence really is. There are many reasons for that
(such as that it would require us to reconsider some of our
past actions), but the fact remains.
So until we really once and for all nail down what
intelligence is, you get this god-of-the-gaps-like problem
where every time we find something that looks and feels truly
intelligent by yesterday's standards, that intelligence will
be crammed into a slightly smaller space excluding the thing
that just became possible.
The rate-of-change is a factor here. Arguably the current
rate of change is very high compared with two decades ago,
but compared to three years ago it feels as if we're already
leveling off and we're more focused on tooling and
infrastructure than on intelligence itself.
Intelligence may not actually have a proper definition at
all, it seems to be an emergent phenomenon rather than
something that you engineer for and there may well be many
pathways to intelligence and many different kinds of
intelligence.
What gets me about AI so far is that it can be amazing one
minute and so incredibly stupid the next that it is cringe
worthy. It gives me an idiot/savant kind of vibe rather than
that it feels like an actual intelligent party. If it were
really intelligent I would expect it to be able to learn as
much or more from the interaction and to be able to have a
conversation with one party where it learns something useful
to then be able to immediately apply that new bit of
knowledge in all the other ones.
Humans don't need to be taught the same facts over and over
again, though it may help with long term retention. We are
able to reason about things based on very limited information
and while we get stuff wrong - and frequently so - we usually
also know quite precisely where the limits of our knowledge
are, even if we don't always act like it.
To me it is one of those 'I'll know it when I see it' things,
and without insulting anybody, including the baristas at
Starbucks, I think it is perfectly possible to have a
discussion about this and to accept that average humans all
have different skills and specialties and that some people
work at Starbucks because they want to and others because
they have to, it does not say anything per-se about their
intelligence or lack thereof. At the same time you can be IQ
140 but still dumber than a Starbucks barista on what it
takes to make someone feel comfortable and how to make
coffee.
fc417fc802 wrote 1 day ago:
We seem to largely agree but I wanted to respond to this
one bit:
> you get this god-of-the-gaps-like problem where every time
we find something that looks and feels truly intelligent by
yesterday's standards that intelligence will be crammed
into a slightly smaller space excluding the thing that just
became possible.
It's important to distinguish between "AI" and "AGI" here.
I haven't seen many objections that the frontier models of
the past year or so don't qualify as AI (whatever that
might or might not mean) and the ones I have seen don't
seem to hold much water.
However there's a constant stream of bogus claims
presenting some new feat as "AGI" upon which each time we
collectively stop and revise our working definition to
close the latest loophole for something that is very
obviously not AGI. Thus IMO legal loophole is a more
fitting description than god of the gaps.
I do think we're nearing human level in general and have
already exceeded it in specific tightly constrained domains
but I don't think that was ever the common understanding of
AGI. Go watch 80s movies and they've got humanoid robots
walking around doing freeform housework while chatting with
the homeowner. Meanwhile transferring dirty laundry from a
hamper to the drum remains a cutting edge research problem
for us, let alone wielding kitchen knives or handling
things on the stovetop.
singpolyma3 wrote 1 day ago:
Completely disagree. Inability to handle specific math or CS
is a matter of training and experience, not reasoning and
intelligence. The barista is quite capable of reasoning and
learning feats the LLMs aren't close to.
tombert wrote 1 day ago:
Yeah, there appears to be this idea that "being smart" is
the same thing as "knowing facts", which I don't think is
realistic.
I know plenty of people who are considerably smarter than
me, but don't know nearly as much as I do about computer
science or obscure 90's video game trivia. Just because I
know more facts than they do (at least in this very limited
scope) doesn't mean that they're less capable of learning
than I am.
As you said, a barista is very likely able to reason about
and learn new things, which is not something an LLM can
really do.
chromoblob wrote 22 hours 43 min ago:
it's a matter of knowing the most practically important
facts
root_axis wrote 1 day ago:
> If you showed someone 5 years ago what our computers can do with
the latest LLMs now, they would probably say it sure looks a lot
like AGI.
Would they? Perhaps if you only showed them glossy demos that
obscure all the ways in which LLMs fail catastrophically and are
very obviously nowhere even close to AGI.
Certainly, they wouldn't expect that an AI able to score 150 on an
IQ test is unable to play a casual game of chess because it isn't
coherent enough to play without making illegal moves.
bykhun wrote 1 day ago:
> Certainly, they wouldn't expect that an AI able to score 150 on
an IQ test is unable to play a casual game of chess because it
isn't coherent enough to play without making illegal moves.
To be fair, I am pretty sure Claude Code will download and run
Stockfish if you task it to play chess with you. It's not like a
human who read 100 books about chess, but never played, would be
able to play well with their eyes closed while someone whispers
the board position into their ear.
root_axis wrote 1 day ago:
There are a lot of problems with this analogy, but even if you
were to take a photo of the board after every move and send it
to the model, it would still be unable to play competently.
dubcanada wrote 1 day ago:
A human can think logically with reason; that's not to say they are
smart or smarter. But LLMs cannot. You can convince an LLM anything
is correct and it will believe you. You can't convince a human
anything is correct.
I can't argue that LLMs do not know an absolutely insane amount of
information about everything. But you can't just say LLMs are
smarter than most humans. We've already decided that smartness is
not about how much data you know, but thinking about that data with
logical reasoning. Including the fact it may or may not be true.
I can run an LLM through absolutely incorrect data, and tell it that
data is 100% true. Then ask it questions about that data and get
those incorrect results as answers. That's not easy to do with
humans.
hex4def6 wrote 1 day ago:
That just implies LLMs are suggestible. The same is true of
children. As we get older and build a more complete world model
in our heads, it's harder to get us to believe things which go
against that model.
Tell a 5-yr old about Santa, and they will believe it sincerely.
Do the same with a 30-year old immigrant who has never heard of
Santa, and I suspect you'll have a harder time.
That's not because the 5-year old is dumber, but just because
their life-experience ("training data") is much more limited.
Even so, trying to convince a modern LLM of something ridiculous
is getting harder. I invite you to try telling ChatGPT or Gemini
that the president died a week ago and was replaced by a
body-double facsimile until January 2027, so that Vance can have
a full term. I suspect you'll have significant difficulty.
soperj wrote 1 day ago:
> Do the same with a 30-year old immigrant who has never heard
of Santa, and I suspect you'll have a harder time.
There's a plethora of people who convert to religion at an
older age, and that seems far more far fetched than Santa.
dahart wrote 1 day ago:
> There's a plethora of people who convert to religion at an
older age, and that seems far more far fetched than Santa.
Being in a religion doesn't imply belief in deities; it
only implies people want social connection. This is clearly
visible in global religion statistics; there are countries
where the majority of people identify as belonging to a
religion, and at the same time only a small minority state
they believe in a "God". Norway is a decent example that
I bumped into just yesterday.
(HTM) [1]: https://en.wikipedia.org/wiki/Religion_in_Norway
hex4def6 wrote 1 day ago:
Sure.
But I bet you'd have a significantly easier time converting a
child rather than a 30/40/50-yr old to a religion.
My point is that LLMs are suggestible, perhaps more so than
the average adult, but less so than a child, I suspect. I
don't think suggestibility really solves the problem of
whether something has AGI or not. To me, on the contrary, it
seems like to be intelligent and adaptable you need to be
able to modify your world model. How easily you are fooled is
a function of how mature / data-rich your existing world
model is.
rootusrootus wrote 1 day ago:
> LLMs are actually smarter than the majority of humans right now
I consider myself a bit of a misanthrope but this makes me an
optimist by comparison.
Even stupid people are waaaaaay smarter than any LLM.
The problem is the continued habit humans have of
anthropomorphizing computers that spit out pretty words. It's
like Eliza, only prettier. More useful for sure. Still just a
computer.
svachalek wrote 1 day ago:
I really feel like we have not encountered the same stupid
people. Most stupid people I know respond to every question with
some form of will-not-attempt. What's 74 times 2? Use a
calculator! Should I drive or walk to the car wash? Not my
problem! How many R's in strawberry? Who cares! They'll lose to
the LLM 100%.
spaqin wrote 1 day ago:
That's actually proving that they indeed are smarter than LLMs
- by choosing to not deal and waste time, water and energy on
useless benchmarks.
tombert wrote 1 day ago:
The cheapest Aliexpress calculator can multiply much bigger
numbers than I can in my head, and it can do it instantly.
Does that mean that the calculator is "smarter" than me?
bhouston wrote 1 day ago:
> Still just a computer.
I don't believe in a separation of mind and spirit. So I do
think fundamentally, outside of a reliance on quantum effects in
cognition (some have theorized this, but it isn't proven), its processes
can be replicated in a fashion in computers. So I think that
intelligence likely can be "just a computer" in theory and I
think we are in the era where this is now true.
tombert wrote 1 day ago:
I don't believe in "spirits" from the get go. I think it's
certainly theoretically possible that we could mimic human
thought with a computer (quantum or otherwise) but I do not
think that the LLMs we have now are doing that. I'd say that
what we have right now is "just a computer".
This doesn't mean they aren't useful, I like Claude a lot, but
I don't buy that it's AGI.
hermanzegerman wrote 1 day ago:
No, they aren't.
ChatGPT Health failed hilariously badly at just spotting emergencies.
A few weeks ago most of them failed hilariously badly at the question
of whether you should drive or walk to the service station if you
want to wash your car.
xp84 wrote 1 day ago:
Idk about the health story, but in my use, ChatGPT has
dramatically improved my understanding of my health issues and
given sound and careful advice.
The second question sounds like a useless and artificial metric
to judge on. The average person might miss such a "gotcha"
logical quiz too, for the same reason - because they expect to be
asked "is it walking distance."
No one has ever relied on anyone else's judgment, nor an AI, to
answer "should I bring my car to the carwash." Same for the
ol' "how many rocks shall I eat?" that people got the AI
Overview tricked with.
I'm not saying anything categorically "is AGI" but by
relying on jokes like this you're lying to yourself about
what's relevant.
foobiekr wrote 1 day ago:
I have been checking organic and inorganic chemistry skills in
ChatGPT Pro and it is absolutely, laughably bad. But it sounds
good and plausible while being comically wrong in so many ways.
Maybe you should think twice about whether the health issues
advice it is giving you is legitimate.
hermanzegerman wrote 1 day ago:
It gave dangerous shitty advice to patients in critical
conditions
(HTM) [1]: https://www.bmj.com/content/392/bmj.s438
bhouston wrote 1 day ago:
I would accuse you of nitpicking. My experience is that LLMs are
generally as smart as the average human 90+% of the time. A lack
of perfection, to me, doesn't mean it isn't AGI.
phkahler wrote 1 day ago:
>> My experience is that LLMs are generally as smart as the
average human 90+% of the time. A lack of perfection, to me, doesn't
mean it isn't AGI.
In my experience, they contain more information than any human
but they are actually quite stupid. Reasoning is not something
they do well at all. But even if I skip that, they can not
learn. Inference is separate from training, so they can not
learn new things other than trying to work with words in a
context window, and even then they will only be able to mimic
rather than extrapolate anything new.
It's not the lack of perfect, it's the lack of reasoning and
learning.
bhouston wrote 1 day ago:
I 100% agree that learning is missing. We make up for it in
SKILLS.md and README.md files and RAGs of various types. And
we train the LLMs to deal with these structures.
I've seen a lot of reasoning in the latest models while
engaging in agentic coding. It is often decent at debugging
and experimentation, but around 30% of the time it goes down wrong
paths and just adds unnecessary complexity via misdiagnoses.
flowardnut wrote 1 day ago:
"look, it completely lied about params that don't exist in a CLI!"
bhouston wrote 1 day ago:
AGI doesn't mean perfect. It means human-like, and the latest
models are pretty human-like in terms of their fallibility and
capabilities.
tombert wrote 1 day ago:
Ok, but it's not AGI. People five years ago would have been wrong.
People who don't have all the information are often wrong about
things.
ETA:
You updated your comment, which is fine but I wanted to reply to
your points.
> I would argue that LLMs are actually smarter than the majority of
humans right now. LLMs do not have quite the agency that humans
have, but their intelligence is pretty decent.
I would actually argue that they are decidedly not smarter than
even dumb humans right now. They're useful but they are glorified
text predictors. Yes, they have more individual facts memorized
than the average person but that's not the same thing; Wikipedia,
even before LLMs also had many more facts than the average person
but you wouldn't say that Wikipedia is "smarter" than a human
because that doesn't make sense.
Intelligence isn't just about memorizing facts, it's about
reasoning. The recent Esolang benchmarks indicate that these LLMs
are actually pretty bad at that.
> We don't have clear ASI yet, but we definitely are in an AGI era.
Nah, not really.
IanCal wrote 1 day ago:
> The recent Esolang benchmarks indicate that these LLMs are
actually pretty bad at that.
I'm really not sure how well a typical human would do writing
brainfuck. It'd take me a long time to write some pretty basic
things in a bunch of those languages, and I'm an SE.
tombert wrote 1 day ago:
Yes, but you also wouldn't need a corpus of hundreds of
thousands of projects to crib from. If it were truly able to
"reason" then conceivably it could look at a language spec, and
learn how to express things in terms of Brainfuck.
IanCal wrote 22 hours 18 min ago:
They did for some problems. If you gave me five iterations at
a problem like this in brainfuck:
> "Read a string S and produce its run-length encoding: for
each maximal block of identical characters, output the
character followed immediately by the length of the block as
a decimal integer. Concatenate all blocks and output the
resulting string.
I'd do absolutely awfully at it.
And to be clear that's not "five runs from scratch repeatedly
trying it" it's five iterations so at most five attempts at
writing the solution and seeing the results.
I'd also note that when they can iterate they get it right
much more than "n zero shot attempts" when they have feedback
from the output. That doesn't seem to correlate well with a
lack of reasoning to me.
Given new frameworks or libraries, they can absolutely
build things in them with some instructions or docs. So
they're not just outputting previously seen things; it's at
least much more pattern-based than word-based.
edit -
I play Clues by Sam, a logical reasoning puzzle. The
solutions are unlikely to be available online, and in this
benchmark the cutoff date for training seems to be before
this puzzle launched at all: [1]
Frankly, just watching them debug something makes it hard for me
to say there's no reasoning happening at all.
(HTM) [1]: https://www.nicksypteras.com/blog/cbs-benchmark.html
saganus wrote 1 day ago:
What does AGI look like in your opinion?
Personally, I've used LLMs to debug hard-to-track code issues and
AWS issues among other things.
Regardless of whether that was done via next-token prediction or
not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to
double-check everything, but at least it gave me solid starting
points to figure out said issues.
It would've taken me probably weeks to find out without LLMs,
instead of the 1 or 2 hours it did.
In that context, I have a hard time imagining what a "real"
AGI system would look like that isn't the current one.
Not saying current LLMs are unequivocally AGI, but they are darn
close for sure IMO.
root_axis wrote 1 day ago:
If we had AGI we wouldn't need to keep spending more and more
money to train these models, they could just solve arbitrary
problems through logic and deduction like any human. Instead,
the only way to make them good at something is to encode
millions of examples into text or find some other technique to
tune them automatically (e.g. verifiable reward modeling with
computer systems).
Why is it that LLMs could ace nearly every written test known
to man, but need specialized training in order to do things
like reliably type commands into a terminal or competently
navigate a computer? A truly intelligent system should be able
to 0-shot those types of tasks, or in the absolute worst case
1-shot them.
fc417fc802 wrote 1 day ago:
To add to this, previously one could argue that LLMs were on
par with somewhat less intelligent humans and it was (at
least I found) difficult to dispute. But now the frontier
models can custom tailor explanations of technical subjects
in the advanced undergraduate to graduate range.
Simultaneously, I regularly catch them making what for a
human of that level would be considered very odd errors in
reasoning. When questioned about these inconsistencies they
either display a hopeless lack of awareness or appear to
attempt to deflect. They're also entirely incapable of
learning from such an interaction. It feels like interacting
with an empty vessel that presents an illusion of
intelligence and produces genuinely useful output yet there's
nothing behind the curtain so to speak.
tombert wrote 1 day ago:
> What does AGI look like in your opinion?
Being able to actually reason about things without exabytes of
training data would be one thing. Hell, even with exabytes of
training data, doing actual reasoning for novel things that
aren't just regurgitating things from Github would be cool.
Being able to learn new things would be another. LLMs don't
learn; they're a pretrained model (it's in the name of GPT),
that send in inputs and get an output. RAGs are cool but
they're not really "learning", they're just eating a bit more
context in order to kind of give a facsimile of learning.
Going to the extreme of what you're saying, then `grep` would
be "darn close to AGI". If I couldn't grep through logs, it
might have taken me years to go through and find my errors or
understand a problem.
I think that they're ultimately very neat, but ultimately
pretty straightforward input-output functions.
adamsb6 wrote 1 day ago:
Why should implementation matter at all? You should be able
to classify a black box as AGI or not.
Well, I guess you lose "artificial" if there's a human brain
hidden in the box.
bhouston wrote 1 day ago:
> They're useful but they are glorified text predictors.
There is a long history of people arguing that intelligence is
actually the ability to predict accurately. [1]
> Intelligence isn't just about memorizing facts, it's about
reasoning.
Initially, LLMs were basically intuitive predictors, but with
chain of thought and more recently agentic experimentation, we do
have reasoning in our LLMs that is quite human-like.
That said, there is definitely a bias towards training set
material, but that is also the case with the large majority of
humans.
For the Esolang benchmarks, I would be curious how much adding a
SKILLS.md file for each language would boost performance?
I am pretty confident that we are in the AGI era. It is
unsettling and I think it gives people cognitive dissonance so we
want to deny it and nitpick it, etc.
(HTM) [1]: https://www.explainablestartup.com/2017/06/why-predictio...
Aloisius wrote 1 day ago:
> There is a long history of people arguing that intelligence
is actually the ability to predict accurately.
That page describes a few recent CS people in AI arguing
intelligence is being able to predict accurately, which is like
carpenters declaring all problems can be solved with a hammer.
AI "reasoning" is human-like in the sense that it is similar to
how humans communicate reasoning, but that's not how humans
mentally reason.
saltcured wrote 1 day ago:
Like my father before me, I seem to have absorbed an ability
to predict what comes next in movies and books. It's
sometimes a fun parlor trick to annoy people who actually get
genuine surprise out of these nearly deterministic plot
twists. But, a bit like with LLMs, it is a superficial
ability to follow the limited context that the writers' group
is seemingly forced by contract to maintain.
Like my father before me, I've also gotten old enough to
realize that some subset of people out there also behave like
they are scripted by the same writers' group and production
rules. I fear for the future where LLMs are on an equal
footing because we choose to mimic them.
tombert wrote 1 day ago:
> There is a long history of people arguing that intelligence
is actually the ability to predict accurately.
There sure is, and in psychological circles it appears that
there's an argument that that is not the case. [1]
> Initially, LLMs were basically intuitive predictors, but with
chain of thought and more recently agentic experimentation, we
do have reasoning in our LLMs that is quite human-like.
If you handwave the details away, then sure, it's very human-like,
though the reasoning models just kind of feed the dialog back to
themselves to get something more accurate. I use Claude Code like
everyone else, and it will get stuck on the strangest details that
humans actively wouldn't.
> For the Esolang benchmarks, I would be curious how much
adding a SKILLS.md file for each language would boost
performance?
Tough to say since I haven't done it, though I suspect it
wouldn't help much, since there's still basically no training
data for advanced programs in these languages.
> I am pretty confident that we are in the AGI era. It is
unsettling and I think it gives people cognitive dissonance so
we want to deny it and nitpick it, etc.
Even if you're right about this being the AGI era, that doesn't
mean that current models are AGI, at least not yet. It feels
like you're actively trying to handwave away details.
(HTM) [1]: https://gwern.net/doc/psychology/linguistics/2024-fedo...
bhouston wrote 1 day ago:
> though the reasoning models just kind of feed the dialog
back to themselves to get something more accurate.
Much of our reasoning is based on stimulating our sensory
organs, either via imagination (self-stimulation of our
visual system) or via subvocalization (self-stimulation of
our auditory system), etc.
> it will get stuck on the strangest details that humans
actively wouldn't.
It isn't a human. It is AGI, not HGI.
> It feels like you're actively trying to handwave away
details.
Maybe. I don't think so though.
Ucalegon wrote 1 day ago:
Marketing is marketing; nothing about it was ever about being factual
when there is a total addressable market to go after and dollars to
be made! This is in line with much of the other marketing that exists
in the AI space as it stands now, not to mention the use of AGI within
the space as it stands currently.
tombert wrote 1 day ago:
Sure, but there are plenty of cases where a deceptive name has been
considered enough to at least warrant an investigation: [1]
I'm not saying anything is going to happen; Arm Holdings has a lot
more money and lawyers than Long Blockchain did, but I'm just
saying that it's not weird to think that a deceptive name could be
considered false advertising.
(HTM) [1]: https://en.wikipedia.org/wiki/Long_Blockchain_Corp
Ucalegon wrote 1 day ago:
That would not hold up considering that they consistently use
'agentic' in their press release and make no mention of
'artificial general intelligence'. Just because two things have
the same acronym does not mean that they stand for the same
thing. Marketing being cheeky is not a crime.
imtringued wrote 1 day ago:
The AGI in "Arm AGI CPU" isn't an acronym and there is no
coincidence.
tombert wrote 1 day ago:
It's not "being cheeky". They know that the holy grail for AI
is AGI. They know that people are going to see the acronym AGI
and assume Artificial General Intelligence. They know that
people aren't going to read the full article.
This isn't just a crass joke or a pun, it's outright deception.
I'm not a lawyer, maybe it wouldn't hold up in court, but you
cannot convince me that they aren't doing this on purpose.
Ucalegon wrote 1 day ago:
Of course they did it on purpose, but that's not illegal. They
are not at fault for individuals not reading what the acronym
stands for and the intent that they place within the press
release, which is very, very clear. They are not obligated or
liable for others' lack of due diligence.
yencabulator wrote 12 hours 12 min ago:
They may not be criminally liable but they are at fault for
sure.
torginus wrote 1 day ago:
Considering AGI has been degraded into a generic feel-good marketing
word, I can't wait to get my AGI-scented deodorant.
lxgr wrote 21 hours 7 min ago:
Long Blockchain Corp. remembers. [1]
(HTM) [1]: https://en.wikipedia.org/wiki/Long_Blockchain_Corp
can16358p wrote 23 hours 41 min ago:
Buy it in combo with the good ol' Blockchain perfume!
lxgr wrote 21 hours 7 min ago:
You mean iced tea, right?
RcouF1uZ4gsC wrote 1 day ago:
Artificial Gut Incense?
bensyverson wrote 1 day ago:
You can already drink AGI! Oh sorry, AG1. The resemblance must be a
complete coincidence.
bogzz wrote 1 day ago:
Oh, is that what they're implementing in schools? No, wait, that
was A1, probably the sauce.
bigfishrunning wrote 13 hours 12 min ago:
A1? I think you mean Al, which is what you can call me
parl_match wrote 1 day ago:
> The resemblance must be a complete coincidence.
I don't know why so many people are willing to descend into
flippant, lazy conspiracy instead of doing a 7-second Google search
before making a claim.
AG1 was started in 2010 by a police officer from New Zealand, and
AG stands for Athletic Greens.
There is a fair amount of controversy around the company's
claims, so I suppose that is one symmetry between AG1 and AGI.
bensyverson wrote 1 day ago:
Not a conspiracy, and I know the history; it was just a joke. The
current branding sure looks like AGI if you're not looking
closely (or maybe I just read too much HN).
rsktaker wrote 1 day ago:
I laughed!
krogenx wrote 1 day ago:
Pretty sure in that case AG stands for Athletic Greens.
I think the name change also came before the AI hype.
BLKNSLVR wrote 1 day ago:
AGI: Attorney General Intelligence.
I believe Arm has probably cleared this very low bar.
SecretDreams wrote 1 day ago:
> I can't wait to get my AGI-scented deodorant.
Old spice for me, thanks!
BLKNSLVR wrote 1 day ago:
Old Spice, that's OG!
monegator wrote 1 day ago:
In case you haven't noticed, this whole thing has been a grift since
2022. It's kind of amazing that nobody thought of making AGI
processors before
alfalfasprout wrote 1 day ago:
The whole AI space is rife with much worse examples of what could be
considered securities fraud, tbh.
ahmedfromtunis wrote 1 day ago:
Poor TSMC (and ASML)! They were already struggling with capacity to
fulfill orders from their established customers. With ARM now joining
the party, I don't know how they're going to cope.
Edit: The new CPU will be built on the soon-to-be-former leading-edge
3nm process.
bigyabai wrote 1 day ago:
TSMC has multiple fabs being constructed, they'll be okay. The
biggest losers here are AMD, Intel and Apple who will be forced to
pay AI-hype prices to mass-produce boring consumer hardware.
myhf wrote 1 day ago:
finally, a CPU capable of making API calls to cloud providers
bobmcnamara wrote 1 day ago:
6 GB/s/core
That's... not much, right? Maybe it's meant to be multiplied across
N cores? But I really hope each individual core isn't limited to that.
Edit: 17 minutes to sum all of RAM?
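(For what it's worth, a back-of-envelope sketch of where a "17
minutes" figure could come from. The ~6 TB memory size is my
assumption to make the numbers work, not a spec-sheet value:)

    # Hypothetical arithmetic: one core limited to 6 GB/s of memory
    # bandwidth, streaming once through an assumed ~6 TB of RAM.
    per_core_bw = 6e9            # bytes/s for a single core (from the comment)
    total_ram = 6e12             # bytes, ~6 TB -- assumed configuration

    minutes = total_ram / per_core_bw / 60
    print(f"{minutes:.1f} min")  # ~16.7 minutes, roughly the "17 minutes" above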
wmf wrote 1 day ago:
It's a decent amount. Cloudflare was happy to hit 3.2 GB/s/core
yesterday. It is shared, so cores can burst higher.
jeffbee wrote 1 day ago:
It isn't obvious to me whether they intended this as the maximum
single-core bandwidth or just the proportional share of 844 GB/s
across 136 cores. Implementations of Neoverse V2 by Nvidia and Amazon
hit 20-30 GB/s in single-threaded work.
bt1a wrote 1 day ago:
Oh wow, already in use by Meta, OpenAI, and more?? [1] The TDP to
memory bandwidth and capacity ratio of these blades is in a class of
its own, yes?
(HTM) [1]: https://www.arm.com/products/cloud-datacenter/arm-agi-cpu/ecos...
DeathArrow wrote 1 day ago:
Now every product will have the AI buzzword in its name, just like 25
years ago, when product names started with the letter "e" for
"electronic".
So we will see AI Toilet Paper launching in the coming months.
torusle wrote 1 day ago:
ARM riding the "everything is AI" train.
So sad.
josemanuel wrote 1 day ago:
Interesting that Jensen Huang joined in the congratulations for this
new product!
foobiekr wrote 1 day ago:
More AI bullshit and hype is good for Nvidia. Until it isn't.
I no longer believe this is like the dotcom bubble. Now it feels like
the 1983 video game crash.
midnightdiesel wrote 1 day ago:
What a product name choice! I wasn't expecting ARM to pivot to
selling snake oil.
jeffbee wrote 1 day ago:
Many of these words are unexplained. "Memory and I/O on the same die".
Oh? What does this mean? All of the DRAM in the photo/render is still
on sticks. Do they mean the memory controller? Or is there an embedded
DRAM component?
ahoka wrote 1 day ago:
All processors have memory on the same die.
jeffbee wrote 1 day ago:
How much, what kind, and what is your source?
All mainstream server CPUs have a megabyte or two of SRAM on a
core, of course.
ahoka wrote 23 hours 22 min ago:
Exactly. :-)
twostorytower wrote 1 day ago:
And the stock is down >2% today
rafram wrote 1 day ago:
AGI (Agentic AI Infrastructure) is joining CSS (Compute Subsystems) in
their lineup, apparently. Who's naming this stuff?
LikesPwsh wrote 1 day ago:
The same people who abbreviate "generative" AI in a way that
misleadingly conflates it with "general" AI.
Fraud is just the default lifestyle of marketers.
LollipopYakuza wrote 1 day ago:
So Artificial General Intelligence and Cascading Style Sheets are not
joining forces?
lenerdenator wrote 1 day ago:
If there's ever a singularity as a result of AGI, it will likely
look at CSS and decide that extermination is simply too good for
the human race.
rafram wrote 1 day ago:
Always have been :)
aurareturn wrote 1 day ago:
This is just a Neoverse CPU that Arm will manufacture themselves at
TSMC and then sell directly to customers.
It isn't an "AI" CPU. There is nothing AI about it. There is nothing
about it that makes it more AI than Graviton, Epyc, Xeon, etc.
This was already revealed in the Qualcomm vs Arm lawsuit a few years
ago. Qualcomm accused Arm of planning to sell their CPUs directly
instead of just licensing. Arm's CEO at the time denied it. Qualcomm
ended up being right.
I wrote a post here on why Arm is doing this and why now:
(HTM) [1]: https://news.ycombinator.com/item?id=47032932
lostmsu wrote 18 hours 5 min ago:
It's worse, because there are actually integrated SoCs that include
NPUs, which I would say are real "AI accelerators".
jasoneckert wrote 1 day ago:
This was exactly my first thought when I saw the title. And after
reading the contents of the blog, it's pretty clear that ARM is
laser-focused on getting a piece of their customers' cake by competing
with them. This is likely why they are riding the AI hype train so
hard with their ill-suited name (AGI).
Unfortunately for them, I think hardware vendors will see past the
hype. They'll only buy the platform if it is very competitively
priced (i.e., much cheaper), since fortune favours long-lived
platforms and organizations like Apple and Qualcomm.
benob wrote 1 day ago:
This reminds me of Intel talking about faster web browsing with the
new Pentium
randusername wrote 19 hours 28 min ago:
A lot of this is happening.
The Dell marketing machine in particular is bludgeoning everyone
who will listen about Dell AI PCs. The implication that folks will
miss the boat on AI by not having a piddly NPU in their laptop is
silly.
OJFord wrote 1 day ago:
Ha, I wasn't old or into it enough at the time to remember that,
but it is consistent with just about every IC datasheet ever, with
their lists of possible applications. (Like: logic gate;
applications include Walkman, rocket ship, fuzzy-logic washing
machine, mobile phone, AGI co-processor, ...)
steve1977 wrote 1 day ago:
I think the interesting bit is actually this:
> For the first time in our more than 35-year history, Arm is
delivering its own silicon products
hgo wrote 23 hours 54 min ago:
Can this be read as the financial incentives to join the AI silicon
race finally becoming too tempting? That the incentives to sell
chips are now definitely stronger than the cost of competing with
your own licensees?
djmips wrote 1 day ago:
But really, how different is TSMC from VLSI making the ARM1? By your
logic, I would say that ARM has already delivered its own silicon
product.
steve1977 wrote 1 day ago:
Well, technically the ARM1 was an Acorn product (made by VLSI). ARM
as a company was only incorporated in 1990 (as a joint venture
between Acorn, VLSI and, drumroll, Apple); I guess that's where the
mentioned 35 years and "first time in our history" come from.
djmips wrote 18 hours 30 min ago:
The best kind of correct?
HerbManic wrote 1 day ago:
I can imagine a lot of ARM engineers, frustrated for decades at
seeing their cores used in stupid ways, finally getting to flex what
they can do (outside of Apple).
bigyabai wrote 1 day ago:
I can imagine many of those ARM engineers looking at Ampere's
product line and surmising that an "AGI" ARM server is like
building the Hindenburg 2.
wmf wrote 1 day ago:
Meta is a guaranteed customer though.
bigyabai wrote 1 day ago:
Oracle was a guaranteed Ampere customer and ended up giving
away the vCPUs for free.
joshstrange wrote 1 day ago:
Agreed, it will be _very_ interesting to see what waves this causes.
It would be like TSMC deciding to make and sell their own CPUs; ARM
is now directly competing with some of its clients.
jballanc wrote 1 day ago:
Eh, I'm not so sure it'll be that big a deal. The whole supply
chain is so twisted and tangled all the way up and down. Shuffling
out one piece doesn't seem like it will, on its own, be so major.
Samsung made the chips for the iPhone, then made their own phone,
then Apple designed their own chips made by TSMC, now Apple is
exploring the possibility of having Samsung make those chips again.
Also, it takes a willful ignorance of history for ARM to claim this
is the first time they've manufactured hardware. I mean, maaaaybe,
teeeeechnically that's true, but ARM was the Acorn RISC Machine,
and Acorn was in the hardware business...at least as much as Apple
was for the first iPhone.
steve1977 wrote 1 day ago:
As I mentioned in another comment, I guess when ARM refers to
itself, it means Arm Holdings plc and not Acorn Computers.
The two are of course very much related, but not the same
company.
spooshspan wrote 1 day ago:
Technically right is the best kind of right... right?
I don't think ARM Ltd have ever done a deal to deliver finished
chips to a customer for production use.
They've made test silicon and dev boards.
They designed arguably the first-ever SoC (for Acorn) in the form
of the ARM250, but Acorn bought the chips from VLSI, not ARM.
Not aware of an exception to this rule until now.
brcmthrowaway wrote 1 day ago:
Do they need to hire Design Verification engineers for this?
That's a huge cost compared to the average RTL jockey.
lizknope wrote 1 day ago:
ARM already had tons of DV engineers. No company would license
the RTL or any IP unless it has already been run through millions
of simulations in DV.
lenerdenator wrote 1 day ago:
What would be the real advantage of doing that?
SilverElfin wrote 1 day ago:
Calling this an "AGI CPU" just feels like the most out-of-touch,
terrible marketing possible. Maybe this is unfair, but it makes me
think ARM as a whole is incompetent just because it is so tasteless.
> Arm has additionally partnered with Supermicro on a liquid-cooled
200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.
Also, bad timing to brag about a partnership with Supermicro right
after a founder was indicted on charges of smuggling Nvidia GPUs.
Bizarre to mention them at all.
mkl wrote 1 day ago:
This is like naming your kid World President Smith.
rboyd wrote 1 day ago:
This could work. Right? [1] My realtor's last name is House
(HTM) [1]: https://psycnet.apa.org/record/2002-12744-001
tyushk wrote 1 day ago:
See also: Nominative determinism in hospital medicine, by the
orthopaedic surgeons Limb, Limb, Limb and Limb
(HTM) [1]: https://publishing.rcseng.ac.uk/doi/10.1308/147363515X1413...
hn_acc1 wrote 1 day ago:
Seems more likely this falls under the replication crisis umbrella.
My wife's favorite numbers are my birthday (mm-dd), which is a
small reason she fell in love with me. Neither of those numbers is
related to her birthday. My favorite number(s) do not overlap with
my birthday. Maybe my mm-dd values just aren't low enough, like
02-02?
tombert wrote 1 day ago:
My urologist, and I swear I'm not making this up, has the last name
"Wiener".
usefulcat wrote 12 hours 2 min ago:
I know of a dentist named Dr. Payne.
pixelpoet wrote 1 day ago:
Quite a coincidence, but how did you know he's Austrian?
rootbear wrote 1 day ago:
My friend M. Goode's father was a urologist named Dr. P. Goode.
For real.
technothrasher wrote 1 day ago:
> Studies 1-5 showed that people are disproportionately likely to
live in places whose names resemble their own first or last names
There are several cities in the US that share my last name. I
don't live near any of them.
> Study 6 extended this finding to birthday number preferences.
D'oh!
conductr wrote 1 day ago:
> Studies 1-5 showed that people are disproportionately likely to
live in places whose names resemble their own first or last names
(e.g., people named Louis are disproportionately likely to live in
St. Louis).
When I lived in Austin, it seemed like a third of the boys born
there were being named Austin. I presume many of them will end up
living there as adults, but not because of this particular bias;
being raised there and having family there seems the more likely
driver.
chrisweekly wrote 1 day ago:
"Nominative determinism" is everywhere once you look for it. My
vet's last name is McStay.
krrrh wrote 1 day ago:
I just listened to an interview with Carl Trueman about his new
book which criticizes transhumanism.
IshKebab wrote 1 day ago:
Reporting bias.
rvz wrote 1 day ago:
Meta are heavily invested in building their own chips with ARM to
reduce their reliance on Nvidia, as everyone is going after Nvidia's
data center revenues.
This is why Meta acquired a chip startup months ago: [1]
(HTM) [1]: https://www.reuters.com/business/meta-buy-chip-startup-rivos-a...
yabutlivnWoods wrote 1 day ago:
How fun would it be if, thanks to improved chips handling more model
state, RAM needs were reduced and Sama couldn't use all those RAM
purchases he booked?
A VC without a degree and with no grasp of hardware engineering
failed upward when all he had to do was noodle numbers in an Excel
sheet.
He is so far behind the hardware scene that he thinks it's sitting
still and that RAM requirements will be a nice linear path to AGI.
Not if new chips optimized for model streaming crater RAM needs.
Hilarious how last decade's software geniuses are being revealed as
incompetent finance engineers whose success was all due to ZIRP
offering endless runway.
mhjkl wrote 1 day ago:
What RAM? OpenAI booked the silicon wafers; they can print anything
they want on them. I wouldn't call them "far behind" on hardware when
OpenAI are actively buying Cerebras chips.
yabutlivnWoods wrote 1 day ago:
Yes, exactly; he is behind in that he has to buy others' chips with
little say in how they work.
Apple and Google control their own designs.
Sama is 100% an outsider, merely a customer. The chip insiders are
onto his effort to pivot out of meme-stock hyping into owning a
chunk of their fiefdom. They laughed off his claims a couple of
years ago as insane VC gibberish (third-hand paraphrase from my
social network in chip and hardware land).
No way he can pivot and print whatever. Relative to the hardware
industry, he is one of those programmers who can say just enough to
get an interview but whiffs the code challenge.
He has no idea where the bleeding edge is so he will just release
dated designs. Chip IP is a moat.
Plus, a bunch of RAM companies would be left hanging: no orders, no
wafers. Sama risks being Jimmy Hoffa'd, imploding the asset values
of other billionaires.
gtowey wrote 1 day ago:
The thing they are good at is bullshitting and selling hype, which,
as we see here, doesn't mean they are actually going to be good at
running a business. Smart leaders understand they are not omnipotent
and omniscient, so they surround themselves with people who know how
to get things done. Weak, narcissistic leaders think they're the
smartest one in the room and fail.
Unfortunately failing upwards is still somehow common, probably
because the skill of parting fools from their money is still
valuable.
thereitgoes456 wrote 1 day ago:
No, he is also good at networking. When OpenAI was mission-driven
and Sam was more respected, he could convince the most talented
people to work for him.
Now the talent is going to other places for a variety of reasons,
not all due to Sam (one of which is little room left for options to
grow). However, it's hard to believe his tanking reputation is not
badly hurting the company. Other than Jakub and Greg, I believe
there are not many top-tier people left; those in top positions are
there because they are yes-men to Sam.
papichulo2023 wrote 1 day ago:
What does "Built for rack-scale agentic efficiency" even mean?
nuker wrote 1 day ago:
How many angels can dance on the head of a pin?
(HTM) [1]: https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_t...
anizan wrote 1 day ago:
Lots of isolated Firecracker instances for openclaw-like agents.
throwa356262 wrote 1 day ago:
If you read past the marketing talk, this is basically a massively
multicore system (136 cores) with significantly reduced power usage
(300 W).
Where does "agentic" come into this? ARM's explanation is that future
agentic workloads will be both CPU- and GPU-bound, hence the need for
significant CPU efficiency.
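(For scale, a quick sketch of what those two headline figures work
out to per core; this is nothing more than dividing the numbers in
the comment:)

    # 136 cores sharing a 300 W power budget.
    cores = 136
    watts = 300
    print(f"{watts / cores:.2f} W/core")  # ~2.21 W per core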
thewebguyd wrote 1 day ago:
Big "but mongodb is web scale" vibes
inerte wrote 1 day ago:
It's volume of tokens consumed x number of agents x rack space.
Basically agentic computation density.
sdwvit wrote 1 day ago:
Translation: âCan you give us some money pretty please?â
otabdeveloper4 wrote 1 day ago:
It's when LLM agents are inefficient that you need a whole rack of
servers to get shit done.
r_lee wrote 1 day ago:
I was gonna say it's just big-DC marketing yap, but really, wtf does
that mean?
varispeed wrote 1 day ago:
It's a code phrase for "let's go to the utility room to
cross-pollinate ideas."
ray_v wrote 1 day ago:
We just say words now that sound good for marketing but have no real
meaning.
girvo wrote 1 day ago:
> now
I'd argue we have always done that, and in fact it's basically
the definition of marketing!
vova_hn2 wrote 1 day ago:
I found this article extremely frustrating to read. Maybe I lack some
required prior knowledge and I am not the target audience for this.
> built on the Arm Neoverse platform
What the heck is "Arm Neoverse"? No explanation given, and the link
leads to a website in Chinese. Using Firefox's translation tool
doesn't help much:
> Arm Neoverse delivers the best performance from the cloud to the edge
What? This is just a pile of buzzwords; it doesn't mean anything.
The article doesn't seem to contain any information on how much it
costs or any performance benchmarks to compare it with other CPUs. It's
all just marketing slop, basically.
adrian_b wrote 16 hours 1 min ago:
You should look at the benchmarks of the Cortex-X4 cores used in many
smartphones from 2 years ago, because it is the same core as the
Neoverse V3.
AWS Graviton5 uses the same cores, but it has 192 cores per socket.
So Graviton5 has more cores per socket, but I think it does not
support dual-socket boards.
This Arm AGI supports dual-socket boards, so it provides 272 cores
per board, more than Graviton5 motherboards.
However, this is puny in comparison with Intel Clearwater Forest,
which provides 576 cores per board, and the Intel Darkmont cores are
almost exactly equivalent in all characteristics to the Arm Neoverse
V3.
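(A small sketch of the cores-per-board arithmetic above. The 288
cores per Clearwater Forest socket is my assumption, back-solved
from the stated 576 per dual-socket board; the other figures are
from the comment:)

    # Cores per board, from the figures discussed above.
    arm_agi = 2 * 136      # 272 cores/board, dual socket
    graviton5 = 1 * 192    # 192 cores, single socket assumed
    clearwater = 2 * 288   # 576 cores/board; 288/socket is an assumption
    for name, n in [("Arm AGI", arm_agi), ("Graviton5", graviton5),
                    ("Clearwater Forest", clearwater)]:
        print(f"{name}: {n} cores per board")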
snek_case wrote 1 day ago:
I feel like this describes most products in the AI space lately. More
marketing fuzz than substance; hard to figure out what the thing
even does.
nicoburns wrote 1 day ago:
> The ARM Neoverse is a group of 64-bit ARM processor cores licensed
by Arm Holdings. The cores are intended for datacenter, edge
computing, and high-performance computing use. The group consists of
ARM Neoverse V-Series, ARM Neoverse N-Series, and ARM Neoverse
E-Series.
(HTM) [1]: https://en.wikipedia.org/wiki/ARM_Neoverse
adrian_b wrote 16 hours 8 min ago:
More precisely, this Neoverse V3 core is the server version of the
Cortex-X4 core from smartphones. The actual core is pretty much
identical, but the cache memories and the interfaces between cores
are different.
Neoverse V3 is also used in AWS Graviton5 and in several NVIDIA
products.
nurettin wrote 1 day ago:
I was wondering who convinced ARM to manufacture hardware. Turns out it
was Meta.
cmrdporcupine wrote 1 day ago:
Now if only they would go back to being "Acorn RISC Machines" and
make a nice desktop home computer again...
One can dream.
mghackerlady wrote 1 day ago:
I hate RISC OS architecturally, but if they made a new Archimedes
or whatever that ran it, I'd buy it.
wmf wrote 1 day ago:
DGX Spark is pretty nice. It could be cheaper if they removed the
NIC though.
cmrdporcupine wrote 1 day ago:
I have the ASUS variant. I like it well enough.
I see the NIC as a form of future-proofing, but we'll see.
My Ryzen 9 mini-PC from 2 years ago outperforms this thing in raw
CPU, though.
walterbell wrote 1 day ago:
Nuvia/Qualcomm lawsuit and Softbank.
redwood wrote 1 day ago:
Fabless, like AMD and Nvidia. So I would think of it more as
branding and packaging than manufacturing.
IshKebab wrote 1 day ago:
There's a big difference between just providing IP and actually
doing the physical design, manufacturing and packaging. You can't
just send your RTL to TSMC and magically get packaged chips back.
I haven't ever ordered an ARM SoC, but I also wouldn't be surprised
if there were significant parts that they left up to integrators
before: PLLs, pads, SRAM, etc.
anvuong wrote 1 day ago:
Huh, many companies use TSMC (in fact, probably all of them,
including Intel), yet only a few dominate in performance. There is
much more to designing chips than what you just listed.
i_am_a_peasant wrote 1 day ago:
Intel uses its own fabs for certain IP and TSMC for others, yeah. As
far as I've seen, the latest and greatest Panther Lake stuff is
made in Intel's Arizona fabs.
throwa356262 wrote 1 day ago:
AGI = Agentic AI Infrastructure
In case you were thinking about some other abbreviation...
Xunjin wrote 1 day ago:
I was thinking "Another Great Illusion".
kaszanka wrote 1 day ago:
Is it AGentIc ai infrastructure? Or AGentic aI infrastructure? Or
AGentic ai Infrastructure?
I expected better from the people who brought us the ARM
architecture, with A, R and M profiles.
ww520 wrote 1 day ago:
Should have called it A^3I^2 - Arm Agentic Artificial Intelligence
Infrastructure.
recursivecaveat wrote 1 day ago:
I'd throw in an Inference there for the AAAIII symmetry. At a
certain point it starts to just look like a scream haha.
bee_rider wrote 1 day ago:
It's like they decided to moon all the onlookers while jumping the
shark...
I don't know if it was intentional or they were so far out over
their skis that they got their bathing suit caught, but it's
impressive either way.
monegator wrote 1 day ago:
What lengths are they going to, just to say "we have achieved AGI"...
now who's moving the goalposts?
artyom wrote 1 day ago:
Not bait at all
conductr wrote 1 day ago:
Missed opportunity to call it AAII and market it as twice as powerful
as regular AI.
jayd16 wrote 1 day ago:
We put AI in our AI so the AI is already baked in.
conductr wrote 1 day ago:
AI hallucinates, AAII stutters
flopsamjetsam wrote 1 day ago:
A^2I^2 or (AI)^2
esafak wrote 1 day ago:
The coast is clear to come up with your own expansion for AI!
WhrRTheBaboons wrote 1 day ago:
I would've gone for Agentic Neural Infrastructure personally
ARMANI for short /s
SilverElfin wrote 1 day ago:
They pathetically don't mention what it stands for anywhere in this
press release. Deceptive marketing at worst, shameless AI-washing at
best.
charcircuit wrote 1 day ago:
AGI stands for Artificial General Intelligence.
monegator wrote 1 day ago:
No, it's Agenzia Giornalistica Italiana.
lock1 wrote 1 day ago:
Pretty sure it stands for "Artificial abbreviation & hype
GeneratIon" nowadays
hagbard_c wrote 1 day ago:
Are you sure it doesn't stand for Advanced Guessing Instrument?
That's what the results often seem to indicate, after all.
ux266478 wrote 1 day ago:
I think this is a poetic encapsulation of the AI industry at this
point. A beautifully poignant vignette.
Aerroon wrote 1 day ago:
It feels more like "blockchain" to me:
(HTM) [1]: https://www.cnbc.com/2017/12/21/long-island-iced-tea-micro...
hootz wrote 1 day ago:
What a terrible, terrible name.
lupajz wrote 1 day ago:
I mean, they could at least use AI to figure out how to name their AI
product.
embedding-shape wrote 1 day ago:
> I work at ARM, we're launching a new CPU optimized for LLM usage.
We're thinking of calling it "Arm Agentic AI Infrastructure CPU",
or "Arm AGI CPU" for short. Do you think this is a good idea?
> No. I would not use it as the product name. "AGI CPU" will be
read as artificial general intelligence, not "agentic AI
infrastructure," so it invites confusion and sounds hypey.
Too bad these executives seemingly don't have access to ChatGPT.
_ache_ wrote 1 day ago:
They did ask AI if AGI was a great name.
It said that it was the greatest name possible. It's bold,
aspirational, and... polarizing?!
Oh god! Mistral tells me it's highly polarizing, will generate buzz,
and is risky, but anyway people will know that ARM is doing CPUs
again now (maybe I put in too much context).
foolproofplan wrote 1 day ago:
Maybe they did, and that's why they got this slop?
RealityVoid wrote 1 day ago:
It's... really something. Not good. Something.
(DIR) <- back to front page