[HN Gopher] Sandia turns on brain-like storage-free supercomputer
___________________________________________________________________
Sandia turns on brain-like storage-free supercomputer
Author : rbanffy
Score : 135 points
Date : 2025-06-06 15:24 UTC (7 hours ago)
(HTM) web link (blocksandfiles.com)
(TXT) w3m dump (blocksandfiles.com)
| realo wrote:
| No storage? Wow!
|
| Oh... 138240 Terabytes of RAM.
|
| Ok.
| jonplackett wrote:
| Just don't turn it off I guess...
| rzzzt wrote:
| I hear Georges Leclanche is getting close to a sort of
| electro-chemical discovery for this conundrum.
| rbanffy wrote:
| At least not while it's computing something. It should be
| fine to turn it off after whatever results have been
| transferred to another computer.
| throwaway5752 wrote:
| I feel like there is a straightforward biological analogue
| for this.
|
| But in this case, one wouldn't be subject to macro-scale
| nonlinear effects arising from the uncertainty principle
| when trying to "restore" the system.
| crtasm wrote:
| >In Sandia's case, it has taken delivery of a 24 board, 175,000
| core system
|
| So a paltry 2,304 GB RAM
| SbEpUBz2 wrote:
| Am I reading it wrong, or does the math not add up?
| Shouldn't it be 138,240 GB, not 138,240 TB?
| divbzero wrote:
| You're right, OP got the math wrong. It should be:
| 1,440 boards x 96 GB/board = 138,240 GB
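The arithmetic above can be sanity-checked in a couple of lines (board counts and per-board RAM taken from the thread):

```python
# Figures from the thread: 96 GB of DRAM per SpiNNaker 2 board.
GB_PER_BOARD = 96

full_system_gb = 1_440 * GB_PER_BOARD  # full 1,440-board system
sandia_gb = 24 * GB_PER_BOARD          # Sandia's 24-board delivery

print(full_system_gb)  # 138240 -> GB, not TB as the article claimed
print(sandia_gb)       # 2304
```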
| CamperBob2 wrote:
| Either way, that doesn't exactly sound like a "storage-
| free" solution to me.
| louthy wrote:
| Just whatever you do, don't turn it off!
| Nevermark wrote:
| "What does this button do?" Bmmmfff.
|
| On the TRS-80 Model III, the reset button was a bright
| red recessed square to the right of the attached
| keyboard.
|
| It was irresistible to anyone who had no idea what you
| were doing as you worked, lost in the flow, insensitive
| to the presence of another human being, until...
|
| --
|
| Then there was the Kaypro. Many of their systems had a
| bug, software or hardware, that would occasionally cause
| an unplanned reset the first time you tried writing to
| the disk after you turned it on. Exactly the wrong
| moment.
| Footpost wrote:
| Well, since neuromorphic methods can show that 138240 = 0,
| should it come as a surprise that they enable blockchain on
| Mars?
|
| https://cointelegraph.com/news/neuromorphic-computing-breakt...
| shrubble wrote:
| You don't have to write anything down if you can keep it in your
| memory...
| timmg wrote:
| Doesn't give a lot of information about what this is for or how
| it works :/
| JKCalhoun wrote:
| Love to see a simulator where you can at least run a plodding
| version of some code.
| ymsodev wrote:
| https://arxiv.org/abs/2401.04491
| fasteddie31003 wrote:
| How much did this cost? I'd rather have CUDA cores.
| rbanffy wrote:
| Part of their job is to evaluate novel technologies. I find
| this quite exciting. CUDA is well understood. This is not.
| fintler wrote:
| They already have CUDA cores in production. This is a lab
| that's looking for the next big thing.
| bee_rider wrote:
| Sandia's business model is different from NVIDIA for sure.
| dedicate wrote:
| I feel like we're just trading one bottleneck for another here.
| So instead of slow storage, we now have a system that's hyper-
| sensitive to any interruption and probably requires a dedicated
| power plant to run.
|
| Cool experiment, but is this actually a practical path forward or
| just a dead end with a great headline? Someone convince me I'm
| wrong...
| JumpCrisscross wrote:
| > _we're just trading one bottleneck for another_
|
| If you have two systems with opposite bottlenecks you can build
| a composite system with the bottlenecks reduced.
| tokyolights2 wrote:
| Sandia National Labs is one of the few places in the country
| (on the planet?) doing blue-sky research. My first thought was
| similar to yours--If it doesn't have storage, what can I
| realistically even do with it!?
|
| But sometimes you just have to let the academics cook for a few
| decades and then something fantastical pops out the other end.
| If we ever make something that is truly AGI, its architecture
| is probably going to look more like this SpiNNaker machine than
| anything we are currently using.
| mipsISA69 wrote:
| This smells like VC-derived sentiment - that the only value
| is in identifying the be-all-end-all solution.
|
| There's plenty to learn from endeavors like this, even if this
| particular approach isn't the one that e.g. achieves AGI.
| isoprophlex wrote:
| > the SpiNNaker 2's highly parallel architecture has 48 SpiNNaker
| 2 chips per server board, each of which in turn carries 152 based
| cores and specialized accelerators.
|
| NVIDIA step up your game. Now I want to run stuff on based cores.
| marsten wrote:
| Interesting that they converged on a memory/network architecture
| similar to a rack of GPUs.
|
| - 152 cores per chip, equivalent to ~128 CUDA cores per SM
|
| - per-chip SRAM (20 MB) equivalent to SM high-speed shared memory
|
| - per-board DRAM (96 GB across 48 chips) equivalent to GPU global
| memory
|
| - boards networked together with something akin to NVLink
|
| I wonder if they use HBM for the DRAM, or do anything like
| coalescing memory accesses.
| patcon wrote:
| Whenever I hear about neuromorphic computing, I think about the
| guy who wrote this article, who was working in the field:
|
| Thermodynamic Computing
| https://knowm.org/thermodynamic-computing/
|
| It's the most high-influence, low-exposure essay I've ever read.
| As far as I'm concerned, this dude is a silent prescient genius
| working quietly for DARPA, and I had a sneak peek into future
| science when I read it. It's affected my thinking and trajectory
| for the past 8 years.
| evolextra wrote:
| Man, this article is incredible. So many ideas resonate with
| me that I could never formulate myself. Thanks for sharing;
| all my friends have to read this.
| epsilonic wrote:
| If you like this article, you'll probably enjoy reading most
| publications from the Santa Fe Institute.
| afarah1 wrote:
| Interesting read, more so than the OP. Thank you.
| iczero wrote:
| Isn't this just simulated annealing in hardware attached to a
| grandiose restatement of the second law of thermodynamics?
| pclmulqdq wrote:
| Yes. This keeps showing up in hardware engineering labs, and
| never holds up in real tasks.
| lo_zamoyski wrote:
| I will say that the philosophical remarks are pretty obtuse and
| detract from the post. For example...
|
| "Physics-and more broadly the pursuit of science-has been a
| remarkably successful methodology for understanding how the
| gears of reality turn. We really have no other methods-and
| based on humanity's success so far we have no reason to believe
| we need any."
|
| Physics, which is to say, physical methods have indeed been
| remarkably successful...for the types of things physical
| methods select for! To say it is exhaustive not only begs the
| question, but the claim itself is not even demonstrable by
| these methods.
|
| The second claim contains the same error, but with more
| emphasis. This is just off-the-shelf scientism, and scientism,
| quite apart from the withering refutations it has received,
| should be obviously self-refuting. Is the claim that "we have no other
| methods but physics" (where physics is the paradigmatic
| empirical science; substitute accordingly) a scientific claim?
| Obviously not. It is a philosophical claim. That already
| refutes the claim.
|
| Thus, philosophy has entered the chat, and this is no small
| concession.
| HarHarVeryFunny wrote:
| The original intent for this architecture was for modelling large
| spiking neural networks in real-time, although the hardware is
| really not that specialized - basically a bunch of ARM chips with
| high speed interconnect for message passing.
|
| It's interesting that the article doesn't say that's what it's
| actually going to be used for - just event-driven (message
| passing) simulations, with application to defense.
| Onavo wrote:
| Probably Ising models, phase transitions, condensed matter stuff,
| all to help make a bigger boom.
| colordrops wrote:
| > this work will explore how neuromorphic computing can be
| leveraged for the nation's nuclear deterrence missions.
|
| Wasn't that the plot of the movie War Games?
| bob1029 wrote:
| I question how viable these architectures are when considering
| that accurate simulation of a spiking neural network requires
| maintaining strict causality between spikes.
|
| If you don't handle effects in precisely the correct order, the
| simulation will be shaped more by architecture, network topology,
| and how race conditions resolve than by the model itself. We need
| to simulate the behavior of a spike preceding another spike in
| exactly the right way, or things like STDP will wildly misfire.
| The "online learning" promised land will turn into a slip & slide.
|
| A priority queue using a quaternary min-heap implementation is
| approximately the fastest way I've found to serialize spikes on
| typical hardware. This obviously isn't how it works in biology,
| but we are trying to simulate biology on a different substrate so
| we must make some compromises.
|
| I wouldn't argue that you couldn't achieve wild success in a
| distributed & more non-deterministic architecture, but I think it
| is a very difficult battle that should be fought after winning
| some easier ones.
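The serialization scheme described above can be sketched in a few lines. This is a hypothetical illustration using Python's `heapq` (a binary min-heap; the quaternary heap mentioned has the same interface, just a branching factor of 4). Keying events by `(time, sequence_number)` makes simultaneous spikes resolve in a fixed order rather than by race conditions:

```python
import heapq
import itertools

class SpikeQueue:
    """Deterministic spike event queue: pops events in time order,
    with insertion order breaking ties between simultaneous spikes."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal timestamps

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, next(self._seq), neuron_id))

    def pop(self):
        time, _, neuron_id = heapq.heappop(self._heap)
        return time, neuron_id

q = SpikeQueue()
q.push(2.0, "n3")
q.push(1.0, "n1")
q.push(1.0, "n2")  # same timestamp as n1: insertion order decides

print([q.pop() for _ in range(3)])
# [(1.0, 'n1'), (1.0, 'n2'), (2.0, 'n3')]
```

Without the sequence counter, two spikes with equal timestamps would compare on `neuron_id` (or raise, for uncomparable payloads), and the resolution order would no longer reflect causality in the model.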
| rahen wrote:
| So if I understand correctly, the hardware paradigm is shifting
| to align with the now-dominant neural-based software model. This
| marks a major shift, from the traditional CPU + OS + UI stack to
| a fully neural-based architecture. Am I getting this right?
| GregarianChild wrote:
| I'd be interested to learn who paid for this machine!
|
| Did Sandia pay list price? Or did SpiNNcloud Systems give it to
Sandia for free (or at least at a heavily subsidised price)? I
| conjecture the latter. Maybe someone from Sandia is on the list
| here and can provide detail?
|
| SpiNNcloud Systems is known for making misleading claims, e.g.
| their home page https://spinncloud.com/ lists DeepMind, DeepSeek,
| Meta and Microsoft as "Examples of algorithms already leveraging
| dynamic sparsity", giving the false impression that those
| companies use SpiNNcloud Systems machines, or the specific
| computer architecture SpiNNcloud Systems sells. Their claims
about energy efficiency (like _"78x more energy efficient than
| current GPUs"_) seem sketchy. How do they measure energy
| consumption and trade it off against compute capacities: e.g. a
Raspberry Pi uses less absolute energy than an NVIDIA Blackwell,
| but is this a meaningful comparison?
|
| I'd also like to know how to program this machine. Neuromorphic
| computers have so far been terribly difficult to program. E.g.
| have JAX, TensorFlow and PyTorch been ported to SpiNNaker 2? I
| doubt it.
| laidoffamazon wrote:
| If it doesn't have an OS, how does it...run? Is it just connected
| to a host machine and used like a giant GPU?
| mikewarot wrote:
| I see "storage-free"... and then learn it still has RAM (which
| IS storage). Ugh.
|
| John Von Neumann's concept of the instruction counter was great
| for the short run, but eventually we'll all learn it was a
| premature optimization. All those transistors tied up as RAM just
| waiting to be used most of the time, a huge waste.
|
| In the end, high speed computing will be done on an evolution of
| FPGAs, where everything is pipelined and parallel as heck.
| thyristan wrote:
| FPGAs are implemented as tons of lookup-tables (LUTs).
| Basically a special kind of SRAM.
| mikewarot wrote:
| The thing about the LUT memory is that it's _all_ accessed in
| parallel, not just 64 bits at a time or so.
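For readers unfamiliar with LUTs: a k-input LUT is just a 2^k-entry truth table indexed by its inputs, so any k-input boolean function is "computed" in one lookup. A minimal sketch (the XOR config is a hypothetical example):

```python
def make_lut4(config_bits):
    """Model a 4-input FPGA LUT: 16 configuration bits form a truth
    table, and the four inputs select one entry in a single lookup."""
    assert len(config_bits) == 16

    def lut(a, b, c, d):
        return config_bits[(a << 3) | (b << 2) | (c << 1) | d]

    return lut

# Configure the LUT as a 4-input XOR: output 1 when the popcount is odd.
xor4 = make_lut4([bin(i).count("1") & 1 for i in range(16)])

print(xor4(1, 0, 1, 1))  # 1
```

An FPGA has tens of thousands of these, all evaluating every cycle, which is the "accessed in parallel" point above.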
| 1970-01-01 wrote:
| The pessimist in me thinks someone will just use it to mine
| bitcoin after all the official research is completed.
___________________________________________________________________
(page generated 2025-06-06 23:00 UTC)