[HN Gopher] Neuromorphic computing
       ___________________________________________________________________
        
       Neuromorphic computing
        
       Author : LAsteNERD
       Score  : 47 points
       Date   : 2025-06-05 18:37 UTC (4 hours ago)
        
 (HTM) web link (www.lanl.gov)
 (TXT) w3m dump (www.lanl.gov)
        
       | datameta wrote:
       | I could be mistaken with this nitpick but isn't there a unit
       | mismatch in "...just 20 watts--the same amount of electricity
       | that powers two LED lightbulbs for 24 hours..."?
        
         | rcoveson wrote:
         | Just 20 watts, the same amount of electricity that powers 2 LED
         | lightbulbs for 24 hours, one nanosecond, or twelve-thousand
         | years.
        
         | DavidVoid wrote:
          | There is indeed; watts aren't a unit of energy, and it's a
          | common enough mistake that Technology Connections made a pretty
          | good 52-minute video about it the other month [1].
         | 
         | [1]: https://www.youtube.com/watch?v=OOK5xkFijPc
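          | 
          | A minimal sketch of the distinction, in Python (the 20 W and
          | 24 h figures come from the quoted sentence; the per-bulb
          | wattage is an assumption):
          | 
          |     # Watts measure power (a rate); energy is power integrated over time.
          |     brain_power_w = 20.0                 # power draw attributed to the brain
          |     hours = 24.0
          |     energy_wh = brain_power_w * hours    # 480 watt-hours of energy
          |     led_bulb_w = 10.0                    # assumed draw of one LED bulb
          |     bulbs_for_24h = energy_wh / (led_bulb_w * hours)   # = 2.0 bulbs
          |     print(f"{energy_wh} Wh over 24 h, i.e. {bulbs_for_24h:.0f} LED bulbs")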
        
       | ge96 wrote:
       | Just searched against HN, seems this term is at least 8 years old
        
         | lukeinator42 wrote:
         | The term neuromorphic? It was coined in 1990:
         | https://ieeexplore.ieee.org/abstract/document/58356
        
       | newfocogi wrote:
       | Once again, I am quite surprised by the sudden uptick of AI
        | content on HN coming out of LANL. Does anyone know if it's just
       | getting posted to HN and staying on the first page suddenly, or
       | is this a change in strategy for the lab? Even so, I don't see
       | the other NatLabs showing up like this.
        
         | CamperBob2 wrote:
         | I imagine the mood at the national labs right now is pretty
         | panicky. They will be looking to get involved with more real-
         | world applications than they traditionally have been, and will
         | also want to appear more engaged with trendy technologies.
        
         | gyrovagueGeist wrote:
        | I am not sure why HN has mostly LANL posts. Beyond that, though,
        | it's a combination of things. Machine learning applications for
        | NatSec & fundamental research have become more important (see
        | FASST, proposed last year), the current political environment
        | makes AI funding and applications more secure and easier to
        | chase, and some of this is work that has already been going on
        | but is getting greater publicity for both of those reasons.
        
         | ivattano wrote:
          | The primary pool of money for DOE labs comes through a program
          | called "Frontiers in Artificial Intelligence for Science,
          | Security and Technology" (FASST), which is replacing the
          | Exascale Computing Project. Compared to other labs, LANL
          | historically has not had many dedicated ML/AI groups, but it
          | has recently spun up an entire branch to help secure as much
          | of that FASST money as possible.
        
         | fintler wrote:
         | Probably because they're hosting an exascale-class cluster with
         | a bazillion GH200s. Also, they launched a new "National
         | Security AI Office".
        
       | geeunits wrote:
        | I've been building a 'neuromorphic' kernel/bare-metal OS that
        | operates on Mac hardware using APL primitives as its core layer.
        | Time is considered another 'position', and the kernel itself is
        | vector-oriented, using 4D addressing with a 32x32x32 'neural
        | substrate'.
       | 
       | I am so ready and eager for a paradigm shift of hardware &
       | software. I think in the future 'software' will disappear for
       | most people, and they'll simply ask and receive.
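        | 
        | Purely as an illustrative sketch of what 4D addressing into a
        | 32x32x32 substrate might look like (Python here, not the actual
        | APL kernel; the function and layout are hypothetical):
        | 
        |     SIDE = 32                      # substrate is 32 x 32 x 32 cells
        | 
        |     def cell_index(x, y, z, t, history=4):
        |         """Flatten an (x, y, z) position plus a time slot into one offset."""
        |         spatial = (x * SIDE + y) * SIDE + z         # 0 .. 32767
        |         return (t % history) * SIDE**3 + spatial    # time as a fourth axis
        | 
        |     print(cell_index(1, 2, 3, t=5))    # -> 32768 + 1091 = 33859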
        
         | JimmyBuckets wrote:
         | I'd love to read more about this. Do you have a blog?
        
       | kokanee wrote:
       | Philosophical thought: if the aim of this field is to create an
       | artificial human brain, then it would be fair to say that the
       | more advanced the field becomes, the less difference there is
        | between the artificial brain and a real brain. This raises two
       | questions:
       | 
       | 1) Is the ultimate form of this technology ethically
       | distinguishable from a slave?
       | 
       | 2) Is there an ethical difference between bioengineering an
       | actual human brain for computing purposes, versus constructing a
       | digital version that is functionally identical?
        
         | russdill wrote:
          | Disagree. It would be like saying that the more advanced
          | transportation becomes, the more like a horse it will be.
        
           | thechao wrote:
           | Shining-brass 25 ton, coal-powered, steam-driven autohorse! 8
           | legs! Tireless! Breathes fire!
        
         | ge96 wrote:
          | 3) Can we use a dead person's brain, hook up wires and oxygen
          | to it? Why not?
        
         | thinkingtoilet wrote:
         | I am certain this answer will change as generations pass. The
          | current generations (us) will say that there is a difference.
         | Once a generation of kids grow up with AI
         | assistants/friends/partners/etc... they will have a different
         | view. They will demand rights and protections for their AI.
        
         | energy123 wrote:
         | We should start by disambiguating intelligence and qualia. The
         | field is trying to create intelligence, and kind of assuming
         | that qualia won't be created alongside it.
        
           | falcor84 wrote:
           | How would you go about disambiguating them? Isn't that
           | literally the "hard problem of consciousness" [0]?
           | 
           | [0]
           | https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
        
           | feoren wrote:
           | "Qualia" is a meaningless term made up so that philosophers
           | can keep publishing meaningless papers. It's completely
           | unfalsifiable: there is no test you can even theoretically
           | run to determine the existence or nonexistence of qualia.
           | There's never a reason to concern yourself with it.
        
             | drdeca wrote:
             | The test I use to determine that there exist qualia is
             | "looking". Now, whether there is a test I can do to confirm
              | that there are any that anything(/anyone) _other than me_
             | experiences, is another question. (I don't see how there
             | could be such a test, but perhaps I just don't see it.)
             | 
             | So, probably not really falsifiable in the sense you are
             | considering, yeah.
             | 
             | I don't think that makes it meaningless, nor a worthless
             | idea. It probably makes it not a scientific idea?
             | 
             | If you care about subjective experiences, it seems to make
              | sense that you would then concern yourself with subjective
             | experiences.
             | 
              | For the great lookup table Blockhead, whose memory banks
              | take up a galaxy's worth of space storing a lookup table of
              | responses for any possible partial conversation history with
              | it, should we value not "hurting its feelings"? If not, why
              | not? It responds just like a person in an online one-on-one
              | chat would.
             | 
             | Is "Is this [points at something] a moral patient?" a
             | question amenable to scientific study? It doesn't seem like
             | it to me. How would you falsify answers of "yes" or "no"?
             | But, I refuse to reject the question as "meaningless".
        
             | layer8 wrote:
             | The term has some validity as a word for what I take to be
             | the inner perception of processes within the brain. The
             | qualia of a scent, for example, can be taken to refer to
             | the inner processing of scent perception giving rise to a
             | secondary perception of that processing (or other side
             | effects of that processing, like evoking associated
             | memories). I strongly suspect that that's what's actually
              | going on when people talk about what it feels like to see
             | red, and the like.
        
             | lo_zamoyski wrote:
             | Drinking from the eliminativist hose, are we?
             | 
             | You can't be serious. Whatever one wishes to say about the
             | framing, you cannot deny conscious experience. Materialism
             | painted itself into this corner through its bad
             | assumptions. Pretending it hasn't produced this problem for
             | itself, that it doesn't exist, is just plain silly.
             | 
             | Time to show some intellectual integrity and revisit those
             | assumptions.
        
         | falcor84 wrote:
         | In my opinion, one of the best works of fiction exploring this
         | is qntm's "Lena" - https://qntm.org/mmacevedo
        
         | antithesizer wrote:
         | *shower thought
        
         | layer8 wrote:
         | For most applications, we don't want "functionally identical".
          | We do not want it to have its own desires and will,
          | biological(-analogous) needs, a circadian rhythm, tiredness and
          | a need for sleep, mood changes and emotional swings, pain, a
          | sexual drive, a need for recognition and validation, and so on.
          | So we don't want
         | to copy the neural and bodily correlates that give rise to
         | those phenomena, which arguably are not essential to how the
         | human brain manages to have the intelligence it has. That is
         | likely to drastically change the ethics of it. We will have to
         | learn more about how those things work in the brain to avoid
         | the undesirables.
        
           | kokanee wrote:
           | If we back away from philosophy and think like engineers, I
           | think you're entirely right and the question _should_ be
            | moot. I can't help but think, though, that in spite of it
           | all, the Elon Musks and Sam Altmans of the future will not be
           | stopped from attempting to create something indistinguishable
           | from flesh and blood.
        
             | tough wrote:
             | I mean have you watched Westworld?
        
         | dlivingston wrote:
         | To 1) and 2), assuming a digital consciousness capable of self-
         | awareness and introspection, I think the answer is clearly
         | 'no'.
         | 
         | But:
         | 
         | > it would be fair to say that the more advanced the field
         | becomes, the less difference there is between the artificial
         | brain and a real brain.
         | 
         | I don't think it would be fair to say this. LLMs are certainly
         | not worthy of ethical considerations. Consciousness needs to be
          | demonstrable. Even if the synaptic structure of the digital
         | vs. human brain approaches 1:1 similarity, the program running
         | on it does not deserve ethical consideration unless and until
         | consciousness can be demonstrated as an emergent property.
        
         | lo_zamoyski wrote:
         | The burden of proof is to show that there is any real or
         | substantive similarity between the two beyond some superficial
         | comparisons and numbers. If you can't provide that, then you
         | can't answer those questions meaningfully.
         | 
         | (Frankly, this is all a category mistake. Human minds possess
         | intentionality. They possess semantic apprehension. Computers
         | are, by definition, abstract mathematical models that are
         | purely syntactic and formal and therefore stripped of semantic
         | content and intentionality. That is exactly what allows
         | computation to be 'physically realizable' or 'mechanized',
         | whether the simulating implementation is mechanical or
         | electrical or whatever. There's a good deal of ignorant and
         | wishy-washy magical thinking in this space that seems to draw
         | hastily from superficial associations like "both (modern)
         | computers and brains involve electrical phenomena" or
         | "computers (appear to) calculate, and so do human beings", and
         | so on.)
        
       | Footpost wrote:
        | Neuromorphic computation has been hyped up for ~20 years now.
       | So far it has dramatically underperformed, at least vis-a-vis the
       | hype.
       | 
       | The article does not distinguish between training and inference.
        | Google Edge TPUs (https://coral.ai/products/) are each capable
        | of performing 4 trillion operations per second (4 TOPS) while
        | drawing 2 watts of power--that's 2 TOPS per watt. So _inference_
        | already runs on less power than the 20 watts the paper
        | attributes to the brain. To
       | be sure, LLM _training_ is expensive, but so is raising a child
       | for 20 years. Unlike the child, LLMs can share weights, and
       | amortise the energy cost of training.
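        | 
        | For illustration, the arithmetic behind that comparison (a rough
        | sketch in Python; the per-chip figures are from the Coral page
        | and the 20 W is the number the paper uses for the brain):
        | 
        |     edge_tpu_tops = 4.0        # trillion ops/s per Edge TPU
        |     edge_tpu_watts = 2.0
        |     tops_per_watt = edge_tpu_tops / edge_tpu_watts   # 2 TOPS/W
        |     brain_watts = 20.0
        |     # ten Edge TPUs fit within the brain's power budget:
        |     print(brain_watts / edge_tpu_watts, "chips,",
        |           tops_per_watt * brain_watts, "TOPS at 20 W")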
       | 
       | Another core problem with neuromorphic computation is that we
       | currently have no meaningful idea how the brain produces
       | intelligence, so it seems to be a bit premature to claim we can
        | copy this mechanism. Here is what Nvidia Chief Scientist B.
        | Dally (one of the main developers of modern GPU
        | architectures) says about the subject: _"I keep getting those
       | calls from those people who claim they are doing neuromorphic
       | computing and they claim there is something magical about it
       | because it's the way that the brain works ... but it's truly more
       | like building an airplane by putting feathers on it and flapping
        | with the wings!"_ From the "Hardware for Deep Learning" HotChips
        | 2023 keynote: https://www.youtube.com/watch?v=rsxCZAE8QNA The
        | quote is at 21:28. The whole talk is brilliant and worth watching.
        
       | stefanv wrote:
       | And still no mention of Numenta... I've always felt it's an
       | underrated company, built on an even more underrated theory of
       | intelligence
        
         | esafak wrote:
         | I want them to succeed but it's been two decades already. Maybe
         | they should have started with a less challenging problem to
         | grow the company?
        
           | meindnoch wrote:
           | They will be right on time when the first Mill CPU arrives!
        
       | random3 wrote:
       | memristors are back
        
       ___________________________________________________________________
       (page generated 2025-06-05 23:01 UTC)