[HN Gopher] Brain learning differs fundamentally from artificial...
___________________________________________________________________
Brain learning differs fundamentally from artificial intelligence
systems
Author : warkanlock
Score : 66 points
Date : 2024-11-27 20:00 UTC (3 hours ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| josefritzishere wrote:
| Surprise factor zero.
| tantalor wrote:
| No shit, really?
| isaacimagine wrote:
| Wait, my brain doesn't do backprop over a pile of linear algebra
| after having the internet rammed through it? No way that's crazy
| /s
|
| tl;dr: paper proposes a principle called 'prospective
| configuration' to explain how the brain does credit assignment
| and learns, as opposed to backprop. Backprop can lead to
| 'catastrophic interference' where learning new things ablates
| old associations, which doesn't match observed biological
| processes. From what I can tell, prosp. config learns by solving
| what the activations should have been to explain the error, and
| then updates the weights accordingly, which apparently somehow
| avoids ablating old associations. They then show how prosp.
| config explains observed biological processes. Cool stuff, wish I
| could find the code. There's some supplemental notes:
|
| https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
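|
| A toy sketch of my reading (just numpy, definitely not the
| paper's actual algorithm): first solve for what the hidden
| activity "should have been" to explain the output error, then
| nudge the weights to produce that activity.
|
|   import numpy as np
|
|   # toy two-layer linear net: x -> h -> y
|   rng = np.random.default_rng(0)
|   W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
|   x, target = rng.normal(size=3), rng.normal(size=2)
|
|   h = W1 @ x        # forward pass: current activity
|   y = W2 @ h
|
|   # step 1: infer the activity that would explain away the error
|   h_star = h + np.linalg.pinv(W2) @ (target - y)
|
|   # step 2: only then update the weights toward that activity,
|   # using purely local errors
|   lr = 0.1
|   W2 += lr * np.outer(target - W2 @ h_star, h_star)
|   W1 += lr * np.outer(h_star - W1 @ x, x)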
| jiggawatts wrote:
| The code:
| https://github.com/YuhangSong/Prospective-Configuration
| anon291 wrote:
| This is like expressing surprise that a photon doesn't perform
| relativistic calculations on its mini chalkboard.
|
| A simulation of a thing is not the thing itself, but it is
| illuminating.
|
| > pile of linear algebra
|
| The entirety of physics is -- as you say -- a 'pile of linear
| algebra' and 'backprop' (differential linear algebra...)
| skissane wrote:
| > Backprop can lead to 'catastrophic interference' where
| learning new things ablates old associations, which doesn't
| match observed biological processes.
|
| Most people find that if you move away from a topic and into a
| new one your knowledge of it starts to decay over time. 20+
| years ago I had a job as a Perl and VB6 developer, I think most
| of my knowledge of those languages has been evacuated to make
| way for all the other technologies I've learned since (and 20
| years of life experiences). Isn't that an example of "learning
| new things ablates old associations"?
| FrustratedMonky wrote:
| "does not learn like human" does not mean "does not learn".
|
| It is alien to us, that doesn't mean it is harmless.
| nickpsecurity wrote:
| Some are surprised that anyone would make this point, whether in
| the title or in the research.
|
| It might be a response to the many, many claims in articles that
| neural networks work like the brain. Even using terms like
| neurons and synapses. With those claims spreading widely, people
| also start building theories on top of them that make AIs out to
| be more like humans. Then, we won't need humans or they'll be
| extinct or something.
|
| Many of us who are tired of that are both countering it and just
| using different terms for each where possible. So, I'm calling
| the AIs "models," saying "model training" instead of learning,
| and describing what they do as finding and acting on patterns in
| data. Even laypeople seem to
| understand these terms with less confusion about them being just
| like brains.
| skissane wrote:
| > It might be a response to the many, many claims in articles
| that neural networks work like the brain. Even using terms like
| neurons and synapses.
|
| Artificial neural networks originated as simplified models of
| how the brain actually works. So they really do "work like the
| brain" in the sense of taking inspiration from certain
| rudiments of its workings. The problem is "like" can mean
| anything from "almost the same as" to "in a vaguely resembling
| or reminiscent way". The claim that artificial neural networks
| "work like the brain" is false under the first reading of
| "like" but true under the second.
| anon291 wrote:
| > Even using terms like neurons and synapses. With those claims
| spreading widely, people also start building theories on top of
| them that make AIs out to be more like humans.
|
| Except the networks studied here for prospective configuration
| are ... neural networks. No changes to the architecture have
| been proposed, only a new learning algorithm.
|
| If anything, this article lends credence to the idea that ANNs
| do -- at some level -- simulate the same kind of thing that
| goes on in the brain. That is to say that the article posits
| that some set of weights would replicate the brain pretty
| closely. The issue is how to find those weights. Backprop is
| one of many known -- and used -- algorithms. It is liked
| because the mechanism is well understood (function minimization
| using calculus). There have been many other ways suggested to
| train ANNs (genetic algorithms, annealing, etc). This one
| suggests an energy-based approach, which is also not novel.
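|
| (To make that concrete with a toy, not from the paper: backprop
| is just stepping downhill on a loss using the chain rule.)
|
|   import numpy as np
|
|   # minimize L(w) = ||X w - y||^2 by gradient descent
|   rng = np.random.default_rng(0)
|   X, y = rng.normal(size=(10, 3)), rng.normal(size=10)
|   w = np.zeros(3)
|   for _ in range(200):
|       grad = 2 * X.T @ (X @ w - y)   # dL/dw via the chain rule
|       w -= 0.01 * grad               # step downhill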
| johnea wrote:
| Was a study really necessary for this?
|
| Do "AI" fanbois really think LLMs work like a biological brain?
|
| This only reinforces the old maxim: Artificial intelligence will
| never be a match for natural stupidity
| jprete wrote:
| Claims that LLMs work like human brains were common at the
| start of this AI wave. There are still lots of fanboys who
| defend accusations of rampant copyright infringement with the
| claim that AI model training should be treated like human brain
| learning.
| 2OEH8eoCRo0 wrote:
| It only learns like a human when I use it to rip off other
| people's work.
| zby wrote:
| I did not read the article - but I guess it all depends on the
| level of abstraction we are talking about. There is a very
| abstract level where you can say that AI learns like a
| biological brain and there is a level where you would say that
| a particular human brain learns in a different way than another
| particular human brain.
| anon291 wrote:
| > Do "AI" fanbois really think LLMs work like a biological
| brain?
|
| If you read the article you'd know two things: (1) the article
| explicitly calls out Hopfield networks as being more bio-
| similar (Hopfield networks are intricately connected to
| attention layers) and (2) the overall architecture (the
| inference pass) of the networks studied here remains unmodified.
| Only the training mechanism changes.
|
| As for directly addressing the claim... if the article is on
| point, then 'learning' has a much more encompassing physical
| manifestation than was previously thought. Really any system
| that self-optimizes would be seen as bio-similar. In both
| mechanisms, there's a process to drive the system to
| 'convergence'. The issue is how fast that convergence is, not
| the end result.
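|
| (On the Hopfield/attention connection, a toy numpy sketch of the
| usual observation from "Hopfield Networks is All You Need": one
| modern-Hopfield retrieval step is an attention step with keys =
| values = the stored patterns.)
|
|   import numpy as np
|
|   def softmax(z):
|       e = np.exp(z - z.max())
|       return e / e.sum()
|
|   rng = np.random.default_rng(0)
|   patterns = rng.normal(size=(5, 8))              # 5 memories
|   query = patterns[2] + 0.3 * rng.normal(size=8)  # noisy cue
|   beta = 4.0
|
|   # one update: attend over stored patterns, retrieve memory 2
|   retrieved = patterns.T @ softmax(beta * patterns @ query)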
| yongjik wrote:
| The title of the paper is: "Inferring neural activity before
| plasticity as a foundation for learning beyond backpropagation"
|
| The current HN title ("Brain learning differs fundamentally from
| artificial intelligence systems") seems very heavily
| editorialized.
| robotresearcher wrote:
| The post headline is distracting people and making for a poor
| discussion. The paper describes a learning mechanism that has
| advantages over backprop, and may be closer to what we see in
| brains.
|
| The contribution of the paper, and its actual title, is about
| the proposed mechanism.
|
| All the comments amounting to 'no shit, Sherlock' are about
| mangled headline, not the paper.
| lukeinator42 wrote:
| It has been clear for a long time (e.g. Marvin Minsky's early
| research) that:
|
| 1. both ANNs and the brain need to solve the credit assignment
| problem
|
| 2. backprop works well for ANNs but probably isn't how the
| problem is solved in the brain
|
| This paper is really interesting, but is more a novel theory
| about how the brain solves the credit assignment problem. The HN
| title makes it sound like differences between the brain and ANNs
| were previously unknown and is misleading IMO.
| mindcrime wrote:
| > The HN title makes it sound like differences between the
| brain and ANNs were previously unknown and is misleading IMO.
|
| Agreed on both counts. There's nothing surprising in "there are
| differences between the brain and ANNs."
|
| But there _might_ be something useful in the "novel theory
| about how the brain solves the credit assignment problem"
| presented in the paper. At least for me, it caught my attention
| enough to justify giving it a full reading sometime soon.
| blackeyeblitzar wrote:
| The comments here saying this was obvious or something else more
| negative are disappointing. Neural networks are named for neurons
| in biological brains. There is a lot of inspiration in deep
| learning that comes from biology. So the association is there.
| Pretending you're superior for knowing the two are still
| different contributes nothing. Doing so in more specific ways,
| or attempting to further understand the differences between deep
| learning and biology through research, is useful.
| dboreham wrote:
| Paper actually says that they fundamentally do learn the same
| way, but the fine details are different. Not too surprising.
| eli_gottlieb wrote:
| Oh hey, I know one of the authors on this paper. I've been
| meaning to ask him at NeurIPS how this prospective configuration
| algorithm works for latent variable models.
| pharrington wrote:
| Theories that brains predict the pattern of expected neural
| activity aren't new (e.g. this paper cites work towards the Free
| Energy Principle, but not the Embodied Predictive Interoception
| Coding work). I have 0 neuroscience training so I doubt I'd be
| able to reliably answer my question just by reading this paper,
| but does anyone know how specifically their Prospective
| Configuration model differs, or expands, upon the previous work?
| Is it a better model of how brains actually handle credit
| assignment than the aforementioned models?
| eli_gottlieb wrote:
| The FEP is more about what objective function the brain
| (_really_ the isocortex) ought to optimize. EPIC is a somewhat
| related hypothesis about how viscerosensory data is translated
| into percepts.
|
| Prospective Configuration is an actual algorithm that, to my
| understanding, attempts to reproduce input patterns but can
| also engage in supervised learning.
|
| I'm less clear on Prospective Configuration than the other two,
| which I've worked with directly.
| oatmeal1 wrote:
| > In prospective configuration, before synaptic weights are
| modified, neural activity changes across the network so that
| output neurons better predict the target output; only then are
| the synaptic weights (hereafter termed 'weights') modified to
| consolidate this change in neural activity. By contrast, in
| backpropagation, the order is reversed; weight modification takes
| the lead, and the change in neural activity is the result that
| follows.
|
| What would neural activity changes look like in an ML model?
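|
| My rough guess, going by the energy-based framing the paper
| uses: the "activity" would be an explicit state vector per
| layer that gets relaxed before any weight update. A toy sketch
| of that two-phase structure in numpy (not the paper's exact
| equations):
|
|   import numpy as np
|
|   rng = np.random.default_rng(0)
|   W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
|   x, target = rng.normal(size=3), rng.normal(size=2)
|
|   # activity h is a free variable, initialised by a forward pass
|   h = W1 @ x
|
|   # phase 1: relax the activity so the output better predicts
|   # the target (descend a prediction-error energy, weights fixed)
|   for _ in range(50):
|       e_out = target - W2 @ h     # output prediction error
|       e_hid = h - W1 @ x          # hidden prediction error
|       h += 0.1 * (W2.T @ e_out - e_hid)
|
|   # phase 2: only now consolidate the settled activity into the
|   # weights, with purely local updates
|   lr = 0.1
|   W2 += lr * np.outer(target - W2 @ h, h)
|   W1 += lr * np.outer(h - W1 @ x, x)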
___________________________________________________________________
(page generated 2024-11-27 23:00 UTC)