[HN Gopher] Artificial Neural Nets Finally Yield Clues to How Br...
___________________________________________________________________
Artificial Neural Nets Finally Yield Clues to How Brains Learn
Author : giorgiop
Score : 88 points
Date : 2021-02-20 11:01 UTC (12 hours ago)
(HTM) web link (www.quantamagazine.org)
(TXT) w3m dump (www.quantamagazine.org)
| kowlo wrote:
| I may be missing something, but it's just a click-bait title with
| no substance.
| erikerikson wrote:
| I would suggest you are missing something: the article shared a
| round-up of advances in the area of effective biologically
| plausible learning algorithms. That is an area often missed by
| the field amid its excitement about the advances associated
| with backpropagation.
|
| The title seemed a bit click-bait-y to me too, though.
| NalNezumi wrote:
| There are three things I've always been baffled to see so little
| interest in from the current deep-learning-based AI field when it
| comes to parallels with the biological brain:
|
| 1. The biological plausibility of backprop.
|
| 2. The lack of consideration of time-continuous input to
| networks. Current networks are discrete, and "learning" and
| inference are done separately. That's not how most organisms
| work.
|
| 3. The lack of consideration of how brains grow (architecture,
| not weights).
|
| It might just be me missing something, but I have a really hard
| time seeing how things would scale in the real world (e.g. in
| robotics applications of neural nets) without those things being
| addressed.
| FL33TW00D wrote:
| The optimist in me likes to liken it to the difference between
| birds and planes. Same result but different principle.
| amirkdv wrote:
| You're describing the deep dissonance I was feeling a decade
| ago when I first stepped into AI research. I just kind of
| always assumed that studying AI would necessarily have a strong
| focus on how biological intelligence works. And boy was I
| wrong.
|
| Knowing a bit more now, this gap makes some sense:
|
| 1. Neuroscience is really, really hard. Even with the
| unbelievable recent advances, we're still years away from
| having a clear understanding of the mechanics of learning and
| memory.
|
| 2. The drift between AI and the broader cognitive sciences
| started in the 70s, seemingly borne out of pragmatism and the
| difference in goals between engineer types and scientist types.
| belgian_guy wrote:
| As to 1, it has already been established that backprop has no
| biological plausibility whatsoever. You can only call the current
| models "neural" networks in the vaguest sense of analogy. There
| is significant academic interest in the intersection between AI
| and neuroscience, aiming to design biologically plausible neural
| networks (see e.g. spiking networks). I guess the reason these
| are not very well known in the larger ML community is simply that
| these approaches don't work that well (as of yet).
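|
| To make "spiking" concrete: a textbook leaky integrate-and-fire
| neuron, the basic unit such networks are built from (a generic
| sketch with made-up constants, not any particular framework's
| API). The membrane potential integrates input current, leaks back
| toward rest, and emits a discrete spike on crossing threshold.
|
|     v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -70.0
|     tau, dt = 20.0, 1.0   # membrane time constant and step (ms)
|     spikes = []
|     for t in range(200):
|         i_in = 20.0 if 50 <= t < 150 else 0.0   # input current
|         v += dt / tau * (v_rest - v + i_in)     # leak + integrate
|         if v >= v_thresh:                       # threshold crossed
|             spikes.append(t)                    # discrete spike
|             v = v_reset                         # reset potential
|     print(spikes)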
|
| Personally I don't believe chasing perfect biological
| plausibility will be very fruitful (in the short term). An
| algorithm that runs efficiently on wetware will probably not be
| very efficient on current hardware like GPUs. The reason deep
| learning is so successful is in large part that deep nets are
| very good at exploiting the efficient linear algebra devices we
| have at our disposal (transformers are only the latest evidence
| of this).
| mam2 wrote:
| 1. Backprop is not necessarily the only thing able to perform
| optimization. It could be something more parallel that tries many
| paths at once, a bit like quantum computing; we just have not
| found the algorithm yet.
|
| 2. is basically sleeping.
| canjobear wrote:
| Backprop isn't biologically plausible, but predictive coding is,
| and it approximates backprop.
|
| https://arxiv.org/abs/2006.04182
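|
| Roughly how that works (a toy numpy sketch of the general scheme
| in the paper, not its code; the layer sizes, seed, and learning
| rates here are made up): each layer keeps an activity estimate,
| the activities relax to minimize local prediction errors with the
| output clamped to the target, and the weights then update from
| purely local error signals.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     f = np.tanh
|     df = lambda v: 1.0 - np.tanh(v) ** 2    # tanh derivative
|     sizes = [4, 8, 2]                       # input, hidden, output
|     W = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i]))
|          for i in range(2)]
|
|     def train_step(x_in, target, n_relax=50, lr_x=0.1, lr_w=0.01):
|         x = [x_in]                   # start from feedforward pass
|         for Wl in W:
|             x.append(f(Wl @ x[-1]))
|         x[-1] = target               # clamp output to the target
|         for _ in range(n_relax):     # inference: relax activities
|             e = [x[l + 1] - f(W[l] @ x[l]) for l in range(2)]
|             x[1] += lr_x * (-e[0]
|                             + W[1].T @ (e[1] * df(W[1] @ x[1])))
|         e = [x[l + 1] - f(W[l] @ x[l]) for l in range(2)]
|         for l in range(2):           # learning: local updates only
|             W[l] += lr_w * np.outer(e[l] * df(W[l] @ x[l]), x[l])
|
|     # e.g. train_step(rng.normal(size=4), np.array([0.5, -0.5]))
|
| With the output unclamped, the relaxation just reproduces the
| feedforward pass; it is the clamped error flowing back during
| relaxation that ends up approximating backprop's gradients.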
| rantwasp wrote:
| not an expert (more like a noob) by any means but:
|
| 1) From a neuroscience point of view you have cortical columns
| with layers that are wired to send the input forward but also to
| propagate feedback. The layers constantly predict what is going
| to happen (by having neurons fire), and usually it's the delta
| between what is predicted and what is coming from the sensory
| system that drives the reinforcement or the weakening of the
| connections. This sort of sounds like backpropagation to me (but
| again, I may be super ignorant and would appreciate it if you can
| educate me on this if you know more). A toy sketch of this delta
| idea follows at the end of this comment.
|
| 2) Technically the "input" in the brain is not continuous. I
| don't want to go into semantics, but at the end of the day you
| have molecules, ions, etc., so the input/transmission is not
| continuous. The scale of the neurotransmitters is so small that
| it just looks continuous. My point is that, if you take the
| current model and add more computing power, you could find that
| some things translate between the two models (we definitely need
| a much better model of the neuron, but that's another story).
|
| 3) This is a fair point.
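|
| (The promised sketch: a bare-bones delta rule in numpy, a made-up
| toy illustration of point 1 rather than any real cortical model.
| The gap between predicted and actual input is what strengthens or
| weakens the weights.)
|
|     import numpy as np
|
|     rng = np.random.default_rng(1)
|     w = rng.normal(0, 0.1, 3)           # synaptic weights
|     lr = 0.05
|     for _ in range(1000):
|         x = rng.normal(size=3)          # "sensory" input
|         actual = 2.0 * x[0] - x[1]      # signal being predicted
|         predicted = w @ x               # the unit's prediction
|         delta = actual - predicted      # prediction error
|         w += lr * delta * x             # error-driven update
|     print(w)                            # heads toward [2, -1, 0]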
| specialist wrote:
| With articles like this, I want a "check back in 2 years"
| reminder, to see how the science shakes out. I'm not smart or
| informed enough to judge these current-events-style updates for
| myself.
| The_rationalist wrote:
| Reddit has the remindMe bot for that, HN should give us an
| exobrain too
| dualthro wrote:
| You don't think setting a reminder in your calendar for 2
| years from now would suffice?
| The_rationalist wrote:
| It's too many clicks away; it should be a matter of one
| click.
| lostapathy wrote:
| Please, if somebody does this, let's not augment HN by
| littering the comments with bots.
| visarga wrote:
| Check back the predictions of 2 years ago and compare to the
| reality of today.
| x1798DE wrote:
| I occasionally did check up on stories, but people rarely do
| follow-up reporting (especially for things that don't pan
| out), and Google searches usually just turn up 50 variations
| on the original story written from the original press
| release. It's a very unfortunate dynamic.
| sjg007 wrote:
| You could favorite the post and add a calendar reminder but I
| agree it would be a useful HN feature.
| dawg- wrote:
| You could make an account on ResearchGate and follow the
| authors of the paper if they're on there, see what they come up
| with next!
| erikerikson wrote:
| Really nice to read a round-up of advances in biologically
| plausible algorithms. The field, responding to incentives, has in
| my subjective opinion undervalued this class of advancement. I
| expect that once we've wrung the value out of the current
| techniques, this is the direction advancements will be made in.
| vmception wrote:
| Does anyone else notice that a lot of this stuff is just rehashed
| forms of things from decades prior?
|
| Someone tried making a computer like this decades ago.
|
| Ex Machina had a plot device like this too, to make the robot's
| transistor-based brain.
| benjaminjosephw wrote:
| > Nonetheless, Hinton and a few others immediately took up the
| challenge of working on biologically plausible variations of
| backpropagation.
|
| Trying to prove the plausibility of a theory is one approach to
| science, I guess... The researchers have already concluded that
| brains are simply information-processing machines and that AI
| techniques are a sufficiently representative model for learning
| what brains are like.
|
| I don't see how this research could give us clues to anything
| other than what is already presumed to be true by the
| researchers.
| mjburgess wrote:
| You're downvoted, but this is correct. It is much like the era
| when the analogy was "springs and cogs": an academic department
| created back then, "cog-nitive" science, would have been the
| attempt to rotate enough gears in the right way.
|
| Many presumptions are being made here in "computational
| cognitive science" which preclude including many relevant
| features of animal learning and animal biology.
|
| Their whole world view is that "patterns of electrical signals
| in neurons" _is_ where learning takes place. This is very
| likely to be false: it fails, for example, to note _that the
| brain grows_.
|
| Organic growth isn't even scoped here. A brain is a time-
| evolving dynamic system whose architecture is dynamic at every
| level (and, not least, embedded in a motor system which has a
| profound effect on _its_ structure).
| visarga wrote:
| > Organic growth isn't even scoped here.
|
| Other things current AIs are lacking besides growth:
| embodiment plus the social and physical environment, the ability
| to make interventions in the environment, self-reproduction,
| learning from reward signals, autonomy, adaptation, and radical
| open-endedness.
|
| "Patterns of electrical signals in neurons" are just part of
| the picture. Yes, learning happens there, but learning is fed
| by signals from the body and environment. It would be silly
| to focus on the neurons while ignoring the actual content,
| then start wondering where meaning comes from, and if syntax
| is enough. Meaning doesn't come from mere neurons, it comes
| from being an embodied agent.
| erikerikson wrote:
| > Their whole world view is that "patterns of electrical
| signals in neurons" is where learning takes place
|
| Actually, the mechanisms are chemical processes involving
| trophic factors (i.e. inputs to those processes) and the
| alteration of the physical structures the signals are
| transmitted with. You say "the brain grows", but the alteration
| of its structure to strengthen or weaken transmission and
| connections in response to signals is how it grows _usefully_.
| That was present in the work described in the article.
| eli_gottlieb wrote:
| >Many presumptions are being made here in "computational
| cognitive science" which preclude including many relevant
| features of animal learning and animal biology.
|
| This post doesn't actually seem to be citing computational
| cog-sci, which is usually a bit better about these things.
| Instead it's addressing the field of biologically plausible
| (i.e., with Hebbian learning rules) deep learning.
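|
| For reference, the classic Hebbian building block such rules
| start from (a generic textbook sketch with made-up sizes, using
| Oja's variant so the weights stay bounded): a weight grows when
| pre- and postsynaptic activity coincide, using only locally
| available signals.
|
|     import numpy as np
|
|     rng = np.random.default_rng(2)
|     w = rng.normal(0, 0.1, 4)
|     lr = 0.01
|     for _ in range(2000):
|         shared = rng.normal()                # common input source
|         pre = shared + 0.1 * rng.normal(size=4)
|         post = w @ pre                       # postsynaptic rate
|         w += lr * post * (pre - post * w)    # Hebb term + decay
|     print(w)  # settles near the inputs' first principal direction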
|
| > (and, not least, embedded in a motor system which has a
| profound effect on its structure)
|
| Sure, but that would expose how weak so much of the present
| AI work _actually is_ when it comes to studying the motor
| system.
| zagdul wrote:
| This linear model doesn't seem to reference those memories when
| considering new memories. You'd need a secondary processing unit
| for addressing the memories based on the current situation or
| argument. This is a decent model for how cells develop and how
| memory cells are maintained. However, its creation still seems
| to be very binary, relying on I/O rather than variance.
|
| Maybe this will help.
|
| https://ieeexplore.ieee.org/document/9325353
| SubiculumCode wrote:
| "In 2007, some of the leading thinkers behind deep neural
| networks organized an unofficial "satellite" meeting at the
| margins of a prestigious annual conference on artificial
| intelligence. The conference had rejected their request for an
| official workshop; deep neural nets were still a few years away
| from taking over AI."
|
| The author almost makes this sound nefarious or short-sighted.
| Workshops and symposia get rejected all the time for a mundane
| reason: too many submissions for the available schedule resources
| at the conference. Important research gets "rejected" all the
| time, and the selection committees are not saying your
| topic/research is silly, illegitimate, or fantasy.
| [deleted]
___________________________________________________________________
(page generated 2021-02-20 23:02 UTC)