[HN Gopher] The brain as a universal learning machine (2015)
___________________________________________________________________
The brain as a universal learning machine (2015)
Author : optimalsolver
Score : 61 points
Date : 2021-10-29 15:51 UTC (7 hours ago)
(HTM) web link (www.lesswrong.com)
(TXT) w3m dump (www.lesswrong.com)
| thedstrat wrote:
| One thing that isn't central at all, but it stood out to me.
|
| "The amygdala appears to do something similar for emotional
| learning. For example infants are born with a simple version of
| a fear response, which is later refined through reinforcement
| learning."
|
| Positive and negative emotions can be seen as a reward/punishment
| mechanism - the signal a reinforcement learning policy is
| optimized against. Our brain is able to change this policy (what
| counts as a positive or negative emotion) over time as our
| emotional intelligence matures. For example, when we are babies,
| we cry at anything that scares us. As we get older, the emotional
| reaction changes automatically: we learn that not everything
| should scare us. I never realized that the brain (or ULM) can
| modify everything, including its own policies, in response to
| external stimuli.
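|
| A minimal sketch of the idea (my own toy example in Python, with
| hypothetical names; nothing here is from the article): a policy
| whose reward weights are themselves updated by experience, so
| what counts as "scary" changes over time.
|
|     import random
|
|     # Learned "reward" weights: how frightening each stimulus
|     # feels. Start by treating everything as scary (the infant
|     # prior).
|     fear_weight = {"loud_noise": 1.0, "stranger": 1.0, "dog": 1.0}
|
|     def emotional_reaction(stimulus):
|         # Act on fear in proportion to the learned weight.
|         return "cry" if fear_weight[stimulus] > 0.5 else "calm"
|
|     def update_policy(stimulus, harmful, lr=0.1):
|         # The outer loop: experience reshapes the reward signal
|         # itself, not just the action chosen.
|         target = 1.0 if harmful else 0.0
|         fear_weight[stimulus] += lr * (target - fear_weight[stimulus])
|
|     for _ in range(200):
|         s = random.choice(list(fear_weight))
|         update_policy(s, harmful=(s == "loud_noise"))
|
|     # Typically prints "calm cry": benign stimuli stop triggering
|     # crying while the genuinely harmful one still does.
|     print(emotional_reaction("dog"),
|           emotional_reaction("loud_noise"))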
| smallmouth wrote:
| The brain is not a machine. It's a gateway.
| hypertele-Xii wrote:
| Joke's on you, gates are machines.
|
| https://en.wikipedia.org/wiki/Gate
| edgyquant wrote:
| In what way is the brain not a machine? Even if it is a
| gateway, whatever you mean by that, the two aren't mutually
| exclusive.
| tasty_freeze wrote:
| Great, I'd like to ask you some questions, as most talk I've
| heard along these lines is beyond vague. It'd be great if you
| could clarify some questions I have about the idea. My
| questions might be so off-base from your mental model of how
| things work they may seem ridiculous, but that would stem from
| me never hearing more than vague hand waves about "radio
| receiver" brains and such.
|
| #1: What is the division of labor between the physical mind
| (PM) and the non-physical mind (NPM)? Eg, is the NPM doing all
| the thinking, and the PM is just carrying out the instructions?
| Or does the PM do some share of the work and the NPM just
| nudges it when need be, like making free will decisions?
|
| #2: What is the NPM doing while the PM is sleeping? There is
| some metabolic reason for the mind to sleep 1/3 of the time,
| but presumably the NPM has no such need. Is it still thinking
| all that time, or does it sleep too?
|
| #3: When the PM is damaged in specific ways, perhaps
| catastrophically, what do you think the NPM is doing? Does it
| get frustrated that the PM can no longer receive the full
| message? For example, in the case of an Alzheimer's patient.
|
| #4: By what mechanism does the NPM communicate its
| thoughts/wishes to the PM? Does it incur a violation of the
| physical laws in the PM?
|
| #5: Likewise to #4, how does the PM communicate to the NPM so
| the NPM knows what is going on?
|
| Because written communication is ambiguous, I'll explicitly
| state these are sincere questions.
| marginalia_nu wrote:
| A gateway to what?
| optimalsolver wrote:
| The stars.
|
| A star gate, if you will.
| opless wrote:
| Indeed.
| [deleted]
| smallmouth wrote:
| Your consciousness, which is likely not local.
| thewakalix wrote:
| Wow, a real-life dualist?
|
| How do you reconcile this view with the findings that
| various mental operations correspond directly to processes
| occurring in the brain? Doesn't it seem an odd coincidence
| that a simple "gateway" also contains everything it would
| need to do the work itself, without a gateway?
| jakear wrote:
| I sometimes like to compare us to intelligent entities on
| a webpage that have been given access to a REPL to their
| current context. We discovered document.body.innerHTML
| (dna?), and perhaps have found a way to establish a
| debugger connection too (eeg/ekg/etc?).
|
| We can see that various sequences of tokens inputted to
| the REPL correspond to reproducible outputs (gene
| engineering), but we have no real understanding how it
| all works under the covers. That is, we don't know
| anything of the miles of renderer/OS/hardware/physics
| stack that makes it all possible, and we don't know
| anything about a funny little sequence called
| XMLHttpRequest. We see it all over the place and can
| easily see how particular behaviors correspond to the
| sequence being invoked, but as far as we can tell it
| doesn't act all that differently to any of the other
| token sequences we test, being perhaps most similar to
| Math.random.
| joe_the_user wrote:
| _This article presents an emerging architectural hypothesis of
| the brain as a biological implementation of a Universal Learning
| Machine._
|
| I looked in the section titled "Universal Learning Machine", I
| looked at the footnotes (easy, there are none), I googled and
| used Google Scholar. I found no coherent definition of _Universal
| Learning Machine_.
|
| I mean, the section I mentioned says: _" An initial untrained
| seed ULM can be defined by 1.) a prior over the space of models
| (or equivalently, programs), 2.) an initial utility function, and
| 3.) the universal learning machinery/algorithm. The machine is a
| real-time system that processes an input sensory/observation
| stream and produces an output motor/action stream to control the
| external world using a learned internal program that is the
| result of continuous self-optimization."_ But it's using other
| vaguely defined concepts in a fairly vague fashion.
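|
| Read literally, the quoted definition amounts to something like
| the following interface (my own rough sketch in Python, with
| hypothetical names; nothing below is from the article):
|
|     from abc import ABC, abstractmethod
|
|     class UniversalLearningMachine(ABC):
|         def __init__(self, program_prior, utility_fn):
|             self.program_prior = program_prior  # 1.) prior
|             self.utility_fn = utility_fn        # 2.) utility
|             self.internal_program = None        # learned program
|
|         @abstractmethod
|         def learn_step(self, observation, reward):
|             """3.) the 'universal learning algorithm': update
|             self.internal_program from experience."""
|
|         @abstractmethod
|         def act(self, observation):
|             """Map the sensory stream to a motor/action output
|             using the current internal program."""
|
|     def run(ulm, env, steps):
|         # The "real-time system": a plain observe-act-learn loop.
|         obs, reward = env.reset(), 0.0
|         for _ in range(steps):
|             action = ulm.act(obs)
|             obs, reward = env.step(action)
|             ulm.learn_step(obs, reward)
|
| Even written out like that, the heavy lifting is still hidden
| inside the two abstract methods.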
|
| What the author is defining is kind of like a Godel Machine [1]
| or Symbolic Regression[2], to give two more concrete references
| than I've found in the text (well, I'm only skimming).
|
| _The key defining characteristic of a ULM is that it uses its
| universal learning algorithm for continuous recursive self-
| improvement with regards to the utility function (reward
| system)._
|
| And there the author gets much more specific and the claim is
| much more debatable. Of course, if you leave "continuous" vague,
| then you have something vague again. If you're loose enough, the
| brain, by your loose definition, has a utility function. But that
| can easily be true without being useful. Every macro-scale
| physical system can in principle be predicted by solving its
| Lagrangian, but the existence of many, many intractable macro-
| scale systems just implies many, many unsolvable, unknown, or
| unknowable Lagrangians.
|
| I think the problem with outlines like this, which are somewhat
| typical of broad-thinker/amateurs, is not that it's an a priori
| bad place to start looking at intelligence. It might be useful.
| But without a lot of concrete research, you wind up with
| seemingly simple steps like "we just maximize function R" when
| any known method for such maximization would take longer than
| the age of the universe (the problem with a Godel Machine).
| Which, again, isn't necessarily terrible - maybe you have an
| idea of how to approximately maximize the function much more
| simply and in much less time. But at least then you know what
| you're up against.
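|
| To make the scale concrete (my own back-of-the-envelope, not the
| author's numbers): exhaustive search over binary programs already
| blows past the age of the universe at a few hundred bits.
|
|     # Illustrative numbers only: assume a machine that can
|     # score 10^18 candidate programs per second.
|     EVALS_PER_SEC = 1e18
|     AGE_OF_UNIVERSE_SEC = 4.3e17  # ~13.8 billion years
|
|     for bits in (50, 100, 300):
|         seconds = 2**bits / EVALS_PER_SEC
|         ages = seconds / AGE_OF_UNIVERSE_SEC
|         print(f"{bits:>3}-bit programs: {seconds:.1e} s "
|               f"({ages:.1e} universe ages)")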
|
| _I present a rough but complete architectural view of how the
| brain works under the universal learning hypothesis._
|
| Keep in mind that to claim a rough outline of how the brain
| operates is to claim more than the illustrious neuroscientists of
| today would claim.
|
| [1] https://en.wikipedia.org/wiki/G%C3%B6del_machine
| [2] https://en.wikipedia.org/wiki/Symbolic_regression
___________________________________________________________________
(page generated 2021-10-29 23:01 UTC)