[HN Gopher] The Thousand Brains Theory of Intelligence (2019)
___________________________________________________________________
The Thousand Brains Theory of Intelligence (2019)
Author : hacksilver
Score : 105 points
Date : 2021-02-23 18:33 UTC (4 hours ago)
(HTM) web link (numenta.com)
(TXT) w3m dump (numenta.com)
| andyxor wrote:
| Jeff Hawkins has a great series of open workshops on
| computational cognitive models (with Marcus Lewis and the rest
| of the Numenta team); check out Numenta on Twitter. This open
| seminar format is kind of unique in the industry.
|
| Recently they talked about grid cell models:
| https://twitter.com/Numenta/status/1357836938955218945
|
| and reviewed the new 'Tolman-Eichenbaum Machine' memory model
| from James Whittington's lab:
| https://twitter.com/Numenta/status/1362187375900651526
| Medox wrote:
| Reminds me of the last part of The Secret Life of Chaos [1]
| (after 51:00) and their 100 random brains.
|
| [1] https://www.dailymotion.com/video/xv1j0n
| abeppu wrote:
| I kept expecting terms like "ensemble" and "feature bagging" to
| pop up, but they didn't. How are others mentally mapping these
| concepts back to ML?
| gojomo wrote:
| Also "dropout".
| sdenton4 wrote:
| Yeah, dropout is conspicuously missing. For those who haven't
| heard: Dropout zeros out random activations, creating a
| random subnetwork to make a prediction. Each training step
| updates a different random subnetwork. (...Keeping in mind
| that two random subnetworks can share lots of weights...)
| Then at inference time, you use the full network, which now
| acts as an ensemble of all of the subnetworks.
|
| You could view it as 'thousand brains' with lots of weight
| sharing for efficiency+regularization.
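|
| A minimal numpy sketch of the training-time mechanism (this is
| "inverted" dropout, the variant most frameworks use; the
| activations and rate here are made up for illustration):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|
|     def dropout_train(x, p=0.5):
|         # Zero each activation with probability p, then scale the
|         # survivors by 1/(1-p) so the expected value matches the
|         # full network used at inference time.
|         mask = rng.random(x.shape) >= p
|         return x * mask / (1.0 - p)
|
|     acts = np.array([0.2, 1.5, -0.7, 3.1])
|     print(dropout_train(acts))  # one random "subnetwork" view
|     print(acts)                 # inference: the full "ensemble"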
| ur-whale wrote:
| As with backprop, I'm not sure dropout can be mapped back
| to the way an actual brain works.
| dcx wrote:
| Wait, why can't it be? Backprop is basically adjusting
| weights to get the model closer to making correct
| predictions. Isn't that basically how the brain does it?
| (Though I'm sure the way the brain adjusts its weights uses
| a different approximation of differentiation)
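|
| A toy one-step version of that adjustment, for a single linear
| "neuron" with squared-error loss (the numbers are made up):
|
|     # y_hat = w*x, loss L = (w*x - y)^2, so dL/dw = 2*(w*x - y)*x
|     w, x, y, lr = 0.5, 2.0, 3.0, 0.1
|     pred = w * x               # 1.0, target is 3.0
|     grad = 2 * (pred - y) * x  # -8.0
|     w = w - lr * grad          # w: 0.5 -> 1.3
|     print(w * x)               # 2.6, closer to the target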
| abeppu wrote:
| I think the main issues are:
|
| - there's not a "correct" training prediction
|
| - there's not a "final layer"
| nickmain wrote:
| Jeff has a whole section on machine intelligence in his
| upcoming book and discusses this in part 2 of the teaser
| podcast [0].
|
| [0] https://www.youtube.com/watch?v=zTRZL8dqHoo
| cmarschner wrote:
| I wish they would work less like a company and engage more with
| the research community on the topic. The subject of On
| Intelligence, Hierarchical Temporal Memory (HTM), was never
| open-sourced and never had its math published, so the community
| treated it as weird and not actionable, and in the end ignored
| it.
| p1esk wrote:
| It is open source, and they have at least a dozen papers
| published.
| dublin wrote:
| It is quite likely that the brain's connectivity is not just 2-
| or 3-dimensional, but very highly multidimensional.
|
| When neural nets seriously hit the scene 30-odd years ago (I read
| a lot of the early papers, even having to mail off to get some in
| those pre-Internet days), no one seriously thought they were the
| way the brain works, just that they presented the opportunity to
| simulate a tiny slice of the way brains might work.
|
| As Ted Nelson so perfectly puts it, "Everything is deeply
| intertwingled."
| ur-whale wrote:
| I struggle to understand why this new theory is in any way
| original.
|
| The way an artificial neural network actually functions can most
| certainly be interpreted as multiple sub-systems "voting" to
| reach a conclusion.
|
| You can even explicitly design architectures to perform that task
| exclusively (mixture of experts).
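|
| A minimal numpy sketch of that gating/voting idea (the sizes and
| random weights are placeholders, not any published architecture):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|
|     def softmax(z):
|         e = np.exp(z - z.max())
|         return e / e.sum()
|
|     d_in, d_out, n_experts = 4, 2, 3
|     experts = rng.normal(size=(n_experts, d_out, d_in))  # linear maps
|     gate = rng.normal(size=(n_experts, d_in))            # gating net
|
|     x = rng.normal(size=d_in)
|     w = softmax(gate @ x)                       # each expert's vote weight
|     votes = np.einsum('eoi,i->eo', experts, x)  # each expert's prediction
|     print(w @ votes)                            # the weighted consensus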
|
| Since ANNs were loosely modeled on the way we assume the brain
| works ... back in the 50's ... how is this an original idea?
|
| Am I missing something?
| wwarner wrote:
| They're proposing a theory of how the brain's cortex must work.
| Check out some interviews with Jeff Hawkins of Numenta, such as
| the aforementioned interview with Lex Fridman. One essential
| insight of theirs is that the cortex is physiologically the
| same everywhere, so any differences between regions would be
| caused by what happens to be connected to them, rather than
| something that is inherent to them. This idea of 1000 brains is
| sort of along those lines.
| lostmsu wrote:
| To me this still does not explain what is original or
| interesting about that idea. Artificial neural networks are
| "physiologically" the same everywhere too, and have been like
| that from the inception.
| caddemon wrote:
| What is meant by physiologically the same? Because there is a
| lot of heterogeneity among neurons in the human brain, and
| there are definitely some macro-level differences between
| cortical regions that are innate (i.e. happen even with
| sensory deprivation). So I'd honestly be very surprised if
| different cortical regions did not have some differences in
| gene expression patterns that are "hard coded", which could
| affect neural dynamics in subtle ways. Determining how much
| that matters for this hypothesis would require a probably
| impossible experiment though.
|
| FWIW there definitely was an old paper where the animal model
| (IIRC a ferret?) was deafened and had its visual cortex
| ablated, and then they wired the lower level visual regions
| to the auditory cortex, quite early in development. The
| animal could sort of see, and the auditory cortex had
| developed a bunch of visual cortex like properties, which is
| pretty awesome. But at the same time it was certainly not the
| same thing as using the visual cortex to do so: looking at
| the actual behavior, the animal had clear deficits in its
| visual capabilities. Of course that is far from a perfect
| version of the experiment being described though.
|
| Anyway, I read that a long while ago but here is a link to a
| broader review written by the researcher that did that work,
| in case anyone is interested in digging further:
| https://pubmed.ncbi.nlm.nih.gov/16272112/
| java-man wrote:
| Title needs (2019).
|
| A new book by Jeff Hawkins is coming out [0].
|
| [0] https://www.amazon.com/gp/product/1541675819
| VladimirGolovin wrote:
| Interesting. One of the blurbs got my attention:
|
| "Brilliant....It works the brain in a way that is nothing short
| of exhilarating." -- Richard Dawkins.
| cjauvin wrote:
| His previous book (On Intelligence) is very stimulating and
| contains many ideas that have interesting connections with
| Deep Learning. I'm looking forward to reading this new one.
| KingFelix wrote:
| Second that; his first book was pretty great and easy to
| digest.
| jonplackett wrote:
| > A three-dimensional representation of objects also provides the
| basis for learning compositional structure, i.e. how objects are
| composed of other objects arranged in particular ways
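|
| A toy sketch of what that might look like as data (the mug/logo
| pairing echoes the paper's coffee-cup example; the coordinates
| are made up):
|
|     # An object is a set of component objects, each stored with a
|     # pose relative to the parent's reference frame.
|     mug = {
|         "cylinder": (0, 0, 0),   # body at the origin
|         "torus":    (5, 0, 2),   # handle, offset from the body
|     }
|     logo_mug = {
|         "mug":  (0, 0, 0),
|         "logo": (2, 0, 3),       # a logo placed on the mug's surface
|     }
|     # Reusing "mug" inside "logo_mug" is the compositional part:
|     # new objects are built from previously learned ones.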
|
| I found this particularly interesting. I have a young daughter
| and I find it fascinating how quickly and reliably she can learn
| an object and then recognise others in a way a computer just
| can't do at all. E.g. she can see a very rough line drawing of
| an animal, then recognise that animal in real life.
|
| It's also interesting the way I can explain new objects or
| animals based on one she knows and then have her recognise them
| easily. E.g. a tiger is like a lion but has no mane and does have
| stripes.
|
| This comes very naturally when teaching and learning and feels
| uniquely 'human' compared to the rigidity of the computer vision
| I've seen so far.
| ckosidows wrote:
| Have you seen OpenAI's DALL·E? https://openai.com/blog/dall-e/
| Kind of does something similar to what you describe, right?
|
| This project blows my mind, by the way. I still think it's the
| coolest thing to come out of AI research so far.
| chrisweekly wrote:
| WOW!!
|
| I'm astounded. Zero-shot visual reasoning?? As an emergent
| capability?? Um. wat.
|
| >"GPT-3 can be instructed to perform many kinds of tasks
| solely from a description and a cue to generate the answer
| supplied in its prompt, without any additional training. For
| example, when prompted with the phrase "here is the sentence
| 'a person walking his dog in the park' translated into
| French:", GPT-3 answers "un homme qui promene son chien dans
| le parc." This capability is called zero-shot reasoning. We
| find that DALL·E extends this capability to the visual
| domain, and is able to perform several kinds of image-to-
| image translation tasks when prompted in the right way."
| tablespoon wrote:
| > Have you seen OpenAI's DALL·E?
| https://openai.com/blog/dall-e/
|
| Is there any way to try it out easily? Everything on that
| page looked like it was pre-rendered (constrained-choice mad
| libs).
| ckosidows wrote:
| I don't think so.
|
| It does appear pre-rendered and my guess is the blog might
| only showcase the use cases that worked the best. Or it
| might take a really long time to generate results.
| gojomo wrote:
| "Thousand Brains" sounds a lot like "Society of Mind":
|
| https://en.wikipedia.org/wiki/Society_of_Mind
|
| (Perhaps, with a bit more "arbitrary horizontal diversification
| based on subsets of inputs" rather than mainly "specialization by
| function".)
| bglusman wrote:
| Yeah, came here to say this also. My memory of the book is
| fuzzy, but I have a copy somewhere, and it's where my mind went
| right away.
| meroes wrote:
| Why have so many hypotheses failed when it comes to the mind? We
| understand quantum scale effects and measure and predict them to
| 17 decimal places of precision. How can we not know what is
| going on in the brain? We know what is going on in supercolliders
| when trillions of particles scatter (we only capture a tiny
| fraction of the data but still we know all the _mechanisms_
| available to the standard model).
|
| Surely we can track _any_ and _every_ physical mechanism in the
| brain that consciousness could build itself up from, assuming it
| is physical. How can we still not take that set of physical
| mechanisms and build a lasting model of consciousness?
|
| I have a personal bet that no progress will be made on
| consciousness in my lifetime. Maybe one of the theories we have
| _is_ correct and we just can't test it correctly. I could easily
| be wrong. But it seems like we lack the imagination, not the
| technical ability, to find the answer. And not much has changed
| from my outside perspective in 30 years. Look, I'm genuinely
| puzzled by science's inability to have a working model of
| consciousness. I'm not asking rhetorically why we have no lasting
| models; I don't know what the reason could be.
|
| To me these are even more "woo" than string theory, which gets
| lambasted around the popular alt big-science channels.
| aeternum wrote:
| We can't even model most two-element chemical systems using QM,
| as the state space just becomes too complex.
|
| Having a model does not mean you can necessarily run the
| computations to sufficient accuracy to make good predictions.
| Even with something as well-understood as gravity, N-body
| problems are still quite difficult and calculations must be
| numerically approximated. This method works fine for most
| systems but diverges rapidly if a chaotic perturbation is
| present.
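|
| The divergence is easy to demonstrate. This sketch uses the
| chaotic logistic map rather than actual gravitational dynamics,
| since it shows the same sensitivity in a few lines:
|
|     # Two trajectories of x' = r*x*(1-x) starting 1e-10 apart.
|     r = 4.0
|     a, b = 0.3, 0.3 + 1e-10
|     for step in range(1, 41):
|         a, b = r * a * (1 - a), r * b * (1 - b)
|         if step % 10 == 0:
|             print(step, abs(a - b))
|     # The gap roughly doubles per step and hits order 1 by ~35.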
| kbelder wrote:
| And intelligence seems to be specifically driven by chaotic
| processes, built to amplify small signals until they cascade.
|
| A few different photons impinging on an eye can completely
| change the state of the brain a few seconds later.
| jgust wrote:
| IANA neuroscientist but I suspect it's a measurement problem.
| The structures of the brain are microscopic yet enormous in
| number, so how do you capture all that data and make sense of
| it?
| hypertele-Xii wrote:
| Well, the OpenWorm project [1] estimates it is about 20-30% of
| the way to simulating the 302 neurons and 95 muscle cells of
| Caenorhabditis elegans. The human brain has over 86,000,000,000
| neurons (about 300 million times more than the worm). I'd say
| we're about 0% into simulating (thus understanding) _that_.
| Mind you, previous-generation (32-bit) computers couldn't even
| _count_ that high.
|
| [1] https://en.wikipedia.org/wiki/OpenWorm
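|
| The arithmetic behind those claims, spelled out:
|
|     human, worm = 86_000_000_000, 302
|     print(2**32)          # 4294967296, about 4.3e9
|     print(human > 2**32)  # True: doesn't fit in 32 unsigned bits
|     print(human // worm)  # 284768211, roughly 300 million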
| kbelder wrote:
| An idle thought... are there any professional ethics
| standards that control treatment of the Caenorhabditis
| elegans worm, like there are for higher animals? Or (I expect) is
| it considered simple enough that concerns of animal cruelty
| don't apply?
|
| I wonder, because if there were, such standards should apply
| to the simulated copy of the worm's mind, as well.
| gamegoblin wrote:
| Highly recommend the Lex Fridman interview to hear Jeff explain
| it live: https://www.youtube.com/watch?v=-EVqrDlAqYo
|
| While Numenta doesn't have the amazing results of DeepMind and
| friends, I think they are doing really great stuff in the space
| of biologically plausible intelligence.
| jcims wrote:
| Seconded. Even if you aren't 'into' this stuff, Jeff has some
| interesting ideas on how the brain works that map pretty well
| to my personal subjective experience. (Not saying that makes
| them true.)
| dang wrote:
| If curious, past threads:
|
| _Numenta Platform for Intelligent Computing_ -
| https://news.ycombinator.com/item?id=24613866 - Sept 2020 (23
| comments)
|
| _Jeff Hawkins: Thousand Brains Theory of Intelligence [video]_ -
| https://news.ycombinator.com/item?id=20326396 - July 2019 (94
| comments)
|
| _The Thousand Brains Theory of Intelligence_ -
| https://news.ycombinator.com/item?id=19311279 - March 2019 (37
| comments)
|
| _Jeff Hawkins Is Finally Ready to Explain His Brain Research_ -
| https://news.ycombinator.com/item?id=18214707 - Oct 2018 (69
| comments)
|
| _IBM creates a research group to test Numenta, a brain-like AI
| software_ - https://news.ycombinator.com/item?id=9401697 - April
| 2015 (19 comments)
|
| _Jeff Hawkins: Brains, Data, and Machine Intelligence [video]_ -
| https://news.ycombinator.com/item?id=8804824 - Dec 2014 (15
| comments)
|
| _Jeff Hawkins on the Limitations of Artificial Neural Networks_
| - https://news.ycombinator.com/item?id=8544561 - Nov 2014 (16
| comments)
|
| _Numenta Platform for Intelligent Computing_ -
| https://news.ycombinator.com/item?id=8062175 - July 2014 (25
| comments)
|
| _Numenta open-sourced their Cortical Learning Algorithm_ -
| https://news.ycombinator.com/item?id=6304363 - Aug 2013 (20
| comments)
|
| _Palm founder Jeff Hawkins on neurology, big data, and the
| future of AI_ - https://news.ycombinator.com/item?id=5917481 -
| June 2013 (6 comments)
|
| _Numenta releases brain-derived learning algorithm package
| NuPIC_ - https://news.ycombinator.com/item?id=5814382 - June 2013
| (59 comments)
|
| _The Grok prediction engine from Numenta announced_ -
| https://news.ycombinator.com/item?id=3933631 - May 2012 (25
| comments)
|
| _Jeff Hawkins talk on modeling neocortex and its impact on
| machine intelligence_ -
| https://news.ycombinator.com/item?id=1945428 - Nov 2010 (27
| comments)
|
| _Jeff Hawkins' "On Intelligence" and Numenta startup_ -
| https://news.ycombinator.com/item?id=59012 - Sept 2007 (3
| comments)
|
| _The Thinking Machine: Jeff Hawkins's new startup, Numenta_ -
| https://news.ycombinator.com/item?id=3539 - March 2007 (3
| comments)
___________________________________________________________________
(page generated 2021-02-23 23:00 UTC)