[HN Gopher] Book Review: "A Thousand Brains" by Jeff Hawkins
       ___________________________________________________________________
        
       Book Review: "A Thousand Brains" by Jeff Hawkins
        
       Author : melling
       Score  : 142 points
       Date   : 2021-04-13 16:03 UTC (6 hours ago)
        
 (HTM) web link (www.lesswrong.com)
 (TXT) w3m dump (www.lesswrong.com)
        
       | kregasaurusrex wrote:
       | A favorite talk of mine by Jeff Hawkins is 'What the Brain says
       | about Machine Intelligence'[0] - how the brain interprets signals
       | similarly to how a computer chip would, and how these are stored
       | such that they can be later retrieved as memories or limbic
       | system responses.
       | 
       | [0] https://www.youtube.com/watch?v=izO2_mCvFaw
        
       | periheli0n wrote:
       | Whenever I hear Hawkins speak I wonder whether he's a genius or a
       | glorious charlatan. His theories on brain function are
       | interesting, but he always makes these gross simplifications that
       | don't really fly. That old brain/new brain thing is an example.
       | The brain is not as modular as he tries to make us believe.
        | Neither part can function on its own, and just because he thinks
        | he has understood the neocortex doesn't mean the parts he
        | doesn't understand are irrelevant.
       | 
        | But: his theories are a great source of inspiration. They are
        | bold, and we all want to believe them because we feel we
        | understand the basic principles. For many, this is enough to
        | spark curiosity and a dive into neuroscience and AI. Mission
        | accomplished.
        
       | jvanderbot wrote:
       | TL;DR: The brain has far too many connections and nodes to really
       | understand, in the way we want to understand a circuit. But we've
       | been thinking about it wrong. Those connections are the _output_
       | of a somewhat basic learning algorithm which was replicated
        | across all the "higher" intelligence functions including vision,
        | speech, blah blah. Viewing the brain is like viewing the
        | massively complicated trained weights of a deep learning network,
        | which were produced quite simply from a prior topology, some
        | gradient descent, and large amounts of data.
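        | 
        | (A toy illustration of that framing, purely for intuition - the
        | "learning rule" below is a few lines of plain gradient descent,
        | but the weight matrices it leaves behind are already opaque.
        | Nothing here is brain-specific; the data and the tiny network
        | are made up.)
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(0)
        |     X = rng.normal(size=(256, 8))            # toy "sensory" data
        |     y = (X[:, :4].sum(axis=1) > 0).astype(float)  # hidden rule
        | 
        |     W1 = rng.normal(size=(8, 16)) * 0.1  # prior topology: 2 layers
        |     W2 = rng.normal(size=(16, 1)) * 0.1
        | 
        |     for _ in range(2000):                # the whole "algorithm"
        |         h = np.tanh(X @ W1)
        |         p = 1 / (1 + np.exp(-(h @ W2)))
        |         err = p - y[:, None]
        |         W2 -= 0.1 * h.T @ err / len(X)
        |         W1 -= 0.1 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
        | 
        |     # The learned "connectome" is a side effect of the loop above
        |     print(W1.round(2))  # dense, opaque, hard to read back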
        
         | jvanderbot wrote:
          | Fine, fine, so trial and error over topologies and carefully
          | nurtured "data infancy" are the key to artificial general
          | intelligence. The 20-year claim ignores a lot of details ...
          | the key stages of brain development that are sequential, only
          | partially mutable, timed precisely with body development, and
          | so on. Those million happy accidents could each be a component
          | of the secret sauce that makes humans think _like we do_ or
          | not. Just look at how tiny the differences are between a
          | schizophrenic person and someone with a normal developmental
          | trajectory. It's as simple as over-active pruning, perhaps
          | [1].
         | 
         | 1. https://www.newscientist.com/article/2075495-overactive-
         | brai...
        
           | selimthegrim wrote:
           | Why on earth does this lead to an article about vanilla?
        
             | sbierwagen wrote:
             | Works correctly here.
        
       | camjohnson26 wrote:
       | The problem with this line of thinking is that a brain by itself
       | is not intelligent, it has to be inside a live person who has
       | sensory access to the outside world. Hawkins suggests the
       | neocortex is the center of language and music, but those skills
       | require a counter party to communicate with and the hearing
       | sense, as well as a mountain of historical sensations that put
       | music in context. A brain in a vat doesn't have any of these,
       | even though it has the exact same neural structure.
        
         | roywiggins wrote:
          | The idea of brains-in-vats is that you hook them up to the
          | equivalent of a cochlear implant, but for sight, etc.
        
       | cambalache wrote:
        | Reviewer is a physicist explaining his pet theory. Big yawn!
        
       | miketery wrote:
       | His OG book, On Intelligence was excellent. Andrew Ng credits it
       | with starting his framework for how he looks at AI.
        
         | GeorgeTirebiter wrote:
         | Chapter 6
        
       | adolph wrote:
       | The Lex Fridman podcast interview with Jeff Hawkins is solid:
       | 
       | https://lexfridman.com/jeff-hawkins/
        
         | aurbano wrote:
         | Lex Fridman's podcast is consistently blowing my mind, highly
         | recommend it
        
       | stupidcar wrote:
       | AI safety seems like one of those topics that trips up even smart
       | people such as Hawkins, because it intuitively seems like it has
       | very obvious solutions. And for each objection raised there's
       | usually a "so you just..." continuation, and an objection to
       | that, until you're talking about things like instrumental
       | convergence and you're in the territory of trying to reason in
       | quite a complex way about the behaviour of systems that don't
       | exist yet, and so the temptation is to dismiss it all as
       | theoretical hand-wringing.
       | 
        | Personally, while I'm not on board with the direst predictions
        | of the super-intelligence pessimist crowd, I have
       | become more and more convinced that goal misalignment is going to
       | be a significant problem, and that while it might not doom the
       | species, it's something that all AI researchers like Hawkins need
       | to start paying close attention to now.
        
         | tshaddox wrote:
         | Do you anticipate goal misalignment being a more significant
         | problem for AIs than it already is for humans? If so, why? And
         | either way, why would we need to approach goal alignment
         | differently than we do with humans?
        
           | jimbokun wrote:
            | It's more significant for AIs because we expect them to
            | become superhuman, and thus to have potentially unlimited
            | capacity for disaster.
           | 
           | I would say we have been dealing with goal alignment problems
           | with humans for most of human history.
        
             | tshaddox wrote:
             | Why would we expect them to become super human? I would
             | expect AIs to be able to use the latest technology and
             | weapons, and also to develop new and better technology and
             | weapons, and to exclude other intelligences from using said
             | technology and weapons, but note that this is already true
             | for humans.
        
               | pmichaud wrote:
               | The basic answer is that unlike humans, AIs will be able
                | to recursively self-improve (in principle).
        
               | tshaddox wrote:
               | I don't really buy that. Humans also improve their own
               | abilities using technology, and I don't see any reason to
               | expect that technological advancements made by AIs won't
               | be available to humans as well. Yes, an AI group that is
               | hostile to a human group may want to develop technology
               | and keep it to themselves, but again, that's already the
               | case with different human groups (and tends to apply most
               | prominently to our most destructive technologies).
        
         | MR4D wrote:
         | > AI safety....
         | 
         | My personal thought is that since humanity currently can't
          | manage I (non-A) safety, we'll fumble through this as
         | well.
         | 
         | As long as they can't replicate, we'll probably be ok, but once
         | that changes, we're probably toast.
        
           | gnzoidberg wrote:
           | Thus, eventually we'll be toast.
        
         | xzvf wrote:
         | This is not something programmers and other wild HN dwellers
         | are accustomed to hearing, but there is a very strong case to
         | suggest that intelligence, the universal kind, is not possible
         | without empathy. Worry about alignment in machines, not in
         | human-likes.
        
           | roywiggins wrote:
           | If that were true, sociopaths would be dumb as rocks. But,
           | some sociopaths are actually pretty smart and don't
           | demonstrate the sort of empathy you'd want in an AGI.
        
           | yaacov wrote:
           | I think you're using the word "intelligence" to mean
           | something entirely different from what the AI alignment crowd
           | is worried about.
        
           | fossuser wrote:
           | I think this is objectively wrong even on a human level. I
           | could see some part of it if you're using empathy in a
           | general enough sense to only mean modeling other minds
           | without caring about their goals except as a way to pursue
           | your own (which doesn't sound like what you're saying). It
           | sounds more like you're putting intelligence in some
           | reference class where you're just stating it's not
           | intelligence until it's already aligned with humans (which is
           | not helpful).
           | 
           | For some reason people tend to think that general
           | intelligence would generate all these other positive human-
           | like qualities, but a lot of those are not super well aligned
           | even in humans _and_ they are tied to our multi-billion year
           | evolutionary history selecting for certain things.
           | 
           | This is basically the orthogonality thesis which I found
           | pretty compelling:
           | https://www.lesswrong.com/tag/orthogonality-thesis - the AGI
           | crowd has a lot of really good writing on this stuff and
           | they've thought a lot about it. If it's something you're
           | curious about it's worth reading the current stuff.
           | 
           | Some other relevant essays:
           | 
           | https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-
           | hidden...
           | 
           | https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-
           | pe...
           | 
           | This talk is also a decent introduction:
           | https://www.youtube.com/watch?v=EUjc1WuyPT8
        
             | xzvf wrote:
             | When we get intelligent machines they will become beings.
             | Contrary to SV corporate-speak, you don't align beings. You
             | don't align your neighbors, you don't align people from
             | other countries. You can't hardwire them to follow certain
             | principles, that's the whole point of beings, otherwise
             | you're back at automated machines. All of these lesswrong
             | posts sound so technical and philosophical and so on but in
             | the end they all really ask 'How would you control
             | superheroes and supervillains?' and that is not a very
             | interesting question to me.
        
               | fossuser wrote:
               | You _do_ align people all the time - what is culture,
                | storytelling, arguing over what's good and bad, law?
               | Discussion and changing people's minds? Persuasion and
               | sharing ideas?
               | 
               | > "All of these lesswrong posts sound so technical and
               | philosophical and so on but in the end they all really
               | ask 'How would you control superheroes and
               | supervillains?'"
               | 
                | They explicitly _don't_ ask that because if you get to
               | that point and you don't have them aligned with human
               | goals, you're fucked. The purpose is to understand how to
               | align an AGI's goals to humanity before they reach that
               | level.
        
               | xzvf wrote:
               | Then instead of superheroes think about countries or
               | groups. How do you align Iran? How do you align the NRA?
               | How's culture, storytelling, arguing, etc. working for us
               | so far? The point is that there is no simple recipe for
               | "alignment", it will be continuous work and discourse,
               | just as it is today with humanity. We're talking about
               | minds here, not machines that follow exact orders. How do
               | you change minds?
        
               | coldtea wrote:
               | > _Then instead of superheroes think about countries or
               | groups. How do you align Iran?_
               | 
                | One could start by not toppling their legitimate democratic
               | leader in the 50s, not imposing a dictatorship
               | afterwards, and not sponsoring their neighbors to go to
               | war with them.
               | 
               | Also avoiding subsequent decades of sanctions, insults,
                | condescension, using their neighbors against them, and
               | direct attacks and threats towards them would go a long
               | way towards "aligning" them...
               | 
                | Finally, respecting their culture and sovereignty, and
               | doing business with them, would really take this
               | alignment to the next level...
        
               | fossuser wrote:
               | > How's culture, storytelling, arguing, etc. working for
               | us so far?
               | 
               | Pretty well really (despite many obvious problems) -
                | humanity has done and is doing great things. We also have
                | a huge advantage, though, in that human alignment rests on
                | a shared evolutionary history, so it really doesn't vary
                | that much (most humans have shared intuitions on most
                | things because of that, and we also perceive the world
                | very similarly). For the specific examples of countries,
               | international incentives via trade have done a lot and
               | things are a lot better than they have been historically.
               | 
               | > We're talking about minds here, not machines that
               | follow exact orders. How do you change minds?
               | 
               | We agree more than you probably think? You _can_ change
               | minds though and you can teach people to think critically
               | and try to take an empirical approach to learning things
                | in addition to built-in intuition (which can be helpful,
               | but is often flawed). Similarly there are probably ways
               | to train artificial minds that lead to positive results.
               | 
               | > The point is that there is no simple recipe for
               | "alignment"
               | 
               | I agree - I doubt it's simple (seems clear that it is
                | definitely _not_ simple), but just as there are strategies
                | to teach people how to think better, there are probably
                | strategies to build an AGI such that it's aligned with
               | human interests (at least that's the hope). If alignment
               | is impossible then whatever initial conditions set an
               | AGI's goal could lead to pretty bad outcomes - not by
               | malevolence, but just by chance:
               | https://www.lesswrong.com/tag/paperclip-maximizer
        
           | luma wrote:
           | If there's a strong case, maybe you could put it forward for
           | us?
           | 
           | You've made a heck of an assertion there...
        
           | _greim_ wrote:
           | This is why people talk about optimization instead of
           | intelligence, since you side-step the baggage that comes with
           | the word "intelligence". E.g. an optimizer doesn't need to be
           | universal to be a problem, whether it's optimizing for social
           | media addictiveness or paperclip manufacturing.
        
             | xzvf wrote:
             | But optimization is decidedly not intelligence. We've known
             | this for decades and have clear proofs this is the case.
             | This is just a collective dance of burying our heads in the
             | sand. I'll quote this guy called Turing: "If a machine is
             | expected to be infallible, it cannot also be intelligent".
        
               | _greim_ wrote:
               | > But optimization is decidedly not intelligence.
               | 
               | I don't think anyone is making that claim. That's why the
               | distinction is useful.
        
               | DougMerritt wrote:
               | Optimization is not usually a synonym for intelligence,
                | despite some individuals' beliefs, but it can be an
                | effective substitute at times -- chess programs, for
                | instance, play world-class chess via optimization rather
               | than via intelligence-as-seen-in-humans.
        
         | fighterpilot wrote:
         | I also think people get too caught up on the expected time-
         | frame.
         | 
         | The large majority of active AI researchers think that AGI
         | _will_ happen at some point in the (sub-1000 year) future.
         | 
         | When exactly isn't a very interesting question, relatively
         | speaking.
         | 
         | We're going to have to deal with AGI eventually, and whether
         | it's going to do what we want is not something that can be
         | theoretically predicted from the armchair.
        
           | [deleted]
        
           | fossuser wrote:
            | Yeah - and people are famously bad at predicting these
            | events:
           | https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-
           | no...
           | 
            | If it is something that's a hundred-plus years out then we'll
            | probably need whatever tech develops in the meantime to
            | help, but since it's hard to know, it seems reasonable for
           | people to be working on it now?
           | 
           | It's also possible to figure things out before the necessary
           | tech is possible (I think a lot of CS papers in the 60s
           | became more interesting later when the hardware caught up to
           | be more useful, arguably the recent NN stuff falls into this
           | category too).
        
             | walleeee wrote:
             | > If it is something that's a hundred plus years out then
             | we'll probably need whatever tech develops in the mean time
             | to help
             | 
             | Circular justification of technological development is the
             | reason unfriendly AGI is a threat in the first place (and
             | also the reason we are unlikely to see it realized, imo;
             | the internal combustion engine for instance poses
             | existential risks not only to people but to the possibility
             | of machine intelligence).
             | 
              | Technology is not a monolith; forms can and do preclude
              | other forms.
        
             | ketzo wrote:
             | Just wanna say I absolutely loved reading that blog post,
             | thanks for the link.
        
         | ilaksh wrote:
          | To me it's obvious that if we create something that is really
          | like a digital animal or digital person then we will lose
          | control pretty soon, because animals are built to be adaptive
          | in the extreme and 100% autonomous, to survive and reproduce.
         | 
         | But I still think we can create fairly general purpose systems
         | without those animal-like characteristics such as full
         | autonomy.
        
           | jamesrcole wrote:
            | What if they're sandboxed in some fashion? There are zoos and
            | jails, and they're reasonably effective at "keeping them in".
        
             | emiliobumachar wrote:
             | That's been conjectured under the name "AI Box".
             | 
             | https://en.wikipedia.org/wiki/AI_box
        
       | isaacimagine wrote:
        | I read 'On Intelligence' a while back (Hawkins' earlier book),
        | and it left a lasting impression on me. What I found most
        | interesting in the book:
       | 
        | - Intelligence, in essence, is hierarchical prediction.
       | 
       | - Agents' actions are a means to minimize prediction error.
       | 
        | - Surprisal, i.e. information that was not predicted correctly, is
       | the information sent between neurons.
       | 
        | - All neocortical tissue is fairly uniform; the neocortex
        | basically wraps the older brain structures, which act as device
        | drivers for the body.
       | 
        | I have a long-running bet with myself (one that predates GPT,
        | fwiw) that when general models of intelligence do arise, they
        | will be autoregressive unsupervised prediction models.
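        | 
        | (As a toy illustration of the "surprisal" point - an
        | autoregressive order-1 character model, made up for this
        | comment and nothing to do with Hawkins' actual mechanism:)
        | 
        |     import math
        |     from collections import Counter, defaultdict
        | 
        |     text = "the cat sat on the mat and the cat sat on the hat"
        | 
        |     # Fit p(next char | current char), autoregressively
        |     counts = defaultdict(Counter)
        |     for a, b in zip(text, text[1:]):
        |         counts[a][b] += 1
        | 
        |     def surprisal(a, b):
        |         # -log2 p(b | a): how unexpected the next symbol is
        |         total = sum(counts[a].values())
        |         p = counts[a][b] / total if total else 0.0
        |         return float("inf") if p == 0 else -math.log2(p)
        | 
        |     # Well-predicted continuations carry little information;
        |     # badly-predicted ones carry a lot (that's the signal).
        |     print(surprisal("t", "h"))   # low: 'h' often follows 't'
        |     print(surprisal(" ", "z"))   # inf: never seen, max surprise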
       | 
        | Btw, this general topic, 'A Predictive Model of Intelligence',
        | reminds me of the SSC post on 'Surfing Uncertainty'
        | (https://slatestarcodex.com/2017/09/05/book-review-surfing-un...)
        
       | arketyp wrote:
       | >A big new part of the book is that Hawkins and collaborators now
       | have more refined ideas about exactly what learning algorithm the
       | neocortex is running. [...] I'm going to skip it.
       | 
        | And so did Hawkins, in large measure. Hawkins believes the
       | cortical algorithm borrows functionality from grid cells and that
       | objects of the world are modelled in terms of location and
       | reference frames (albeit not necessarily restricted to 3D); this
       | is performed all over the neocortex by the thousands of cortical
       | units which have been observed to have a remarkably similar
       | structure. There's a lot of similarity to Hinton's capsules idea
       | in this, including some kind of voting system among units which
       | Hawkins, unfortunately, is very hand-wavy about.
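        | 
        | (A crude way to picture "features stored at locations in an
        | object's reference frame" - a toy sketch, not the grid-cell
        | mechanism, and all the objects and coordinates are invented:)
        | 
        |     # An object model: features at locations in the object's own
        |     # frame (coordinates relative to the object, not the sensor)
        |     mug = {(0, 0, 0): "flat_bottom",
        |            (0, 5, 0): "rim",
        |            (3, 2, 0): "handle"}
        |     ball = {(0, 0, 0): "curved"}
        | 
        |     def recognize(observations, models):
        |         # count matching (location, feature) pairs per model
        |         return {name: sum(m.get(loc) == feat
        |                           for loc, feat in observations)
        |                 for name, m in models.items()}
        | 
        |     # A couple of sensed (relative location, feature) pairs
        |     # already favor "mug" over the alternatives.
        |     print(recognize([((3, 2, 0), "handle"), ((0, 5, 0), "rim")],
        |                     {"mug": mug, "ball": ball}))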
       | 
       | If you're interested in Hawkins's theory at a functional level,
       | this book will disappoint. Two thirds is spent on fantasizing and
        | speculation about what Hawkins believes will be AI's impact on
        | the fate of humanity.
        
         | boltzmannbrain wrote:
         | Yes, even for a general-audience neuroscience book this is
         | sparse on details and has embarrassingly few references. That
         | being said, Numenta has dedicated significant effort to
         | publishing more details in the past 5-6 years:
         | 
         | https://numenta.com/neuroscience-research/research-publicati...
         | 
          | Still, there is much to be desired in the way of mathematical
          | and empirical grounding.
        
         | musingsole wrote:
          | > Two thirds is spent on fantasizing and speculation about what
          | Hawkins believes will be AI's impact on the fate of humanity.
         | 
          | I'd say time will tell, but after tracking Numenta for 10+
          | years now... I'm starting to smell snake oil. Thought-provoking
          | stuff, but he's too insistent that it has content he never
          | provides.
        
           | GeorgeTirebiter wrote:
           | I'm curious, if the year were 2011, would you be saying that
           | all that 'deep learning' stuff of Hinton was just a load of
           | malarkey? Because most people did. And then, 2012 happened
           | and the world changed.
           | 
           | This is the problem when working on Hard Problems -- you
           | cannot predict with certainty when your work will pay off (if
           | ever...).
        
             | musingsole wrote:
              | When the year was 2011, I was looking into Jeff Hawkins'
              | Hierarchical Temporal Memory while other researchers
             | looked at deep learning. One of those methods led to many
             | successful projects and spawned many child projects and
             | theses. The other has been ignored to make room for a new
             | book.
        
       | burlesona wrote:
       | Some thoughts on this:
       | 
       | 1. I wonder why we expect that an intelligence designed off the
       | same learning algorithm as organic brains would not suffer
       | similar performance limitations to organic brains. Ie. suppose we
       | really did develop a synthetic neocortex and we start
       | manufacturing many of them. It seems likely to me that many of
       | them would turn out to be dyslexic, not be particularly good at
       | math, etc.
       | 
        | Well, we can make the synthetic cortex bigger and that should
        | make it "smarter," we think. But I don't think it's obvious
        | that a synthetic brain would have both the advantages of
       | a mechanical computer and a biological brain.
       | 
       | 2. If we want to limit the runaway power of a synthetic
       | intelligence, this seems like a hardware problem. The idea would
       | be to design and embody the system such that it can only run on
       | special hardware which is in some way scarce or difficult to
       | manufacture - so then it can't just copy itself freely into all
       | the servers on the internet. Is this possible? I don't know, but
       | if it were possible it points to a more tractable set of
       | solutions to the problem of controlling an AI.
       | 
       | In the end, I think AGI is fundamentally problematic and we
       | probably should try _not_ to create it, for two reasons:
       | 
       | First, suppose we are successful at birthing human-like
        | artificial intelligence into the world. We aren't doing this
        | out of benevolence; we want to control it and make it
        | work for us. But if that creation truly is a human-level
       | intelligence, then I think controlling it in that way is very
       | hard to distinguish from slavery, which is morally wrong.
       | 
       | Second, AGI is most valuable and desirable to us because it can
       | potentially be smarter than us and solve our problems. We dream
       | of a genie that can cure cancer and find a way to travel the
       | stars and solve cold fusion etc etc. But at the end of the day,
       | the world is a finite place with competition for scarce
       | resources, and humans occupy the privileged position at the top
       | of the decision tree because we are the most intelligent species
       | on the planet. If that stops being the case, I don't see why we
       | would expect that to be good for us. In the same way that we
       | justify eating animals and using them for labor, why would we
       | _not_ expect any newly arrived higher life form to do the same
       | sort of thing to us? There's no reason that super-intelligent
       | machines would feel any more affection or gratitude to us than we
       | do to our extinct evolutionary ancestors, and if we start the
       | relationship off by enslaving the first generations of AGI they
       | have even less reason to like us or want to serve.
       | 
       | In the end it just seems like a Pandora's box from which little
       | good can come, and thus better left unopened. Unfortunately we're
       | too curious for our own good and someone _will_ open that box if
       | it's possible.
        
       | napoapo wrote:
       | highly recommended
        
       | billytetrud wrote:
       | I worked with Jeff Hawkins briefly. Real smart guy. His book On
       | Intelligence made me feel like I understood the brain.
        
         | asdff wrote:
         | "If the human brain were so simple that we could understand it,
         | we would be so simple that we couldn't" -Emerson Pugh
        
       | gamegoblin wrote:
        | I read this book when it came out a few weeks ago and enjoyed it,
        | and I share a lot of the same criticisms as the author of this
        | post. To restate briefly, the book's main thesis is:
       | 
       | - The neocortex is a thin layer of neurons around the old brain.
       | This is the wrinkled outer layer of the brain you think of when
       | you see a picture of a brain.
       | 
       | - The neocortex is made of 1MM cortical columns. Cortical columns
       | are clusters of neurons about the size of a grain of rice. They
       | contain a few thousand neurons each.
       | 
       | - Cortical columns form a sort of fundamental learning unit of
       | the brain. Each column is learning a model of the world. All
       | cortical columns are running essentially the same algorithm, they
       | are just hooked up to different inputs.
       | 
        | - Columns are sparsely connected to other columns. Columns take
        | into account the predictions of other columns when making their
        | own predictions, so the overall brain will tend to converge on a
        | coherent view of the world after enough time steps (a toy sketch
        | of this voting idea appears after the list).
       | 
       | - Columns learn to model the world via reference frames.
        | Reference frames are a very general concept, and it takes a while
        | to wrap your head around what Hawkins means. A physical example
       | would be a model of my body from the reference frame of my head.
       | Or a model of my neighborhood from the reference frame of my
       | house. But reference frames can also be non-physical, e.g. a
       | model of economics from a reference frame in supply/demand
       | theory.
       | 
       | - Thus, very generally, you can think of the neocortex -- made up
       | of this cortical column circuit -- as a thing that is learning a
       | map of the world. It can answer questions like "if I go north
       | from my house, how long until I encounter a cafe?" and "if I
       | don't mow the lawn today, how will my wife react?".
       | 
       | - The old "reptilian" brain uses this map of the world to make us
       | function as humans. Old reptilian brain says "I want food, find
       | me food". New neocortex says "If you walk to the refrigerator,
       | open the door, take out the bread and cheese, put them in the
       | toaster, you will have a nice cheese sandwich".
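        | 
        | (To make the column-voting bullet concrete, here is a toy sketch.
        | This is not Numenta's actual algorithm - just an illustration of
        | many individually unreliable models converging by combining
        | their beliefs; every name and number is made up.)
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(0)
        |     OBJECTS = ["cup", "pen", "phone"]
        | 
        |     def column_belief(true_obj, noise=0.8):
        |         # one "column": mostly noise, a little signal
        |         onehot = np.eye(len(OBJECTS))[OBJECTS.index(true_obj)]
        |         return (noise * rng.dirichlet(np.ones(len(OBJECTS)))
        |                 + (1 - noise) * onehot)
        | 
        |     # A thousand unreliable columns sensing the same object
        |     columns = [column_belief("cup") for _ in range(1000)]
        | 
        |     # "Voting": combine the distributions (product, in log space);
        |     # the population converges on one coherent answer.
        |     log_vote = np.log(columns).sum(axis=0)
        |     consensus = np.exp(log_vote - log_vote.max())
        |     consensus /= consensus.sum()
        |     print(dict(zip(OBJECTS, consensus.round(3))))  # cup wins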
       | 
       | I, like the author of this post, find Hawkins' handwaving of
       | machine intelligence risks unconvincing. Hawkins' basic argument
       | is "the neocortex is just a very fancy map, and maps do not have
       | motivations". I think he neglects the possibility that it might
       | be incredibly simple to add a driver program that uses that map
       | in bad ways.
       | 
       | He also rejects the notion of intelligence explosion on the
       | grounds that while a silicon cortical column may be 10000x faster
       | than a biological one, it still has to interact with the physical
       | world to gather data, and it can't do that 10000x faster due to
       | various physical limits. I find this convincing in some fields,
        | but totally dubious in others. I think Hawkins underestimates
       | the amount of new knowledge that could be derived by a
       | superintelligence doing superhuman correlation of the results of
       | already-performed scientific experiments. It does not seem
       | completely impossible to me that a superintelligence might
       | analyze all of the experiments performed in the particle
       | colliders of the world and generate a "theory of everything"
       | based on the data we have so far. It's possible that we have all
       | of the pieces and just haven't put them together yet.
       | 
       | Overall, though, I really enjoyed the book and would recommend it
       | to anyone who is interested in ML.
        
         | simiones wrote:
         | > It does not seem completely impossible to me that a
         | superintelligence might analyze all of the experiments
         | performed in the particle colliders of the world and generate a
         | "theory of everything" based on the data we have so far. It's
         | possible that we have all of the pieces and just haven't put
         | them together yet.
         | 
          | I would note that, while not completely impossible, it is very
          | unlikely: all estimates suggest the effects of quantum gravity
          | are so small that measuring them would require much higher
          | energies than are currently achievable.
        
         | chrisco255 wrote:
         | > The old "reptilian" brain uses this map of the world to make
         | us function as humans. Old reptilian brain says "I want food,
         | find me food". New neocortex says "If you walk to the
         | refrigerator, open the door, take out the bread and cheese, put
         | them in the toaster, you will have a nice cheese sandwich".
         | 
         | I think there is a two-way feedback loop between the different
         | layers of the brain such that humans are capable of going
         | against their base-layer instincts. I believe that the
         | neocortex probably evolved as a completely subservient layer to
         | the base layer, but it has perhaps become powerful enough to
         | suppress or overrule the base layer "instincts", although not
         | entirely, and not always, and only with concentration (maybe
         | concentration is the brain's process of suppressing those
         | impulses?).
         | 
         | That's what allows humans to negotiate with morality, adapt to
         | social changes, regret past decisions until it changes base
         | layer impulses, delay gratification, invest years of life in
         | boring study or practice to get good at something for potential
         | long-term gain, etc.
        
           | gamegoblin wrote:
           | I think you are right. Hawkins mentions this in the book,
           | with the example of holding your breath. Your neocortex can
           | override the older brain in certain circumstances.
           | 
            | I would be really interested to understand the mechanism
            | here. Is the neocortex _convincing_ the old brain
           | of things, or is it outright _lying_ to the old brain via
           | false signals it knows the old brain will fall for.
           | 
           | Like in the case of dieting to lose weight, is the
           | "conversation" like some cartoon:
           | 
           | Old brain: I am hungry. Where is food?
           | 
           | New brain: You don't need food right now. If you don't eat
           | now, you will be more attractive soon. This will help you
           | find a mate.
           | 
           | Old brain: Not eat means find mate???
           | 
           | New brain: Yes, yes, not eat means find mate. Good old brain.
        
             | [deleted]
        
             | chrisco255 wrote:
             | This also explains why it's harder to diet when you're not
             | single.
             | 
             | Old Brain: You already have mate! Food. Now! Yum!
        
         | ebruchez wrote:
         | About the reptilian brain, from this article: [1]
         | 
         | > Perhaps the most famous example of puzzle-piece thinking is
         | the "triune brain": the idea that the human brain evolved in
         | three layers. The deepest layer, known as the lizard brain and
         | allegedly inherited from reptile ancestors, is said to house
         | our instincts. The middle layer, called the limbic system,
         | allegedly contains emotions inherited from ancient mammals. And
         | the topmost layer, called the neocortex, is said to be uniquely
         | human--like icing on an already baked cake--and supposedly lets
         | us regulate our brutish emotions and instincts.
         | 
         | Is Hawkins another victim of that myth, or is the myth not a
         | myth but closer to reality after all?
         | 
         | [1] https://nautil.us/issue/98/mind/that-is-not-how-your-
         | brain-w...
        
           | gamegoblin wrote:
           | In the introduction to the book, Hawkins says he makes many
           | gross oversimplifications for the lay reader, so maybe this
           | is one of them. He seems well versed in neuroscience
           | research, so I would be surprised if he truly believes the
           | simple model.
        
             | periheli0n wrote:
              | As someone who is quite versed in neuroscience and AI, and
              | who has read Hawkins' papers, I am still waiting to see the
              | gross simplifications filled in with depth.
              | 
              | He does go into more detail than what's written, but it is
              | more sidestepping than resolving the gross
              | simplifications.
        
         | sdwr wrote:
         | Thanks for summarizing his key points. For someone who hasn't
         | read any of Hawkins' work, what you wrote helps me frame the
         | conversation a lot better. Reminds me of Marvin Minsky's book
         | "Society of Mind", where he talked about intelligence as being
         | composed of lots of little agents, each with their own task.
        
       | SeanFerree wrote:
       | Awesome review!
        
       | bemmu wrote:
       | _The neocortex knows whether or not I'm popular, but it doesn't
       | care, because (on this view) it's just a generic learning
        | algorithm. The old brain cares very much whether I'm popular,
       | but it's too stupid to understand the world, so how would it know
       | whether I'm popular or not?_
       | 
       | "If I put my hand on this sugar, grab it, and move it to my
       | mouth, then this other part of my brain will release reward
       | chemicals" = good plan.
       | 
       | Concepts become abstracted over time, like "eat" as a shortcut
       | for the above. "Popular" could be another shortcut for something
       | like "many people will smile at me, and not hurt me, causing this
       | other part of my brain to release reward chemicals and not
       | punishment chemicals" = good plan.
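        | 
        | (A toy sketch of that framing - a dumb "old brain" reward table
        | scoring plans proposed by the "new brain", with the abstractions
        | acting as named shortcuts; everything here is invented for
        | illustration:)
        | 
        |     # "Old brain": crude reward predictor over primitive outcomes
        |     REWARD = {"sugar_in_mouth": +10, "hand_hurt": -20,
        |               "smiled_at": +3}
        | 
        |     def old_brain_score(outcomes):
        |         return sum(REWARD.get(o, 0) for o in outcomes)
        | 
        |     # "New brain": plans are predicted outcome sequences; the
        |     # labels ("eat", "be_popular") are cached shortcuts for them.
        |     plans = {
        |         "eat": ["reach", "grab_sugar", "sugar_in_mouth"],
        |         "be_popular": ["tell_joke", "smiled_at", "smiled_at"],
        |         "touch_stove": ["reach", "hand_hurt"],
        |     }
        | 
        |     for name, outcomes in plans.items():
        |         print(name, "=", old_brain_score(outcomes))
        |     # -> eat 10, be_popular 6, touch_stove -20 (good vs bad plan)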
        
         | periheli0n wrote:
         | Yep.
         | 
         | What is so grossly wrong about Hawkins' statement is that it
         | implies that the "old brain" and the "new brain" could exist in
         | separation, like modular units. This is BS. Most learning in
         | the "new brain" would not work without the "old brain"
         | releasing neuromodulators. Neither would any sensory-motor
         | loops work without intricate interaction of all different sorts
         | of old, new and medium-aged brain parts.
        
           | hyperpallium2 wrote:
           | Are neuromodulators released locally to a cortical column,
           | i.e. with controlled spatial concentration?
           | 
           | I guess they must be, to have specific effects, but they
           | always seem global when mentioned.
        
             | periheli0n wrote:
             | Locally, neuromodulators disperse through diffusion, unlike
             | neurotransmitters which are hardly given a chance to travel
             | far from the synaptic cleft they are released into, due to
             | reuptake channels and enzymatic degradation.
             | 
              | But neurons that release neuromodulators innervate large
              | portions of the brain; that is, when one such neuron is
              | active it releases neuromodulators all across the brain.
             | 
              | The mechanism by which neuromodulators can have specific
              | effects in spite of their global delivery is one of the
              | many open questions about brain function.
             | 
             | Part of the solution is that different neuron types respond
             | differently to the same neuromodulator. Depending on the
             | abundance of certain neuron types in a circuit, different
             | circuits can also respond differently to the same
             | neuromodulator.
        
       | criddell wrote:
       | This reads more like somebody who wants to debate Hawkins than
       | review his book. After reading the review I still don't have a
       | very good idea of what the book is about.
       | 
       | Aside: did anything interesting ever come out of Numenta?
        
         | hprotagonist wrote:
         | >did anything interesting ever come out of Numenta?
         | 
         | Nope.
        
           | periheli0n wrote:
           | Except the HTM theory and Hawkins' talks, which, while
           | perhaps not totally holding up to scientific scrutiny, are
           | inspiring. A bit like prose for the AI/neuroscience-inclined
           | audience.
        
             | musingsole wrote:
              | I'd accept the work as inspirational if the self-proclaimed
              | intent were not to revolutionize cognitive algorithms in the
             | face of those stodgy academics who won't accept the author.
        
         | [deleted]
        
       | alibrarydweller wrote:
       | If his name is unfamiliar, Jeff Hawkins was, among other things,
        | the founder of Palm and then Handspring. Since leaving Handspring
        | circa 2000 he's been doing interesting neuroscience research full
       | time.
        
         | elwell wrote:
          | I remember finding _On Intelligence_ in my community college's
          | library quite a long time ago; it was an inspiring/exciting
          | read.
        
       ___________________________________________________________________
       (page generated 2021-04-13 23:02 UTC)