[HN Gopher] Mental phenomena don't map into the brain as expected
       ___________________________________________________________________
        
       Mental phenomena don't map into the brain as expected
        
       Author : theafh
       Score  : 140 points
       Date   : 2021-08-24 15:05 UTC (7 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | ElFitz wrote:
       | Which basically means the brain doesn't think the way the brain
       | thinks the brain thinks.
       | 
       | Aside from the horrible phrasing, it's quite funny to think
       | about, leading us to:
       | 
       | My brain thinks it is funny to think about how the brain doesn't
       | think the way the brain thinks the brain thinks.
       | 
       | In the same way that atoms don't exactly work the way the atoms
       | think the atoms work since, as Niels Bohr said, "A physicist is
       | just an atom's way of looking at itself."
       | 
       | Reminds me of some comment I heard somewhere, about how the brain
       | and the body don't come with an instruction manual, and how it
       | makes sense from an evolutionary perspective: we so far
       | apparently haven't needed to know how they work to use them well
       | enough to survive.
        
         | wintorez wrote:
         | "If our brains were simple enough for us to understand them,
         | we'd be so simple that we couldn't."
        
           | robwwilliams wrote:
           | Hmm, but perhaps we could manage to understand the brain and
           | behavior of a fruitfly or even a mouse. The "self-conscious"
           | tier is trivial recursion. Hofstadter has this right in "I Am
           | A Strange Loop".
        
         | PartiallyTyped wrote:
          | Funny, I began reading Yudkowsky's "Map and Territory"; in the
          | preface there is this sentence:
         | 
         | > For if you can't trust your brain, can you trust anything
         | else?
         | 
         | Which basically means, if the brain can't trust the brain, can
         | the brain trust another brain?
         | 
         | Yes slightly tortured, but only to fit :D
        
           | ElFitz wrote:
           | Haha! Nice one!
           | 
           | We are starting to slide into Cartesian doubt here.
        
         | d0mine wrote:
          | For me it is more mundane: there were times when people
          | thought of human health in terms of the four elements. Much
          | later, they discovered germ theory, which uses more useful
          | concepts.
          | 
          | Using "memory" and "perception" to describe how the brain
          | works is like using the four elements to describe how our
          | bodies function.
        
           | ElFitz wrote:
           | Sure. But let's take the average computer user's analogy from
           | another comment [1].
           | 
            | A random, average smartphone user can use their device
            | every day, and accomplish a great number of things without
            | knowing how it actually works at all.
           | 
           | For all they care, it could be powered by tiny fairies
           | running in wheels like hamsters, and the Internet and cell
           | networks could be irascible spirits connecting all
           | smartphones together through a worldwide mycelium network.
           | 
            | Knowing _how_ smartphones and computers work is not
            | necessary for them to accomplish what they set out to.
           | 
            | The same way, if we actually needed a deeper understanding
            | of our own inner workings to gain a noticeable survival
            | edge, evolution would probably have taken care of that, the
            | same way it has endowed nearly all of us with at least a
            | basic survival instinct: by eliminating from the gene pool
            | most individuals who don't feel any need to eat food or
            | drink water.
           | 
           | [1]: https://news.ycombinator.com/item?id=28292418
        
           | novok wrote:
            | Newton's theory of gravity and mechanics did explain
            | things, just not as accurately as general and special
            | relativity did. But we still use Newton's formulas today as
            | a good-enough approximation in many cases.
            | 
            | I have a feeling concepts like memory and perception will
            | still be used even after we figure out the equivalent of
            | general relativity for the brain, especially since they are
            | higher-level summary phenomena, like personality, or
            | high-level languages vs. asm.
        
           | civilized wrote:
           | Except the four element theory was rendered irrelevant by the
           | germ theory. I doubt that understanding the brain will render
           | memory and perception irrelevant.
           | 
           | Robots literally have memory and perception (we know because
           | we built them), so clearly these are real phenomena that
           | exist in the real world.
           | 
           | It seems to me unlikely that we are totally mistaken in
           | conceptualizing ourselves in terms of these known real
           | phenomena.
        
             | mistermann wrote:
              | My intuition is that it isn't totally mistaken, more that
              | it is a massive simplification.
              | 
              | We describe the brain as having "memory", as if that were
              | something statically stored and _simply_ "retrieved"
              | (somehow). This overlooks that the device doing the
              | retrieving is also running an entire virtual model of
              | simulated reality, which can not only _remember_ things
              | but replay them, change variables and play them again,
              | play them in reverse, manufacture completely fictional
              | scenarios, run (as its default) a custom modified
              | variation of "actual" reality that it finds more pleasing
              | (this one has gotten lots of news coverage in the last
              | few years), read the contents of realities running inside
              | other minds (tens of millions of them if it so chooses),
              | see into the future of "actual" reality, and all sorts of
              | other things.
              | 
              | "Brains have memory" is _a bit of an understatement_ of
              | what they actually do.
        
               | civilized wrote:
               | > "Brains have memory" is a bit of an understatement of
               | what they actually do.
               | 
               | ...but, I don't think anyone thinks that "brains have
               | memory" is a complete statement of what they actually do.
               | It's not a complete statement of what computers do
               | either.
        
             | Nav_Panel wrote:
             | Four element theory is on approximately the same
             | epistemological plane as modern personality psychology. In
             | some ways it's more useful than psychology, because it has
             | a straightforward logic in terms of metaphor and
             | relationships between parts, whereas psychology is a mess.
             | Of course, it has very little to do with how the brain
             | works, but psychology also has very little to do with how
             | the brain works (it's more about the use of language to
             | describe and regulate behavior), so it's not a big deal.
        
             | mrtksn wrote:
             | We build robots the way that we think things are supposed
             | to work. It may turn out that our design is fundamentally
             | limited.
        
         | jraph wrote:
          | I was annoyed by the phrasing of the submission, which could
          | have been "The way the brain thinks is counter-intuitive"
          | (don't assume I don't know what you are going to tell me, or,
          | conversely, that I spent even one second thinking about how
          | the brain thinks), but then we probably would not have had
          | the pleasure of reading your comment under this title.
        
           | jsight wrote:
           | I'm glad I'm not the only one annoyed by the "it doesn't work
           | the way _you_ think" phraseology. How dare they assume how I
           | think? :)
           | 
            | It's a pet peeve, I guess.
        
           | darepublic wrote:
           | The title of this post somewhat reminds me of a jira task
           | title
        
             | jraph wrote:
             | You are speaking about the editorialized title "Mental
             | Phenomena Don't Map Into the Brain as Expected", right?
             | Well, yes, I would have titled a bug report exactly this
             | way now that you say it.
             | 
             | For reference, the original title is "The Brain Doesn't
             | Think the Way You Think It Does", this is the one I reacted
             | to.
        
       | techbio wrote:
        | The article begins with an analogy to cartography, in which the
        | First Law of Geography[1], "everything is related to everything
        | else, but near things are more related than distant things,"
        | applies to the space defined by the surface of the Earth. The
        | brain is smaller than a pixel in a typical satellite photo but
        | contains the information to build the satellite, put it into
        | orbit, and communicate with it via radio. Why would anyone
        | believe that analogy should hold? Having read Damasio[2] ages
        | ago, this appears to be looking for keys under a streetlamp
        | simply because the light is better.
       | 
       | [1]
       | https://en.m.wikipedia.org/wiki/Tobler%27s_first_law_of_geog...
       | 
       | [2]
       | https://www.goodreads.com/en/book/show/125777.The_Feeling_of...
        
       | Giorgi wrote:
        | The brain is interested in how it thinks, so it made me click.
        
       | Invictus0 wrote:
       | > Recent work has found, for instance, that two-thirds of the
       | brain is involved in simple eye movements; meanwhile, half of the
       | brain gets activated during respiration.
       | 
       | Talk about misleading... obviously these two tasks alone don't
       | account for 116% of the brain's power. The neocortex is all the
       | same part of the brain--yes it can loosely be delineated into
       | segments, but it is pretty pointless to say the whole brain is
       | involved when really the neocortex is just distributing its
       | inputs along its length in the process of searching for the
       | appropriate cortical columns.
        
         | philipov wrote:
          | It's only misleading if you're stuck in a reductionist
          | mindset where every piece has only one job and combines
          | linearly with other pieces.
        
         | simonh wrote:
         | You have made it misleading by taking it out of the context of
         | the article which explains how that works.
        
         | SketchySeaBeast wrote:
         | I'm confused. That's what the article is about. The quote you
         | chose is 1/4 of the way in and then they spend the rest of the
         | article explaining why those details are misleading. I don't
         | understand what you're objecting to.
        
           | coldtea wrote:
            | Parent is objecting to patient reading
        
       | peter303 wrote:
       | Much of current cognitive science is neo-phrenology: geography =
        | function. This article shows that's too simple.
        
       | DiabloD3 wrote:
       | The brain is a piece of soggy bacon that lives in a shell of
       | bone, has its own membrane that is like the gut lining, cleans
       | itself by power-washing itself (and shorting itself out,
       | essentially; REM sleep is basically the side effect of this
       | process), depends on glucose for its special mitochondria[1], and
       | your humanity is literally a thin layer of meat-paint on top of
       | millions of years of evolution...
       | 
       | ... that is doing scientific research on itself, and then reading
       | it and understanding the strange symbols on the screen.
       | 
        | [1]: All your organs have unique metabolic signatures, but they
        | can generally all run purely on fatty acids; the brain requires
        | about 30-40g of dietary glucose to run optimally, as fatty acids
        | do not cross the BBB quickly enough.
        
         | codeflo wrote:
         | Something something diagonalization proof something something.
        
           | tudorw wrote:
           | how much?
        
         | amanaplanacanal wrote:
          | It doesn't require dietary glucose; the liver will manufacture
          | it as needed from fats and proteins.
        
           | DiabloD3 wrote:
            | The upper limit of what is considered healthy
            | gluconeogenesis falls slightly short of what the brain
            | needs. The brain needs ~120g of glucose; gluconeogenesis
            | leaves you about 30-40g short of that.
            | 
            | 30-40g is almost nothing; it could be, say, a cup of frozen
            | berry mix stirred into 3/4 of a cup of plain Greek yogurt,
            | with some turmeric, ginger, and cinnamon mixed in.
        
           | davikrr wrote:
            | It can also subsist on ketone bodies, if fat breakdown is
            | high enough due to a lack of glucose from fasting or from
            | depleted stores such as glycogen.
        
           | kiba wrote:
            | Also, it's possible to subsist and survive without food
            | for quite a while.
        
         | remir wrote:
         | To me this is the most interesting thing. A bunch of organic
         | matter evolved to the point where it is trying to understand
         | itself. How crazy is that?
        
           | techbio wrote:
           | I know your ending question is a figure of speech, but since
           | we are imagining it simultaneously, it's more or less by
           | definition not crazy at all.
        
         | hypertele-Xii wrote:
         | Poetry for nerds. (Thank you)
        
       | joe_the_user wrote:
        | If you were to look at how a computer's memory is organized, a
        | rough, non-programmer idea of how it would map wouldn't be
        | correct. Big visible things like windows or the start menu
        | aren't put in a single place, etc.
        | 
        | It seems remarkable that people don't want to imagine there is
        | the rough equivalent of many, many software layers between the
        | level of the neuron and the level of "reason", "emotions",
        | "self", "consciousness", etc. I think this comes about because
        | the "sense of self" is a process that reflexively takes you as
        | primary and indivisible. A framework that views the self as
        | generated by lower-level processes violates this.
        
         | robwwilliams wrote:
         | You are exactly right. I'm a full time geek neurogeneticist. I
         | find most neuroscience models of brain function too neat and
         | simple. My pinned tweet at @robwilliamsiii is one idea that
         | hackers and CS types may enjoy. In brief--where is the clock?
          | Where are the many levels of the stack that you mention?
        
         | moonchrome wrote:
         | >Big visible things like windows or the start menu aren't put
         | in a single place, etc.
         | 
          | Sure they are - in fact they're rendered to linear memory
          | blocks of pixels.
        
           | jvanderbot wrote:
           | OK, let's strain this analogy even more!
           | 
           | In memory, sure, the image is in a block (although not
           | really, it's composited (not composted) then shipped over a
           | wire).
           | 
           | But the functionality is basically everywhere. Similar in
           | minds. We can see images "trip" localized circuits when they
           | are recognized, but the comprehension and processing of the
           | scene is muuuuuch more complicated.
           | 
           | Similarly, where is the little 8bit block that handles a
           | click on the start button, and why God Almighty is it so
           | _far_ from the start button image!
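            | 
            | A toy way to see that split (a hypothetical sketch,
            | assuming numpy; the addresses are only illustrative):
            | 
            |     import numpy as np
            | 
            |     # the window's *pixels*: one linear block of memory
            |     pixels = np.zeros((1080, 1920, 3), dtype=np.uint8)
            | 
            |     # the window's *behavior*: an object living somewhere
            |     # else entirely in the address space
            |     def on_start_click():
            |         print("menu opened")
            | 
            |     print(hex(pixels.ctypes.data))  # the framebuffer block
            |     print(hex(id(on_start_click)))  # the handler, far away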
        
             | waterhouse wrote:
             | > it's composted
             | 
             | (This is an amusing idea, but I think "composited" was the
             | word.)
        
               | jvanderbot wrote:
               | Ha! Thank you
        
         | acituan wrote:
         | > I think this comes because the "sense of self" is a process
         | that reflexively takes you as primary and indivisible
         | 
         | Only if you reduce the sense of self to ego function, and a
         | western one at that.
         | 
          | Self is a _much_ more complicated process. And good models of
          | it already acknowledge functional and content heterogeneity,
          | e.g. subpersonalities, embodiment, aspirations, narratives,
          | personhood, boundary problems, etc. Self exists because it
          | solves these problems and doing so was adaptive.
         | 
          | > It seems remarkable that people don't want to imagine there
          | is the rough equivalent of many, many software layers between
          | the level of the neuron and the level of "reason", "emotions",
          | "self", "consciousness", etc.
         | 
         | This is a false dichotomy. We already know we have a grab bag
         | of specialized accelerator "hardware", but also soft/firm-ware
          | layers that glue things together. That's the whole point of
          | the article: you can't encode autobiographical memories
          | without the hippocampus, but that's not the only thing the
          | hippocampus does, nor is it the only thing required to encode
          | those memories.
         | 
         | And people have already upped the ante on this; Cartesian
         | reductionism of trapping "computation" upstairs is also wrong.
         | Cognition requires an embodied and embedded agent. It is not
         | even a mere "brain-thing".
        
         | mabub24 wrote:
          | Or, you know, the analogy of the brain as a computer with
          | computer-like memory management simply might not be 100%
          | useful all the time.
         | 
         | > people don't want to imagine there are the rough equivalent
         | of many, many software layers between the level of the neuron
         | and the level of "reason", "emotions", "self", "consciousness"
         | and etc
         | 
          | It's not that people don't want to imagine it, it's that you
          | don't _need_ to imagine that. You don't need to force how the
          | brain functions into the model of a modern computer, though it
          | might simplify things and be popular. It's often a helpful
          | analogy/metaphor, and for people who know a lot about
          | computers it often allows them to prognosticate and theorize
          | using elaborate analogies built on top of more analogies, but
          | it's equally helpful not to insist that the thing _must be_
          | the analogy.
         | 
         | For instance, when I talk to a good friend who is a
         | neuroscientist and active in the development of prosthetics
         | with neural interfaces he is far more skeptical (healthily
         | though) of the computational model than any person I speak to
         | in software engineering or the tech industry more broadly.
         | That's likely because the analogy and model can be built upon
         | with further computer analogies, which people who work with
         | computers love. If we were using an automotive model to
         | describe how the brain functions, I'm sure we would have many
         | automotive mechanics theorizing on the nature of cognition too.
        
           | jmoss20 wrote:
           | But brains do, in fact, compute things!
           | 
           | There's no reason to think that they compute things the same
           | way silicon "computers" do -- that they're arranged in a von
           | Neumann architecture, or something. But it is true that they
           | perform computation somehow, and therefore are subject to a
           | similar set of constraints, and share similar goals
           | (efficiency, persistence, etc) with modern silicon hardware.
            | (Considering the differences is also insightful; i.e., our
           | wetware is a much noisier environment than silicon, and
           | likely requires a different approach to error correction.)
           | 
           | This is a useful perspective, for reasons that GP pointed
           | out. We know of many different physical arrangements that are
            | conducive to computing: e.g., Turing machines, von Neumann
            | machines, and RNNs are all Turing complete (in principle), and
           | all look very different. So we should question our
           | assumptions about how the brain is organized. Why should we
           | think that, say, sadness "lives" in some physical location in
           | the brain? Does your email "live" somewhere on your computer?
           | (In some sense yes, in some sense, no...).
           | 
           | Is it not equally plausible that the brain implements a very
           | large RNN? And if it does, should we be surprised that if we
           | try to physically locate, say, "sadness", we might be
           | grasping at straws? In the absence of experimental evidence
           | (and even in the presence of it, if flawed assumptions are
           | driving the sorts of experiments we conduct), both seem
           | plausible to me.
           | 
            | Which is just a long-winded way to say, I think there is
            | some value in questioning these assumptions. (Not blindly
            | swallowing others', just pushing on why we have the ones we
            | do.)
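            | 
            | A minimal sketch of the RNN intuition (hypothetical,
            | assuming numpy): after even one Elman-style update, the
            | trace of an input is smeared across the whole hidden
            | state, so there is no single unit to point at and call
            | "the memory of x".
            | 
            |     import numpy as np
            | 
            |     rng = np.random.default_rng(0)
            |     n = 64                               # hidden units
            |     W = rng.normal(scale=1/np.sqrt(n), size=(n, n))
            |     U = rng.normal(size=(n, 2))          # input weights
            | 
            |     def step(h, x):
            |         # one recurrent update: the input mixes into
            |         # *every* unit of the new state
            |         return np.tanh(W @ h + U @ x)
            | 
            |     h = step(np.zeros(n), np.array([1.0, 0.0]))
            |     active = np.count_nonzero(np.abs(h) > 1e-3)
            |     print(active, "of", n, "units carry the input's trace")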
        
             | [deleted]
        
           | joe_the_user wrote:
            | _It's not that people don't want to imagine it, it's that
            | you don't need to imagine that._
            | 
            | It seems like you're hijacking the discussion to your pet
            | issue. My point isn't about how good in general the computer
            | analogy is, it really isn't. You should consider that maps
            | of brain function began quite a while ago, before the start
            | of the 20th century (though accelerated by WWI). There, the
            | analogy was the machine, and the mapping of the brain
            | followed functional units in machines. And if you consider
            | the point I make (which pretty much echoes the article),
            | it's really a counter-example. The multi-layer organization
            | of software shows a system doesn't _necessarily_ have to
            | follow naive physical functional units, especially ones we
            | naively perceive. That's it, there's nothing here _forcing_
            | the computer analogy.
        
             | robwwilliams wrote:
             | I agree with joe!
             | 
              | Neuroscientists love to reify a chunk of brain as
              | responsible for function X. They have done this for 160
              | years. Only Karl Lashley's work called "localization" into
              | question, but his work was swept aside in the Mountcastle-
              | Hubel-Wiesel era of big neuroscience.
             | 
             | Now we do toy experiments using optogenetics of a single
             | inbred strain of mouse and delude ourselves into thinking
             | that we are achieving understanding of a highly complex
             | system.
             | 
             | I've worked in this field for 40 years and we are not even
             | asking the right questions.
             | 
             | It is a pity neuroscientists do not know more about analog
             | computing. Can a neuroscientist understand an op amp?
             | Probably not.
             | 
             | To share the harsh light--can a CS expert in AI understand
             | how to get to general AI? Probably not unless, like D.
             | Hassabis, you have a solid background in neuroscience.
        
               | Retric wrote:
               | On one level that's a reasonable take it, on the other
               | simply having enough data is a prerequisite to come up
               | with the right questions. Astronomers collected literally
               | centuries of data to build up ever more complex epicenter
               | models before ellipsis became an clearly better fit to
               | the data.
               | 
               | IMO, neuroscience simply needs that foundational data and
               | current theory is largely pointless.
        
             | mabub24 wrote:
             | I was agreeing with you. And we are arguing from the same
             | side. My comment was more directed at how the meta-opinion
              | on Hacker News generally struggles to step outside of the
              | computational model, or into other possible theories of
              | mind (or memory) developed by philosophers like
              | Wittgenstein, Dennett, or Hacker; the result is usually
              | forcing the computer analogy in ways that assume a kind of
              | blunt physicalism, like the discrete parts of a computer,
              | and often nonsense, as you described.
             | often disbelief expressed at the idea that you could use
             | anything but software or computer analogies to describe how
             | the brain functions, or the mind. The assumption is so
             | strong that people do feel forced into the analogy simply
             | because they also understand computers.
        
       | brainmapper wrote:
       | Please note that merely seeing some place in the brain activate
       | in a functional MRI task does NOT necessarily mean that that
       | location is either necessary, sufficient or even involved in
       | representing information relevant to that task. Functional MRI
       | amplifies small global signals related to arousal, and if arousal
       | changes during a task then these arousal-related signals can
       | propagate over much of the brain. And even something as simple as
       | an eye movement can be correlated with global changes in arousal.
        | (A similar problem occurs with attention.) Unfortunately, many
        | of the most common modeling and analysis methods used in fMRI
        | have no way to distinguish these rather uninteresting
       | arousal-related changes from those that are actually informative
       | about task-specific processes. The bottom line is that whenever
       | you read about any fMRI result, you should ask yourself whether
       | that could be a mere artifact of changes in arousal (or
       | attention), and if so you should find out what was done to
       | address this potential confound.
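        | 
        | A minimal simulation of the confound (a hypothetical sketch,
        | assuming numpy): only 5% of voxels genuinely encode the task,
        | but because a global arousal signal tracks the task, a naive
        | correlation analysis "activates" far more of the brain.
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(0)
        |     T, V = 200, 1000             # time points, voxels
        |     task = (np.arange(T) % 40 < 20).astype(float)
        |     arousal = 0.5 * task + rng.normal(scale=0.2, size=T)
        | 
        |     truly_task = (rng.random(V) < 0.05).astype(float)
        |     gain = rng.random(V)         # arousal leaks everywhere
        |     data = (np.outer(task, truly_task)
        |             + np.outer(arousal, gain)
        |             + rng.normal(scale=0.5, size=(T, V)))
        | 
        |     # naive analysis: correlate each voxel with the task
        |     r = np.array([np.corrcoef(task, data[:, v])[0, 1]
        |                   for v in range(V)])
        |     print("fraction of voxels 'active':", np.mean(r > 0.3))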
        
         | brainmapper wrote:
          | Sorry, I should have clarified: this comment was addressed at
          | the claims in the article that most of the brain is activated
          | even for trivial tasks...
        
       | DrNuke wrote:
        | Mechanistic solutions are never meant to solve high-level
        | problems, but they very often form the fundamental bricks that
        | more advanced solutions will rely upon at a later stage.
        
         | robwwilliams wrote:
         | True. But mechanistic "solutions" can also be used as a crutch
         | that allows us to avoid asking the right hard questions. I
         | started my career as an electrophysiologist studying a big
         | chunk of the thalamus referred to as a visual information
         | "relay"nucleus. In 40 years I have never seen anyone question
         | this "relay" function seriously. I am reminded of the phrase
         | from Princess Bride: "I don't think that word means what you
         | think it means". "Relay" is a crutch to mask ignorance. No
         | collection of complex circuitry--one million neurons in this
         | case--is just a relay. It is also an important processor, but
         | probably in a domain invisible to a neuroscientist recording
         | from one or even 1000 neurons simultaneously.
         | 
         | My vote is that the "relay" is actually a timebase corrector
         | for noisy retinal input mosaics that have their own quirky
         | dynamics and temporal noise.
        
       | Borrible wrote:
       | Mindboggling how brains have a blind spot for the thought of
       | being an idea.
       | 
       | Just as mindboggling as the mind's reluctance to think of itself
       | as a brain.
        
         | mistermann wrote:
         | > Just as mindboggling as the mind's reluctance to think of
         | itself as a brain.
         | 
         | A bit of an understatement in my experience...very often, minds
         | get rather emotionally agitated when encountering the idea that
         | the reality they perceive is a representation of reality,
         | implemented by a brain (as opposed to being reality itself).
        
         | robwwilliams wrote:
         | Very D. Hofstadter ;-)
        
       | [deleted]
        
       | criddell wrote:
       | Are the neurons outside of the head considered to be part of the
       | brain?
        
         | netizen-936824 wrote:
         | Is "brain" equated to "nervous system"?
        
           | [deleted]
        
         | dspillett wrote:
         | Generally not. Which is why the nervous system of cephalopods
         | seems so alien -- their processing is performed in a more
         | distributed fashion.
         | 
         | Sometimes the brain and spinal column are considered together
         | in our case, as some of our survival reactions are governed by
         | neurons in our spine, but that does not really match how
         | cephalopods distribute their processing (something that is
         | presumably necessary, or at least helpful/efficient, for
         | controlling the flexibility of their limbs).
        
         | tgbugs wrote:
          | The top-level partonomy for the nervous system is usually as
          | follows.
          | 
          |     nervous system
          |       -> central nervous system
          |          -> brain
          |          -> spinal cord
          |       -> peripheral nervous system
        
       | [deleted]
        
       | EMM_386 wrote:
       | If anyone is interested in the "deep" topics such as
       | consciousness, materialism, quantum physics, religion, etc. I
       | highly recommend the "Closer to Truth" series.
       | 
       | https://www.youtube.com/channel/UCl9StMQ79LtEvlrskzjoYbQ
       | 
       | I recently stumbled across this and it has some fascinating
       | episodes where he interviews many top people in each field.
       | Michio Kaku, Roger Penrose, Paul Davies, religious leaders, etc
       | etc.
        
       | kens wrote:
        | An interesting paper is "Could a Neuroscientist Understand a
        | Microprocessor?". The idea is to apply neuroscience-style analysis
       | to the 6502 processor running programs such as Space Invaders and
       | see if you can find out anything interesting about the processor.
       | They tried a bunch of different approaches which gave a bunch of
       | data but essentially nothing useful about the processor's
       | structure or organization. The point is that if these techniques
       | are useless for figuring out something simple and structured like
       | the 6502, they're unlikely to give useful information about the
       | brain.
       | 
       | https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
       | (Click "download PDF", it's open access.)
        
       | robg wrote:
        | We've known this for over thirty years, first from lesion
        | studies and then from neuroimaging. For instance, we saw that
        | language didn't map to one region but depended on how language
        | was being used, with word meanings relying on sensory vs. more
        | abstract brain regions (e.g., cannon vs. cannoli vs. carve).
        | 
        | Think of features with distributed representations, patterns of
        | connectivity re-using bits in endless ways. Another
        | oversimplification is that neurons only represent 1s and 0s.
        | The true computational power lies in every state in between and
        | in the strength of the connections between processing units.
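        | 
        | A minimal sketch of that last point (hypothetical, assuming
        | numpy): a unit whose output is a graded value shaped by
        | continuous connection strengths, not a bare 1 or 0.
        | 
        |     import numpy as np
        | 
        |     def unit(x, w, b):
        |         # graded activation in (0, 1); the "computation"
        |         # lives in the continuous weights w
        |         return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
        | 
        |     x = np.array([0.2, 0.9, 0.4])   # graded inputs
        |     w = np.array([1.5, -0.7, 0.3])  # connection strengths
        |     print(unit(x, w, b=0.1))        # a value between 0 and 1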
        
       | cblconfederate wrote:
       | The original title might have been OK since the gist of the
       | article is that what we call "mental phenomena" may not map at
       | all. Just like somebody can't find "beauty" in an ANNs
       | parameters.
        
       | yboris wrote:
       | My favorite writings on the topic of consciousness are by the
       | philosopher Daniel Dennett with his books like "Consciousness
       | Explained". He provide such fun thought experiments, and brings
       | together so much science - it's a treat!
        
         | lnjarroyo wrote:
          | I've never read that; I will look into it. One of the books I
          | love is "The Mind's I" by Dennett and Hofstadter. Easily one
          | of the best books I have ever read.
        
       ___________________________________________________________________
       (page generated 2021-08-24 23:01 UTC)