[HN Gopher] A single cell slime mold makes decisions without a c...
___________________________________________________________________
A single cell slime mold makes decisions without a central nervous
system
Author : gmays
Score : 108 points
Date : 2021-02-27 16:37 UTC (6 hours ago)
(HTM) web link (www.tum.de)
(TXT) w3m dump (www.tum.de)
| fiftyfifty wrote:
| Previous studies were already zeroing in on the cytoskeleton
| (made of microtubules) as the likely place where slime molds
| stored their memories:
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4594612/
|
| The breakthrough here is that they've found the memories are
| encapsulated in the diameter of the microtubules: "Past feeding
| events are embedded in the hierarchy of tube diameters,
| specifically in the arrangement of thick and thin tubes in the
| network."
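|
| (A toy sketch in Python, not the paper's model: a made-up
| adaptation rule where each tube thickens with the nutrient flux
| it carries and slowly decays otherwise, so the pattern of thick
| and thin tubes ends up recording where the food was. The
| constants and flux values here are invented.)
|
|   GROWTH, DECAY = 0.5, 0.1
|
|   def adapt(diameters, flux):
|       # One adaptation step: growth driven by flux, slow decay.
|       return [d + GROWTH * f - DECAY * d
|               for d, f in zip(diameters, flux)]
|
|   tubes = [1.0] * 5                  # five tubes, equal at start
|   for _ in range(50):                # feeding event near tubes 0-1
|       tubes = adapt(tubes, [1.0, 0.8, 0.1, 0.1, 0.1])
|
|   print([round(d, 2) for d in tubes])  # thick tubes mark the food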
| rhyn00 wrote:
| This sort of reminds me of the book "Vehicles: Experiments in
| Synthetic Psychology" by Valentino Braitenberg. In this book the
| author starts a series of thought experiments by constructing
| small "vehicles" which drive around on a table top. The vehicles
| start with very simple behaviors, then he applies evolution (by
| vehicles falling off, or being selectively removed) while adding
| more complex behaviors until the vehicles eventually become
| intelligent.
|
| In a way the slime mold is analogous to one of the simple
| vehicles that end up becoming more intelligent through simple
| mechanisms and evolution.
|
| Book link: https://mitpress.mit.edu/books/vehicles
| tapoxi wrote:
| Highly recommend this episode of Nova:
| https://www.pbs.org/wgbh/nova/video/secret-mind-of-slime/
| iujjkfjdkkdkf wrote:
| Pour water into a maze (or create a potential difference across a
| conductor) and you'll see that as new current flows in, it doesn't
| explore every pathway but takes the correct path through.
|
| This intelligent behavior arises from the simple compulsion of
| things to reach equal potential.
|
| The slime mold experiments are cool because they connect simple
| compulsions with emergent intelligent behavior in an organism. I
| have wondered if it's the same for us, if consciousness is really
| just the sum of all our simple compulsions arising from basic
| rules - like, is water "conscious" of wanting to seek its own
| level, are ions conscious of wanting to react, etc., and together
| that makes up what human consciousness is?
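|
| (A rough toy in Python, my own and not from the article: treat
| the maze as a grid of conductive cells, fix the potential at the
| entrance and exit, and relax. The steepest potential drop ends up
| tracing the through-path, with no search logic anywhere. The maze
| layout and iteration count are arbitrary.)
|
|   maze = ["S.#.",
|           ".##.",
|           "....",
|           "#.#E"]
|   H, W = len(maze), len(maze[0])
|   # Potential: 1.0 at the source 'S', 0.0 elsewhere (incl. 'E').
|   phi = [[1.0 if ch == 'S' else 0.0 for ch in row] for row in maze]
|
|   def open_neighbours(r, c):
|       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
|           rr, cc = r + dr, c + dc
|           if 0 <= rr < H and 0 <= cc < W and maze[rr][cc] != '#':
|               yield phi[rr][cc]
|
|   for _ in range(500):                 # Jacobi relaxation sweeps
|       nxt = [row[:] for row in phi]
|       for r in range(H):
|           for c in range(W):
|               if maze[r][c] in 'SE#':  # fixed cells
|                   continue
|               nbrs = list(open_neighbours(r, c))
|               nxt[r][c] = sum(nbrs) / len(nbrs)
|       phi = nxt
|
|   for row in phi:
|       print(" ".join(f"{v:.2f}" for v in row))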
| TaupeRanger wrote:
| Well it wouldn't tell us much about consciousness per se,
| because what you're describing is an explanation of behavior,
| not the first-person experiential thing we call consciousness.
| Although I think there must at least be a correlation between
| the two. After all, when you put your hand on a burning stove,
| something somewhere goes "out of equilibrium", causing the
| reaction AND the accompanying experience of pain. It's just
| that we don't really understand why or how the latter
| accompanies the former.
| asimpletune wrote:
| Wait, um, can you explain this more or tell me what to search
| for to learn more about this?
|
| I googled "simple compulsions" already
| sidpatil wrote:
| Emergence [1] is what I thought of at first.
|
| https://en.m.wikipedia.org/wiki/Emergence
| ta1234567890 wrote:
| > and together that makes up what human consciousness is?
|
| If you believe consciousness is completely materialistic, then
| maybe it is like that.
|
| I personally think that to be conscious means to be aware of
| being aware.
|
| You could say it's a circular definition (or recursive), and it
| is, but it's the only way to define something by itself instead
| of as reference to something else.
| neatze wrote:
| Consciousness is dissimilar to awareness; in my limited
| understanding it is about feelings (experience) in themselves,
| not about being aware of being aware of feelings.
| adolph wrote:
| The Deep History of Ourselves is a book that goes from single
| cells to consciousness. Here is a review:
|
| https://www.nature.com/articles/d41586-019-02475-x
| SquirrelOnFire wrote:
| From Bacteria to Bach and Back follows a similar path and was
| a worthwhile read for a layman like myself. It emphasizes the
| evolutionary fitness of ideas as well as biological
| evolution. https://www.nature.com/articles/542030a
| cercatrova wrote:
| I don't understand the point about the water, why wouldn't it
| cover uniformly the entire maze? Assuming the maze is level
| with the ground.
|
| Edit: thanks for the clarification all, I didn't realize it was
| an entrance/exit maze; I was thinking of one sealed on all
| sides where the water can't get out.
| jfoutz wrote:
| I think if a maze has an entrance and an exit, and there's some
| surface tension at the leading edge, then when you pour water
| into the entrance it'll fill the maze like a breadth-first
| search. As soon as the water can flow freely at the exit, it'll
| drain, because there's less resistance at the exit.
|
| Pour a little water on a counter and you'll get a round area
| with little walls at the edge. If the counter isn't clean, it'll
| kinda break down where it's dirty (the surface tension doesn't
| hold up). Once it hits the edge of the counter, the surface
| tension pushes all the water over the edge. If it's a perfectly
| flat, clean surface, it'll make a perfect circle till it hits
| the edge.
|
| So the water won't fill every nook and cranny of the maze; it'll
| start a new circle at every decision point, till one of those
| circles goes over the edge.
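|
| (A minimal sketch of that breadth-first-search picture, mine and
| not something from the article: the flood front expands one ring
| of cells at a time and stops as soon as it reaches the exit. The
| maze layout is made up.)
|
|   from collections import deque
|
|   maze = ["S.#.",
|           ".##.",
|           "....",
|           "#.#E"]
|   H, W = len(maze), len(maze[0])
|   start = next((r, c) for r in range(H) for c in range(W)
|                if maze[r][c] == 'S')
|
|   frontier, seen = deque([start]), {start}
|   while frontier:
|       r, c = frontier.popleft()       # oldest cell first = BFS
|       if maze[r][c] == 'E':           # water reaches the exit
|           print("exit reached after wetting", len(seen), "cells")
|           break
|       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
|           rr, cc = r + dr, c + dc
|           if (0 <= rr < H and 0 <= cc < W and maze[rr][cc] != '#'
|                   and (rr, cc) not in seen):
|               seen.add((rr, cc))
|               frontier.append((rr, cc))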
| sgtnoodle wrote:
| It would to an extent. The idea is that the maze has an exit,
| and once the water made it to the exit, it would spill out.
| The steady spilling out of water would create a current of
| flowing water all the way back to the water's source,
| following the "solved" path of the maze. The water didn't
| intelligently solve the maze, though; rather, the solution
| emerged out of the simple but massively parallel interactions
| between collisions of atoms (i.e., "weak forces") and gravity.
| lrem wrote:
| I would leave gravity out of this.
| coryrc wrote:
| Surface tension will keep it from spreading beyond a certain
| point unless the exit is the longest path of the entire maze.
| Stratoscope wrote:
| I think OP was talking about a maze with an exit where the
| water can drain out, not a maze sealed all around the edges.
|
| It would be a fun experiment to test this with and without an
| open exit drain at the end of the maze.
| cercatrova wrote:
| Ah OK, makes sense now. I was assuming a kid's toy maze
| where it's covered on all sides. An entrance-and-exit maze
| makes a lot more sense.
| fiftyfifty wrote:
| This article says that they've found that the slime mold's
| memories are stored in tubes that form intricate networks
| inside the cell. It's interesting because neurons have lots of
| microtubules, and there has been some recent research showing
| that they may be more than just structural components of the
| neuron:
|
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3979999/
|
| Could there be a relationship between these slime mold memories
| and memories in more complex organisms?
| [deleted]
| eternalban wrote:
| I recall Roger Penrose catching flak for his microtubule
| theory of consciousness - looks like I missed his
| Orchestrated Objective Reduction theory:
|
| http://www.consciousentities.com/penrose.htm
|
| https://medium.com/awake-alive-mind/roger-penrose-on-the-
| bra...
|
| https://en.wikipedia.org/wiki/Orchestrated_objective_reducti.
| ..
|
| https://www.sciencedirect.com/science/article/pii/S157106451.
| ..
| [deleted]
| mrmonkeyman wrote:
| You call it equal potential, like it is some obvious thing, but
| what if that _is_ intelligence?
| blowski wrote:
| Like this? https://www.youtube.com/watch?v=ztOk-v8epAg
|
| (I'm a complete numpty here, so need very basic explanations!)
| AndrewKemendo wrote:
| With few exceptions, the AI research community completely
| overlooks the "rest of the body" when it comes to thinking about
| intelligent systems and focuses too much on the brain.
|
| The amount of computing going on in the peripheral nervous system
| is staggering - and when you look at HOW and WHERE this computing
| works with effectors and sensors you realize how much of
| intelligence is reliant on those systems being there.
|
| Brains are interesting - but they actually don't do all that much
| when it comes to the majority of how people interact with the
| real world, and frankly you don't need that much (physical mass
| of) brain to be intelligent.
| patmorgan23 wrote:
| This. Mind vs. body is a false dichotomy. Your mind is fully
| integrated throughout your body. Physical and mental health
| are so heavily intertwined.
| avaldeso wrote:
| [Citation needed]
| tiborsaas wrote:
| You can cite your body. Before you get offended, really,
| just examine it as a system, and try to explain how you can
| have a conscious experience without any sensory input.
|
| Even with a lame comparison to computers, the machines also
| need a lot of stuff to put a CPU to work.
| avaldeso wrote:
| > You can cite your body.
|
| Anecdotal evidence.
|
| Also, if the mind is fully integrated with the body, how do
| you explain seemingly inconsistent states that seem to work
| just fine? E.g., people with ALS, or who are quadriplegic,
| severely injured, or mutilated. If the mind can work perfectly
| without a perfectly abled body, where's this mind-body
| connection? Also, where's such a connection in a comatose
| brain with a completely functional body? Maybe I misunderstood
| what this mind-body connection is supposed to be.
| SkyPuncher wrote:
| > You can cite your body.
|
| At best, this is an anecdote.
| carapace wrote:
| Check out Levin's lab's work: "What Bodies Think About:
| Bioelectric Computation Outside the Nervous System" -
| NeurIPS 2018
|
| https://www.youtube.com/watch?v=RjD1aLm4Thg
|
| https://news.ycombinator.com/item?id=18736698
|
| In short, the biomolecular machinery that neurons use to
| think is present in all cells.
| Teever wrote:
| This may be the case but a quad amputee is still able to
| form and recall memories as well as tell jokes and sing
| songs.
| ErikVandeWater wrote:
| I think the second sentence is opinion, not something that
| could be objectively tested. Last sentence is mostly true.
| Sick people are much less happy than when they are healthy.
| ravi-delia wrote:
| And yet I can go a-chopping anywhere but the brain without
| cognitive deficit, but even a little scraping of the cortex
| has a notable effect. The brain is fed and maintained by the
| body, and as such is vulnerable to the body's failures, but
| such a connection doesn't exactly break down the difference
| between mind and body.
| IdiocyInAction wrote:
| How do you suggest that AI research should incorporate that?
| Most modern AI research isn't even brain-inspired anymore; the
| origins of ANNs are brain-inspired, but most SOTA approaches
| don't really seem to be.
| [deleted]
| 01100011 wrote:
| Sure but think about what happened. 50+ years ago, researchers
| figured out some aspects of a neuron, simulated a network of
| grossly simplified neurons, and found out they could do useful
| things. Much of modern NN stuff is just following that
| trajectory.
|
| I don't think many people seriously believe that artificial
| neurons are in any way comparable to a real neuron, much less
| believe that an ANN is comparable to what goes on in the human
| body. Maybe in some very limited cases like the visual cortex,
| but even then I think most people would admit that it's a poor
| model valid only to a 1st approximation.
|
| That said, there is still merit in pushing the current approach
| further while other researchers continue to try to understand
| how biology implements intelligence and consciousness.
| jtsiskin wrote:
| For most AI tasks that I think of - image labeling, NLP - the
| majority of the processing happens in the brain? Do we process
| language in our peripheral nervous system?
| _Microft wrote:
| Edge enhancement already happens at the retina, for example
| through clever combinations of inputs from different
| photoreceptor cells.
| ravi-delia wrote:
| But it's also pretty obviously just a convolution, so not
| exactly a big unknown. It's super neat, and it makes sense
| that it would be in the eye, but at the end of the day the
| interesting processing is done in the brain.
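|
| (For illustration, my own minimal sketch of that point:
| convolving an image with a small centre-minus-surround kernel
| enhances edges, roughly the computation the retinal wiring is
| credited with. The image and kernel here are invented.)
|
|   import numpy as np
|
|   image = np.zeros((6, 6))
|   image[:, 3:] = 1.0                  # a vertical light/dark edge
|
|   kernel = np.array([[ 0, -1,  0],
|                      [-1,  4, -1],
|                      [ 0, -1,  0]])   # centre minus surround
|
|   out = np.zeros((4, 4))              # "valid" convolution output
|   for r in range(4):
|       for c in range(4):
|           out[r, c] = np.sum(image[r:r+3, c:c+3] * kernel)
|
|   print(out)  # nonzero responses line up along the edge columns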
| peignoir wrote:
| Reminds me of the book Wetware, describing similar behavior
| Barrin92 wrote:
| Great book on the topic is _Wetware: A Computer in Every Living
| Cell_. It really does a lot to show the complexity and amount of
| work that is done within every single cell purely at a mechanical
| or chemical level, and has made me a lot more skeptical about the
| reductionism that is common today in a lot of AI-related fields.
| szhu wrote:
| This doesn't feel as shocking or startling to me as it probably
| does for many.
|
| A slime mold is a collection of adjacent cells without a
| hierarchy that can act together to make decisions. Our brain is
| also a collection of adjacent cells that can act together to make
| decisions.
|
| They're fundamentally the same thing. It seems like people are
| shocked primarily because we arbitrarily defined a notion of
| certain collections of cells being an "organism", and a slime
| mold doesn't fit within this ontology.
| lisper wrote:
| Meh. A thermostat makes decisions without a central nervous
| system too.
|
| But the actual title of the article is "A memory without a
| brain", which is much more interesting. A better rewrite of
| the title would be "A single-cell slime mold can remember the
| locations of food sources", which is actually pretty cool.
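|
| (The thermostat point, as a tiny sketch of my own: the whole
| "decision" is a comparison with a bit of hysteresis, no nervous
| system required. The setpoint and band are arbitrary.)
|
|   def thermostat(temp, heating, setpoint=20.0, band=0.5):
|       """Return True to run the heater, False to switch it off."""
|       if temp < setpoint - band:
|           return True
|       if temp > setpoint + band:
|           return False
|       return heating    # inside the dead band: keep current state
|
|   print(thermostat(18.9, heating=False))  # True: turn the heat on
|   print(thermostat(20.2, heating=True))   # True: stay on in band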
| [deleted]
___________________________________________________________________
(page generated 2021-02-27 23:00 UTC)