[HN Gopher] The brain 'rotates' memories to save them from new s...
       ___________________________________________________________________
        
       The brain 'rotates' memories to save them from new sensations
        
       Author : jnord
       Score  : 385 points
       Date   : 2021-04-16 06:04 UTC (2 days ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | wizzwizz4 wrote:
       | > _The work could help reconcile two sides of an ongoing debate
       | about whether short-term memories are maintained through
       | constant, persistent representations or through dynamic neural
       | codes that change over time. Instead of coming down on one side
       | or the other, "our results show that basically they were both
       | right," Buschman said, with stable neurons achieving the former
       | and switching neurons the latter. The combination of processes is
       | useful because "it actually helps with preventing interference
       | and doing this orthogonal rotation."_
       | 
       | This sounds like the early conservation of momentum /
       | conservation of energy debates. (Not that they used those words
       | back then.)
        
       | jakear wrote:
       | > And yet those memories can't be allowed to intrude on our
       | perception of the present, or to be randomly rewritten by new
       | experiences.
       | 
       | Assumes facts not in evidence? I feel it's incredibly common for
       | memories to intrude on perception of present, and be rewritten by
       | new experiences.
        
       | lupire wrote:
       | Abstract is mostly readable to a technically person:
       | 
       | https://www.nature.com/articles/s41593-021-00821-9
        
         | ThePowerOfDirge wrote:
         | I am technically person.
        
           | alexanderdmitri wrote:
           | You are technically readable words of a mostly abstract
           | person.
        
       | throwaway189262 wrote:
       | > They had the animals passively listen to sequences of four
       | chords over and over again, in what Buschman dubbed "the worst
       | concert ever."
       | 
       | Hahahaha!
        
       | trott wrote:
       | Something to keep in mind though is that in a high-dimensional
       | space, approximate orthogonality of independent vectors is almost
       | guaranteed.
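The near-orthogonality claim is easy to sanity-check numerically: sample pairs of random vectors and watch the average |cosine similarity| shrink as the dimension grows. A minimal NumPy sketch (my illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=1000):
    """Average |cosine similarity| over random vector pairs in R^dim."""
    a = rng.standard_normal((n_pairs, dim))
    b = rng.standard_normal((n_pairs, dim))
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.mean(np.abs(cos)))

# Mean |cos| falls off roughly like 1/sqrt(dim), so independent
# vectors in high dimensions are nearly orthogonal by default.
for dim in (3, 30, 300, 3000):
    print(dim, mean_abs_cosine(dim))
```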
        
         | iandanforth wrote:
         | No? If the samples are randomly chosen then you'd expect the
          | cosine similarity to be low, but there's no such assumption
         | here, in fact it's the exact opposite.
        
         | filoeleven wrote:
         | Can you say a bit more on what that means in this context?
        
           | FigmentEngine wrote:
           | probably a reference to the curse of dimensionality
        
           | fighterpilot wrote:
           | Not sure about the neuroscience context, but if you have two
           | large ("high-dimensional") vectors of variables that have a
           | population correlation of zero ("independent"), then the dot
           | product of a sample is likely to be close to zero
           | ("orthogonal") due to the law of large numbers.
        
         | adampk wrote:
         | Do you mean to say that the neurons in the brain are operating
         | in a higher-dimensional space than 3?
        
           | mrbungie wrote:
           | Yes, just as a set of 1000-levers (arbitrary number, but
           | highly dimensional) can influence a machine (in our 3d
           | reality).
        
           | frisco wrote:
           | Yes definitely. Here the "space" doesn't refer to physical
            | space, but to an abstract vector space that a neuron's tuning
           | represents. For example, there is a famous paper[1] that
           | showed neurons could be responsive to abstract concepts --
           | for example, one might fire for "Bill Clinton" regardless of
           | whether the stimulus is a photo of him, his name written as
           | letters, or even (with weaker activation) photos/text of
           | other members of his family or other concepts adjacent to
           | him. The neuron's activity gives a vector in this high
           | dimensional concept space, and that's the "space" GP is
           | referring to.
           | 
           | [1] https://www.nature.com/articles/nature03687
        
             | mapt wrote:
             | Wouldn't it be especially inelegant/inefficient to try and
             | wire synapses for, say, a seven-dimensional cross-
              | referencing system, when you have to actually physically
              | locate the synapses for this system in three-dimensional
              | space?
             | 
             | (and when the neocortex that does most of the processing
             | with this data is actually closer to a very thin, almost
             | two-dimensional manifold wrapped around the sulci)
             | 
             | There has to be an information-theory connection between
             | the physical form and the dimensionality of the memory
             | lookup, even if they aren't referring to precisely the same
             | thing, right?
        
               | riwsky wrote:
               | The issue with your question is that the dimensions of
               | the configuration space and the physical form aren't even
               | _approximately_ the same thing. Take, for example, a
               | 100x100 grayscale image. It's a flat image; the physical
               | dimensions are 2. There are 10,000 different pixels
               | though, and they are all allowed to vary independently of
               | each other; the configuration-space dimensions are
               | 10,000. Neurons are like the pixels in this analogy, not
                | like the physical dimensions.
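The pixel analogy maps directly onto code: the image's physical shape is two-dimensional, but its configuration is a single point in a 10,000-dimensional space. A tiny NumPy illustration of the comment's example (mine, not the commenter's):

```python
import numpy as np

# A 100x100 grayscale image: 2 physical dimensions, 10,000 pixels,
# each free to vary independently of the others.
image = np.random.default_rng(0).random((100, 100))

# As a point in configuration space it is a 10,000-dimensional vector.
point = image.ravel()
print(image.shape, point.shape)  # (100, 100) (10000,)
```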
        
               | posterboy wrote:
               | Neurons are not random access. The analogy is otherwise
               | pretty apt, except that an image doesn't store
               | information about what it displays and I don't mean EXIF.
        
               | mapt wrote:
               | Neurons aren't allowed to vary independently of each
               | other, and neither are pixels; A grayscale image with
               | random pixels is static, not even recognizable as an
               | image. The mind cannot decode those pixels in a seven-
               | dimensional indexing scheme, it can't even decode them in
               | the given two dimensions if you have an array size error
               | and store the same data in an array 87 columns wide. In
               | your analogy, if you put a stop sign into the upper right
               | side of the image, that is always going to be recalled
                | in association with the green caterpillar you put in the
               | lower left side of the image. These properties don't work
               | so well for memories & imperfect/error-prone but
               | statistically correct biological systems.
               | 
               | The average neuron has 1000 synapses, and for geometric
               | reasons (Synaptic connections take up space) most of
               | those are to other neurons that aren't very far away in
               | 3D space.
        
               | true_religion wrote:
               | Maybe, but my guess would be that there's a trade off
               | made here. Either you can use higher dimensionality in
               | the abstract, or you can have a much much bigger brain. A
               | bigger brain processes slower merely because of volume
               | and requires a lot more resources to support it.
               | 
               | Nature stumbled onto the path that it did because we
                | don't have nutrient-dense enough food or fast enough
               | neurons.
        
             | PullJosh wrote:
             | Can I get an ELI5 on how physical neurons, stuck in a
             | measly 3 dimensions, can possibly form higher-dimensional
             | connections on a large scale?
             | 
             | I understand higher dimensional connections in theory (such
             | as in an abstract representation of neurons within a
             | computer), but I can't imagine how more highly-connected
             | neurons could all physically fit together in meat space.
        
               | austinjp wrote:
               | This is fun, I'm enjoying reading the replies :) I'm
               | certainly no expert, but attempting an explanation helps
               | me exercise my personal understanding, so here goes.
               | Corrections welcome.
               | 
               | The "connections" you mention aren't the issue, in my
               | understanding of the biology. Neurons are already very
               | strongly interconnected by numerous synapses, so they
               | already _do_ physically fit together in their available
               | 3D space, and appear capable of representing high-
               | dimensional concepts. (See caveat below.)
               | 
               | The "higher dimensions" here are not where the neurons
               | exist, only what they're capable of representing. If we
               | think about a representation of the concept of a "dog"
               | for example, there are many dimensions. Size, colour,
               | breed, temperament, barking, growling, panting, etc etc.
               | Those attributes are dimensions.
               | 
               | Take two dog attributes: size and breed. You can plot a
               | graph of dogs, each dog being a mark on the graph of size
               | vs breed. Add a third dimension and turn the graph into a
               | cube: temperament. You can probably imagine plotting dogs
               | inside this three dimensional space.
               | 
               | It's very difficult to imagine that graph extending into
               | 4th, 5th or further dimensions. And yet, you can easily
               | imagine, say, a dog that's a large, black, friendly
               | Labrador with a deep bark who growls only rarely. We
               | could say that dog can be represented as a point in
               | 6-dimensional space (or perhaps a 6-dimensional slice
               | through a space with even more dimensions, just a slice
               | through 3D space could produce a 2D graph).
               | 
               | The number of connections between neurons may be related
               | to the number of dimensions they can represent. In
               | honesty, I don't know, and I guess that if there _is_ a
               | relationship it may not be linear. So neurons might be
               | capable of representing 4 dimensions with fewer than 4
                | synapses, for example, I don't know. Seems possible to
               | me, though.
               | 
               | Caveat: I think my reasoning here may be fallacious: "the
               | fact that neurons are capable of representing high-
               | dimension concepts demonstrates that they have adequate
               | synapses to do so". It seems akin to anthropocentrism,
               | I'm not sure. Perhaps it's just a circular argument. I
               | think it provides an adequate basis for an ELI5 though.
               | 
               | I look forward to further comments!
        
               | ww520 wrote:
               | The vector here refers to the "feature vector" where the
               | dimension is the number of elements in the vector. E.g. a
               | feature vector of [size, length, width, height, color,
               | shape, smell] has 7 dimensions. A feature vector for the
               | space has 3 dimensions [x, y, z]. The term "higher
               | dimension" just means the number of features encoded in
               | the vector is higher than usual.
               | 
               | In the context of neurons, while the neurons are in the 3
               | spatial dimensions, the connections of each neuron can be
               | encoded in a feature vector. Each connection can
               | specialize on one feature, e.g. the hair color of the
               | person. These connection features can be encoded in a
               | vector. The number of connections becomes the dimension
               | of the vector. Not to be confused with the physical 3D
               | spatial dimensions of the neurons.
               | 
               | The nice thing about encoding things in vectors is that
               | you can use generic math to manipulate them. E.g.
               | rotation mentioned in this article, orthogonality of
               | vectors implies they have no overlap, or dot product of
               | vectors measures how "similar" they are. Apparently this
               | article shows that different versions of the sensory data
               | encoded in neurons can be rotated just like vector
               | rotation so that they are orthogonal and won't interfere
               | with each other.
               | 
               | Linear algebra usually deals with 2 or 3 dimensions.
               | Geometric algebra works better on higher dimension
               | vectors.
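The "rotate until orthogonal" idea above can be sketched with plain linear algebra. The projection step below (a Gram-Schmidt step) is my illustration of the vector math, not the mechanism the paper proposes:

```python
import numpy as np

# Two overlapping "representations" in a 7-dimensional feature space.
old = np.array([1.0, 0.5, 0.0, 2.0, 0.0, 1.0, 0.5])
new = np.array([0.5, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0])

print(old @ new)  # 3.0 -- nonzero dot product, so they interfere

# Gram-Schmidt step: remove the component of `new` along `old`,
# leaving a vector orthogonal to the old representation.
new_orth = new - (old @ new) / (old @ old) * old
print(abs(old @ new_orth) < 1e-9)  # True -- orthogonal, no interference
```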
        
               | posterboy wrote:
               | the ELI 5 of higher dimensions explained mathematically
               | in text is that a coordinate in R^3 is identified
                | uniquely by a three-tuple u = (x, y, z). A four-tuple
               | simply adds one dimension. That might be a time
               | coordinate, color, etc.
               | 
               | If I remember correctly, the integers Z form spaces, too.
                | Z^2 can be illustrated as a grid, where every node is
                | uniquely identified again by coordinates or by two of
                | its neighbours, either way v = (a, b).
               | 
                | Adjacency lists or index matrices are common ways to
                | encode graphs. My model of a neuron network is then a
                | graph.
               | 
                | I imagine that, since neurons have many more synapses,
               | that's how you get a manifold with many more coordinates.
               | 
                | Each neuron stores an action potential, much like the
                | color of a pixel, and its state evolves over time, but
                | that's when the model becomes limited.
               | 
               | How it actually represents complex information in this
               | structure I don't know.
               | 
               | PS: Or very simply put, physics has more than three
               | dimensions.
        
               | dboreham wrote:
               | Same as a silicon chip stuck in 2 dimensions can.
        
               | Salgat wrote:
               | Don't conflate physical and logical, in this case we
               | don't care about the physical dimensions, only how the
               | logic is expressed. Even a 2D function can be expressed
               | in N-dimensional parameters, such as
               | 
               | y = a1 * x + a2 * x^2 + a3 * x^3 + a4 * x^4
               | 
               | where you only have one input and one output, but 4
               | constants that can be adjusted. These 4 constants make up
               | a 4D vector.
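The four-constant example above in runnable form (the names a1..a4 collapsed into a coefficient vector `a`; my sketch):

```python
import numpy as np

# One input, one output, but a 4-dimensional parameter vector.
a = np.array([2.0, -1.0, 0.5, 0.25])  # (a1, a2, a3, a4)

def f(x, a):
    # y = a1*x + a2*x^2 + a3*x^3 + a4*x^4
    return a @ np.array([x, x**2, x**3, x**4])

print(f(2.0, a))  # 2*2 - 1*4 + 0.5*8 + 0.25*16 = 8.0
```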
        
               | [deleted]
        
               | ajuc wrote:
               | > Can I get an ELI5 on how physical neurons, stuck in a
               | measly 3 dimensions, can possibly form higher-dimensional
               | connections on a large scale?
               | 
               | You can multiplex in frequency and time. I'm not sure if
               | neurons do it, but it's certainly possible with computer
               | networks.
        
               | wyager wrote:
               | Your stick of RAM is also stuck in 3 dimensions but it
               | reifies a, say, 32-billion-dimensional vector over Z/2Z.
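The RAM picture is concrete: each bit is one coordinate of a vector over Z/2Z, and XOR is vector addition. A two-byte toy version (my sketch, obviously far short of 32 billion dimensions):

```python
import numpy as np

# Two bytes of "RAM" as a 16-dimensional vector over Z/2Z.
ram = np.array([0b1010, 0b0110], dtype=np.uint8)
bits = np.unpackbits(ram)
print(bits.shape)  # (16,)

# Vector addition mod 2 is exactly bitwise XOR.
other = np.unpackbits(np.array([0b1100, 0b0011], dtype=np.uint8))
summed = (bits + other) % 2
print(np.packbits(summed))  # [6 5] == [0b1010 ^ 0b1100, 0b0110 ^ 0b0011]
```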
        
               | CuriouslyC wrote:
               | If you take a matrix of covariance or similarity between
               | neurons based on firing pattern, and try to reduce it to
               | the sum of a weighted set of vectors, the number of
               | vectors you would need to accurately model the system
               | gives you the dimensionality of the space.
        
               | fao_ wrote:
               | This does not seem particularly like an "Explain Like I'm
               | 5"-parsable comment that the posted asked for.
        
               | dogma1138 wrote:
               | This isn't about the 3 dimensional structure the neurons
               | occupy, but about their operational degrees of freedom.
               | 
                | Think about how a CNC machine works: you can have CNCs
                | with more than 3 axes. For example, a 4-axis CNC machine
                | can move left/right, up/down, backwards/forwards, and
                | also has another axis which can rotate in a given plane.
               | 
               | From a more mathematical perspective just think about the
               | number of parameters in a system (excluding reduction)
               | each parameter would be a dimension.
        
               | anu7df wrote:
               | Appreciate the attempt, but in this example the 4th axis
               | is not independent since the motion along that axis can
               | be achieved, with some complexity, by the motion along
               | the other axes. Granted this is not very useful for a
               | machinist because it will be very tedious to machine a
               | part this way compared to the dedicated 4th rotating
               | axis, but mathematically it is redundant.
               | 
                | I have found it easiest to think of logical dimensions
               | or configurations when thinking of higher dimensions.
               | Physically it can be a row of bulbs (lighted or not)
                | wherein N bulbs (dimensions) can represent 2^N states in
               | total. The 2 here can be increased by having bulbs that
               | can light up in many colours.
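The bulb counting above, enumerated directly (a trivial sketch of my own):

```python
from itertools import product

# N on/off bulbs represent 2**N states; k-colour bulbs give k**N.
N = 4
states = list(product([0, 1], repeat=N))
print(len(states))  # 16 == 2**4

tricolour = list(product(["red", "green", "blue"], repeat=N))
print(len(tricolour))  # 81 == 3**4
```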
        
               | posterboy wrote:
                | It's not redundant. Without rotation it could only ever
                | drill downwards.
                | 
                | Smartphones, e.g., measure six degrees of freedom,
                | including rotation about every axis: 3 for location, 3
                | for orientation.
                | 
                | This has very little to do with synapses.
        
               | hitlerism wrote:
               | The vectors are in a configuration vector space, not a
               | physical vector space.
        
               | fao_ wrote:
               | I _roughly_ understand what the article says about
               | dimensional space (Reading higher mathematics books on
               | the way to my meagre college course way back when, helps
               | me a little, even if it is all half-remembered and a bit
                | wrong -- this understanding is sufficient to satisfy
                | _me_), however the poster above me _doesn't_, and
                | clearly asked for a definition a 5-year-old layman
               | could understand.
               | 
               | The comment I am replying to, your comment in the tree,
                | and the one next to you, do not seem to match that
               | request in any sense.
               | 
               | Now, simplified definitions are an art, but Feynman
               | managed it with Quantum Electrodynamics -- so it is not
               | impossible to do it for complex subjects. And it seems to
               | me the less you understand a subject, the less simple and
               | more confusing your explanation will be, such as the
               | explanations given by the other posters here. (fyi: I do
               | not understand enough to properly convey my understanding
               | clearly -- which is why I have not attempted to do so)
        
               | dopu wrote:
               | If I'm recording from N neurons, I'm recording from an
               | N-dimensional system. Each neuron's firing rate is an
               | axis in this space. If each neuron is maximally
               | uncorrelated from all other neurons, the system will be
               | maximally high dimensional. Its dimensionality will be N.
               | Geometrically, you can think of the state vector of the
               | system (where again, each element is the firing rate of
               | one neuron) as eventually visiting every part of this
               | N-dimensional space. Interestingly, however, neural
               | activity actually tends to be fairly low dimensional (3,
               | 4, 5 dimensional) across most experiments we've recorded
               | from. This is because neurons tend to be highly
               | correlated with each other. So the state vector of neural
               | activity doesn't actually visit every point in this high
               | dimensional space. It tends to stay in a low dimensional
               | space, or on a "manifold" within the N-dimensional space.
        
               | chadcmulligan wrote:
               | Would you have any further reading on this? Sounds
               | fascinating.
        
               | dopu wrote:
               | Agreed, it's really cool :). A lot of this is very new --
               | it's only been in the past decade and a half or so that
               | we've been able to record from large populations of
               | neurons (on the order of hundreds and up, see [0]). But
               | there are a lot of smart people working on figuring out
               | how to make sense of this data, and why we see low-
               | dimensional signals in these population recordings. Here
               | are some good reviews on the subject: [1], [2], [3], [4],
               | and [5].
               | 
                | [0]: https://stevenson.lab.uconn.edu/scaling/
                | [1]: https://www.nature.com/articles/nn.3776
                | [2]: https://doi.org/10.1016/j.conb.2015.04.003
                | [3]: https://doi.org/10.1016/j.conb.2019.02.002
                | [4]: https://arxiv.org/abs/2104.00145
                | [5]: https://doi.org/10.1016/j.neuron.2017.05.025
        
               | trott wrote:
               | I'm curious about how much of this apparent low
               | dimensionality is explained by (1) the physical proximity
               | of the neurons being recorded, (2) poverty of the stimuli
               | (just 4 sequences in this paper, if I'm not mistaken)
        
               | dopu wrote:
               | Both good questions. It could very well be that low
               | dimensionality is simply a byproduct of the fact that
               | neuroscientists train animals on such simple (i.e., low-
               | dimensional) tasks. This paper argues that [0]. As for
               | your first point, it is known that auditory cortex
               | exhibits tonotopy, such that nearby neurons in auditory
               | cortex respond to similar frequencies. But much of cortex
               | doesn't really exhibit this kind of simple organization.
               | Regardless, technological advancements are making it
               | easier for us to record from large populations of neurons
               | (as well as track behavior in 3D) while animals freely
               | move in more naturalistic environments. I think these
               | kinds of experiments will make it clearer whether low-
               | dimensional dynamics are a byproduct of simple task
               | designs.
               | 
               | [0]:
               | https://www.biorxiv.org/content/10.1101/214262v1.abstract
        
               | DavidSJ wrote:
               | This is basically just linear algebra.
               | 
               | For an abstract perspective, try Sheldon Axler's _Linear
               | Algebra Done Right_.
               | 
               | For a more concrete perspective, Gilbert Strang's
               | lectures:
               | https://www.youtube.com/playlist?list=PL49CF3715CB9EF31D
        
               | cochne wrote:
               | Consider three neurons all connected together. Now
               | consider that each of them may have some 'voltage'
               | anywhere between 0 and 1. Using three neurons you could
               | describe boxes of different shapes in three dimensions.
               | Add more and you get whatever large dimension you want.
        
               | fsociety wrote:
               | Think of it less as n-dimensional in meat space and more
               | of n-dimensional in how it functions.
        
               | [deleted]
        
               | exporectomy wrote:
               | Do you mean due to the thickness of each connection, they
               | would occupy too much space if the number of dimensions
               | was too high? Not necessarily 4 or more, just very high
               | because there are on the order of n^2 connections for n
               | neurons?
               | 
               | In the visual cortex, neurons are arranged in layers of
               | 2D sheets, so that perhaps gives an extra dimension to
               | fit connections between layers.
        
               | andyxor wrote:
               | see related talk by the first author: "Dynamic
               | representations reduce interference in short-term
               | memory": https://www.youtube.com/watch?v=uy7BUzcAenw
        
             | MereInterest wrote:
             | There was a fun article in early March showing that the
             | same is true for image recognition deep neural networks.
             | They were able to identify nodes that corresponded with
             | "Spider-Man", whether shown as a sketch, a cosplayer, or
             | text involving the word "spider".
             | 
             | https://openai.com/blog/multimodal-neurons/
        
               | andyxor wrote:
               | deep neural nets are an extension of sparse autoencoders
               | which perform nonlinear principal component analysis
               | [0,1]
               | 
               | There is evidence for sparse coding and PCA-like
               | mechanisms in the brain, e.g. in visual and olfactory
               | cortex [2,3,4,5]
               | 
               | There is no evidence though for backprop or similar
               | global error-correction as in DNN, instead biologically
               | plausible mechanisms might operate via local updates as
               | in [6,7] or similar to locality-sensitive hashing [8]
               | 
                | [0] Sparse Autoencoder
                | https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
               | 
               | [1] Eigenfaces https://en.wikipedia.org/wiki/Eigenface
               | 
               | [2] Sparse Coding
               | http://www.scholarpedia.org/article/Sparse_coding
               | 
                | [3] Sparse coding with an overcomplete basis set: A
                | strategy employed by V1?
                | https://www.sciencedirect.com/science/article/pii/S004269899...
               | 
               | [4] Researchers discover the mathematical system used by
               | the brain to organize visual objects
                | https://medicalxpress.com/news/2020-06-mathematical-brain-vi...
               | 
                | [5] Vision And Brain
                | https://www.amazon.com/Vision-Brain-Perceive-World-Press/dp/...
               | 
               | [6] Oja's rule https://en.wikipedia.org/wiki/Oja%27s_rule
               | 
               | [7] Linear Hebbian learning and PCA
               | http://www.rctn.org/bruno/psc128/PCA-hebb.pdf
               | 
               | [8] A neural algorithm for a fundamental computing
               | problem
               | https://science.sciencemag.org/content/358/6364/793
        
           | andyxor wrote:
           | Yes, grid cells in the hippocampus [0] form a coordinate
           | system that is used for 4D spatiotemporal navigation [1], as
           | well as navigation in abstract high-dimensional "concept
           | space" [2]
           | 
           | [0] http://www.scholarpedia.org/article/Grid_cells
           | 
           | [1] Time (and space) in the hippocampus
           | https://pubmed.ncbi.nlm.nih.gov/28840180/
           | 
           | [2] Organizing conceptual knowledge in humans with a gridlike
           | code: https://science.sciencemag.org/content/352/6292/1464
        
           | [deleted]
        
           | darwingr wrote:
           | Yes but only in aggregate, like how adding a column to a
           | database table is also adding a "dimension" to said data.
           | 
           | I'm not convinced the author's analogy of cross-writing to
           | fit more information on a page is actually going to be
           | helpful to most people's understanding. It led me at least to
           | try to imagine visually what's going on, to picture the input
           | being physically rotated. This is more akin to the more
           | abstract but inclusive concept of rotation from linear
           | algebra, where more dimensions (of information, not space or
            | time) make sense.
        
           | gleenn wrote:
           | If you think of groups of neurons in arbitrary dimensions,
           | where some groups fire together for some things, and a
           | different group with some overlap fire for other things, then
           | it's like two dimensions where a line is a sense or thought
           | and the lines are crossing where they fire for both memories.
           | So two thoughts along two dimensions can cross and light up
           | that subset of neurons. If the two thoughts, or lines, are
            | orthogonal, then not many neurons are firing for both
            | thoughts. If you have many, many neurons and many, many
           | memories, then the dimensionality, or possible subsets of
           | firing neurons, is huge. Like our two lines but now in three
           | dimensions, there are a lot of ways for them not to overlap.
           | So the possibility that many things in that space are
           | orthogonal is likely. In a highly dimensional space, a whole
           | lot of things don't overlap.
        
         | dopu wrote:
         | Sure, but the neural activity is actually low-dimensional (see
         | Extended Fig 5e). By day 4, the first two principal components
          | of the neural activity explain 75% of the variance in
         | response. ~3-4 dimensions is not particularly high dimensional.
        
       | ivan_ah wrote:
       | The Nature version is paywalled
       | https://www.nature.com/articles/s41593-021-00821-9
       | 
       | but I found the preprint of the paper on biorxiv.org:
       | https://www.biorxiv.org/content/10.1101/641159v1.full
        
         | tgflynn wrote:
         | The abstracts are a bit different so I'm not sure how close the
         | preprint is to the published version.
        
       | ordu wrote:
        | Curious. I cannot understand it clearly. Let's take, for
        | example, the "my wife and my mother-in-law" illusion[1]. It is
        | known for its property that one cannot see both women at once.
        | If we assume
       | that it has something to do with such a coding in neurons, would
       | it mean that those women are orthogonal, or it would mean that
       | they refuse to go orthogonal?
       | 
       | [1] https://brainycounty.com/young-or-old-woman
        
         | jamiek88 wrote:
         | Wow. That blew my tiny little mind.
         | 
         | I eventually figured out how to change it at will: if you
         | close your eyes, then open them and look at the bottom of the
         | picture first, it's an old woman. Do the reverse and it's a
         | young woman. Eventually you can do that without the
         | eye-closing step, but never would I say I could see both at
         | once.
         | 
         | Just rapidly switch.
         | 
         | Very interesting!
        
         | bserge wrote:
         | Sorry, I'm pretty tired, but I fail to see the relation to
         | this article. How does that example apply?
         | 
         | I thought that was more of a case of a human's facial
         | recognition being a special function, and we're not able to
         | process two or more people's faces at the same time. Like, see
         | the details in them, recognize that it's _their face_.
         | 
         | You're either looking at one person, or the other, but if you
         | try to look at both of them at the same time, they become
         | "blurry", unrecognizable, even though you remember all the
         | other information about them both.
         | 
         | But that's not related to memory integrity and new
         | emotions/sensations?
        
           | ordu wrote:
           | It is human visual perception at work. Somehow your mind
           | chooses how to interpret the sensations from the retina, and
           | shows you one of the women. Then your mind chooses to switch
           | interpretations and you see the other one. Both
           | interpretations are somewhere in memory. So it may be
           | connected with this research.
           | 
           | Like with the chords in the research: a mouse hears one
           | chord, and by association from memory it expects another
           | chord. But instead it hears some third chord. The expected
           | and unexpected chords have perpendicular representations, if
           | I understood correctly.
           | 
           | Here you see a picture, and expect one interpretation or the
           | other. You have a memory of both, but you get just one.
           | 
           | Possibly it doesn't apply; I do not know. I'm trying to
           | understand it. The obvious step is to make a prediction from
           | the theory: should the interpretations oscillate, if this
           | has something to do with perpendicularity of representations
           | in neurons?
           | 
           | When I hear another chord instead of the predicted one, do
           | prediction and sensation oscillate? I'm not quick enough to
           | judge based on subjective experience.
        
         | vmception wrote:
         | Wish they would outline the two variants
         | 
         | I only saw the young woman before I became uninterested in
         | making the other one happen, because why bother.
        
         | LordGrey wrote:
         | I spent 10 minutes staring at that picture and saw only the
         | wife. The mother-in-law never appeared.
         | 
         | This happens to me often.
        
           | andrewmackrodt wrote:
           | I had trouble at first too, until I noticed the ear looking
           | a little suspicious. If you create a diagonal obstruction
           | from the top of the hat to the nose, you are left with only
           | the mother-in-law; the ear has now become an eye.
           | 
           | Once I'd seen it, the mother-in-law is now prominent. I can
           | still see the wife if I consciously choose to, but the
           | mother-in-law is now the default. Strange, huh?
        
             | LordGrey wrote:
             | Thanks! That helped me finally see the mother-in-law.
             | 
             | I showed my wife the picture and she couldn't see either
             | woman until I pointed out features. Interesting!
        
         | chaps wrote:
         | Hmmm.. I tried to visualize them both at the same time.. it
         | took some effort, but quickly "oscillating" between the two
         | ended up settling (without a jittery oscillating feeling) on
         | seeing both at the same time. Maybe my brain was playing meta
         | tricks on me though?
        
           | c22 wrote:
           | I can "see" both at the same time, but only if I am not
           | focusing on either. I think this conflict of focus is the
           | real effect people are talking about.
        
         | Baeocystin wrote:
         | Really? I have no trouble seeing both at the same time. Nothing
         | special about it, the angles of their respective faces are
         | different enough that it doesn't feel like there's any
         | interference at all.
        
           | bserge wrote:
             | But do you really see both _at the same time_, or do you
             | just switch between them really fast?
        
             | mrbungie wrote:
             | I'm not really sure if I'm able to see both (or all three
             | in the case of the 'Mother, Father and Daughter' figure),
             | but at least I can switch stupidly fast.
        
             | treeman79 wrote:
             | Does it matter? My vision switches eyes every 30 seconds,
             | unless I'm wearing prism glasses. I rarely notice unless
             | I'm trying to write.
        
             | Baeocystin wrote:
             | At the exact same time. No oscillating.
        
               | [deleted]
        
       | ddmma wrote:
       | Is this blockchain?
        
       | andyxor wrote:
       | looks similar to "Near-optimal rotation of colour space by
       | zebrafish cones in vivo"
       | 
       | https://www.biorxiv.org/content/10.1101/2020.10.26.356089v1
       | 
       | "Our findings reveal that the specific spectral tunings of the
       | four cone types near optimally rotate the encoding of natural
       | daylight in a principal component analysis (PCA)-like manner to
       | yield one primary achromatic axis, two colour-opponent axes as
       | well as a secondary UV-achromatic axis for prey capture."
        
       | fighterpilot wrote:
       | I read the abstract and don't really get it. How is this
       | different from saying that a group of neurons A is responsible
       | for memory storage and a group of neurons B is responsible for
       | sensory processing, and A != B? I think I'm misunderstanding this
       | "rotation" concept.
        
         | rkp8000 wrote:
         | It's a good question. It looks like they actually specifically
         | check for this and show that it's not two separate groups of
         | neurons. Instead, a subset of the neural population changes
         | its representation of the input as it moves from sensory to
         | memory, so it's more like a single group of neurons that
         | represents current sensory and past memory information in two
         | orthogonal directions.
        
           | fighterpilot wrote:
           | So current sensory info is a vector of numbers, and past
           | memory info is a vector of numbers, and these two vectors are
           | orthogonal.
           | 
           | What are these numbers, precisely?
        
             | resonantjacket5 wrote:
             | In a simple example I can think of, it could just be a
             | vector of <present, past>; i.e., the current info could be
             | encoded like [<2, 0>, <4, 0>], then rotated (to the "y
             | axis") into [<0, 2>, <0, 4>], allowing you to write more
             | "present" data to the original x dimension without
             | overwriting the past data.
             | 
             | If you're asking about the exact numbers, here's a
             | snippet from the xlsx document (the header row, then the
             | first data row, which starts with a row index):
             | 
             |   ABC_D_mean ABC_D_se ABCD_mean ABCD_se
             |   XYC_D_mean XYC_D_se XYCD_mean XYCD_se
             |   day neuron subject time
             | 
             |   0 6.012574653 0.5990308106 6.181361381 0.5737310366
             |   6.59759636 0.6419092978 6.795648346 0.5716884524
             |   1 2 M496 -50
             | 
             | According to the article this is neural activity (mean and
             | SEM), though this is way beyond my ability to interpret.
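The rotation in that toy <present, past> example can be written out directly. A minimal sketch (my own illustration, not the paper's analysis):

```python
# Sketch: a 90-degree rotation moves the "present" codes from the x axis
# to the y axis, so new input can be written along x without overwriting
# the stored values.
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # rotate 90 degrees counterclockwise

present = np.array([[2.0, 0.0],
                    [4.0, 0.0]])     # current sensory codes on the x axis

memory = present @ R.T               # now [<0, 2>, <0, 4>]: the y axis

new_input = np.array([3.0, 0.0])     # a fresh sensation reuses the x axis
combined = memory[0] + new_input     # (3, 2): present and past coexist

print(memory)
print(combined)
```

Because the memory now lives entirely along y, reading the x component of `combined` recovers the new input and the y component recovers the stored value, with no cross-talk.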
        
             | rkp8000 wrote:
             | My simplified picture of what's going on is something like
             | this (if I'm understanding the paper correctly). Stimulus A
             | starts out represented by the vector (1,1,1,1) and B by
             | (-1,-1,-1,-1). Those are the sensory representations. Later
             | A is represented by (1,1,-1,-1) and B by (-1,-1,1,1). Those
             | are the memory representations. The last two
             | component/neurons have "switched" their selectivity and
             | rotated the encoding. The directions (1,1,1,1) and
             | (1,1,-1,-1) are orthogonal, so you can store sensory info
             | (A vs B in the present) along one and memory info (A vs B
             | in the past) along the other.
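That toy code can be verified in a few lines: because the two directions are orthogonal, projecting the population state onto each axis reads out the present and the past independently. A sketch under those toy assumptions (not the paper's actual decoding):

```python
# Sketch: (1,1,1,1) and (1,1,-1,-1) are orthogonal axes, so a single
# 4-neuron population can carry sensory and memory info simultaneously.
import numpy as np

sensory_axis = np.array([1.0, 1.0, 1.0, 1.0])
memory_axis = np.array([1.0, 1.0, -1.0, -1.0])
assert sensory_axis @ memory_axis == 0        # orthogonal directions

# Population state while remembering A (+1) and currently hearing B (-1):
state = (+1) * memory_axis + (-1) * sensory_axis

# Projections recover each piece of information without interference:
heard_now = np.sign(state @ sensory_axis)     # -1, i.e. B in the present
remembered = np.sign(state @ memory_axis)     # +1, i.e. A in the past
print(heard_now, remembered)
```

Writing a new stimulus along the sensory axis changes only the first projection, which is the sense in which the rotated memory is protected.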
        
           | o_p wrote:
           | So memory and sensory get multiplexed?
        
       | [deleted]
        
       | [deleted]
        
       | behnamoh wrote:
       | Articles on Quanta magazine have clickbait titles.
        
         | chalst wrote:
         | And yet this title seems to capture the content quite
         | adequately.
        
       | meiji163 wrote:
       | Can someone liberate the article from behind the paywall for me?
        
       | ohazi wrote:
       | I don't remember where I came across this (it was probably some
       | pop neuroscience blog, or maybe Radiolab), but there was some
       | theory about how memories seem subject to degradation when you
       | recall them a lot, and less so when you don't.
       | 
       | I guess that would sort of be like the opposite of DRAM - cells
       | maintain state when undisturbed, but the "refresh" operation is
       | lossy.
        
         | plg wrote:
         | it's the theory of re-consolidation
         | 
         | here are some references
         | 
         | https://pubmed.ncbi.nlm.nih.gov/?term=memory+reconsolidation...
        
         | [deleted]
        
         | drivers99 wrote:
         | That sounds like the kind of thing they talk about on Hidden
         | Brain (NPR). I think I found it:
         | 
         | https://www.npr.org/transcripts/788422090
         | 
         | Quote (although it's missing the context of the full show):
         | 
         | > Yeah, I think it's really interesting. I think it's really
         | interesting to think about why we do these things, why we
         | misrecollect our past, how those kinds of reconstruction errors
         | occur. And I think about it in my own personal life - I share
         | my memories with my partner. And many of us who have partners,
         | we have these sort of collaborative ways in which we recollect.
         | But those collaborations often result in my incorporating
         | information into my memories that were suggested by this
         | individual, but I never experienced. And so I might have this
         | vivid recollection of something that only my partner
         | experienced because we've shared that information so often. And
         | so that's how we can distort memories in the laboratory. We can
         | just get individuals to try and reconstruct events over and
         | over and over again. And with each reconstructive process, they
         | become more and more confident that that event has occurred.
        
         | ajuc wrote:
         | > I guess that would sort of be like the opposite of DRAM -
         | cells maintain state when undisturbed, but the "refresh"
         | operation is lossy.
         | 
         | Or like any analog data medium ever :)
        
         | mncharity wrote:
         | I'm under the anecdotal and subjective impression that I can do
         | a "brain dump" describing a recently-experienced physical
         | event. But it's a one-shot exercise. Close to read-once recall.
         | The archived magnetic 9-track tape that when read becomes a
         | take-up reel of backing and a pile of rust. The memories feel
         | like they're degrading as recalled, like beach sand eroding
         | under foot, and becoming "synthetic", made up. The dump is
         | extremely sparse and patchy. Like a limits-of-perception vision
         | experiment: "I have moderate confidence that I saw a flash
         | towards upper left". Not "I went through the door and down the
         | hall" but "low-confidence of a push with right shoulder,
         | medium-confidence passing a paper curled out from the wall at
         | waist height, and ... that's all I've got". But what shape of
         | curl? Where in the hall? You have whatever detail was
         | available around the moment you recalled it, because moments
         | later any extra information recalled starts tasting different:
         | speculative, fill-in-the-blanks, untrustworthy.
        
         | tshaddox wrote:
         | I would expect memories to _change_ more the more they are
         | recalled, just like I would expect a story to change the more
         | times it's told.
        
           | Phenomenit wrote:
           | Yeah, I'm thinking that's because our interpretation of
           | reality and its abstractions are faulty, and that filter is
           | applied every time we update the memory. Maybe, then, when
           | we are learning a new subject through, say, reading, our
           | filter is minimal, and every time we read the same info we
           | combat our faulty interpretation of reality.
        
           | ohazi wrote:
           | Yes, maybe change is a better term than degrade. The story
           | was told in terms of the details in a memory changing a lot
           | vs. remaining accurate.
        
         | sebmellen wrote:
         | How fascinating, I've experienced this myself to a large
         | degree. I have a few songs that very vividly remind me of
         | certain periods or points of my life. When I play them, I
         | always feel like I'm scratching up the vinyl surface of the
         | memory, and I lose a little bit each time. Rather disappointing
         | :(
        
         | gus_massa wrote:
         | Perhaps the Crick and Mitchison theory about why we dream:
         | https://en.wikipedia.org/wiki/Reverse_learning
         | 
         | (AFAIK it's totally wrong, but I really like it anyway. I
         | hope there is another species in the universe that uses it.)
        
       | [deleted]
        
       | User23 wrote:
       | In mice.
        
         | Jaecen wrote:
         | The experiment was on mice, but the process has been observed
         | elsewhere.
         | 
         | From the article:
         | 
         | > _This use of orthogonal coding to separate and protect
         | information in the brain has been seen before. For instance,
         | when monkeys are preparing to move, neural activity in their
         | motor cortex represents the potential movement but does so
         | orthogonally to avoid interfering with signals driving actual
         | commands to the muscles._
        
       | de6u99er wrote:
       | This makes much more sense than having secret memory cells in
       | neurons.
        
       | bernardand wrote:
       | This is basically just linear algebra.
        
       | darwingr wrote:
       | This really would have been harder for me to understand had I
       | not taken linear and abstract algebra courses a few years ago.
       | That area of maths reuses common words like "rotation" but with
       | more generalized definitions, which made it jarring and
       | confusing to hear and take in at the time. When someone said
       | the word "rotate", my mind, as if by reflex, was already trying
       | to visualize a 3D or 2D rotation even when that made no sense
       | for the problem at hand. Having been an English speaker my
       | whole life, I thought I understood what a rotation was or could
       | be, but I didn't.
       | 
       | Same goes for what's being alleged here: Is there even a way to
       | visualize this that makes mathematical sense? What will be the
       | corollaries to this discovery simply as a result of what the
       | mathematics of rotations will dictate?
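One concrete way to see the generalized meaning of "rotation": a Givens rotation acts within a single 2D plane of an n-dimensional space and leaves every other direction fixed, while preserving all lengths and angles. A sketch (my own illustration, not tied to the paper):

```python
# Sketch: a Givens rotation generalizes the familiar 2D rotation to any
# chosen plane of R^n; components outside that plane are untouched.
import numpy as np

def givens(n, i, j, theta):
    """Rotation by theta in the plane spanned by axes i and j of R^n."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

R = givens(5, 0, 3, np.pi / 2)   # 90-degree rotation in the (0, 3) plane
v = np.array([1.0, 2.0, 3.0, 0.0, 4.0])
w = R @ v                        # approx (0, 2, 3, 1, 4)

# Length is unchanged, and only components 0 and 3 have moved:
print(np.linalg.norm(v), np.linalg.norm(w))
```

Nothing here needs to "spin" in physical space; the rotation is just a length-preserving change of direction, which is the sense in which the paper's population activity "rotates".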
        
         | dboreham wrote:
         | Same goes for the ordinary English word "Eigenvector".
        
           | danwills wrote:
           | Reminds me of how orthogonally polarized waves can inhabit
           | the same bit of space without interfering with each other
           | (and can be cleanly separated later using 2 polarized filters
           | at 90 degrees).
        
             | cephalicmarble wrote:
             | Requires a fair bit of detergent to clear up all the crumbs
             | anyway: might varying grain sizes help any?
        
               | NavinF wrote:
               | Was this comment generated by a markov chain?
        
               | danwills wrote:
                | Yeah, super weird, can't make any sense of it at all
        
               | totetsu wrote:
                | In the last few months there have been more of these
                | not-people posts. If we call them out, are we just
                | training the algorithms?
        
         | zeeshanqureshi wrote:
         | And yet the main image on the article illustrates a 45-degree
         | rotation about an axis.
         | 
         | From what I understand, you are saying this rotation is non-
         | intuitive. Could you elaborate more or share some relevant
         | links?
        
       | iandanforth wrote:
       | Take a binary array of length N, where N is in the hundreds to
       | thousands range. Choose 2% of the bits to set to 1. Now you have
       | a "sparse array".
       | 
       | Now, you want to use this sparse array to represent a note in a
       | song. So you need every note to consistently map to a distinct*
       | sparse array.
       | 
       | However, you also want to be able to distinguish a note as
       | being in one song or another. The representation should tell
       | you not only that this is note A, but that it is note A in song
       | X.
       | 
       | How might you do that? Well some portion of the ON bits could be
       | held consistent for every A note and some could be used to
       | represent specific contexts.
       | 
       | Stable and variable bits, if you will.
       | 
       | Now if you look at two representations of the note A from two
       | songs, you'll see they're different. How different are they?
       | Well, you could just count the bits they have in common, or you
       | can treat them as vectors (lines in high-dimensional space).
       | Then you can calculate the angle between those two lines. As
       | that angle increases, it's easier to distinguish the two lines.
       | They won't ever get to a full "right angle" between them,
       | because of the shared stable bits, but they can be more or less
       | orthogonal.
       | 
       | That's what's happening here. The brain is encoding notes in a
       | way that it can both recognize A, but also recall it in different
       | contexts.
       | 
       | *But not perfectly consistent; we use sparse representations
       | because the brain is noisy and it's more energy-efficient.
       | Pretty close is good enough in the brain, and you can encode a
       | lot of values in 1000-choose-20 options.
        
         | mmastrac wrote:
         | So we are just walking Lucene indexes?
        
       | screye wrote:
       | This maps wonderfully onto SVD, Neural networks and embeddings.
       | 
       | Word embeddings frequently encode particular traits in
       | different 'regions' of a 256(ish)-dimensional space. AFAIK, it
       | is also why we think of element-wise addition (merging) in
       | neural networks as an efficient and relatively lossless
       | computation. The aggregation-after-attention step used in
       | Transformers (GPT-3) fundamentally relies on this being true.
       | 
       | Although from my reading, there is an inherent assumption of
       | sparsity in such situations. So, is it reasonable to assume
       | that human neurons are also relatively sparse in how
       | information is stored?
        
       | lukeplato wrote:
       | There was another recent article on applying geometry to
       | analyse the neural mechanisms that encode context. It also
       | mentioned a rotation/coiling geometry:
       | 
       | https://www.simonsfoundation.org/2021/04/07/geometrical-thin...
        
       ___________________________________________________________________
       (page generated 2021-04-18 23:02 UTC)