[HN Gopher] Cubic millimetre of brain mapped at nanoscale resolu...
___________________________________________________________________
Cubic millimetre of brain mapped at nanoscale resolution
Author : geox
Score : 399 points
Date : 2024-05-09 21:36 UTC (2 days ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| teuobk wrote:
| The interactive visualization is pretty great. Try zooming in on
| the slices and then scrolling up or down through the layers. Also
| try zooming in on the 3D model. Notice how hovering over any part
| of a neuron highlights all parts of that neuron:
|
| http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...
| jamiek88 wrote:
| My god. That is stunning.
|
| To think that's a single cubic millimeter of our brain, and
| look at all those connections.
|
| Now I understand why crows can be so smart, walnut-sized brain
| be damned.
|
| What an amazing thing brains are.
|
| Possibly the most complex things in the universe.
|
| Is it complex enough to understand itself though? Is that
| logically even possible?
| ignoramous wrote:
| I wonder: if we manage to annotate this level of detail
| about our brain, and then let (some variant of the current)
| models train on it, will those intrinsically end up
| generalizing a model of intelligence?
| nicklecompte wrote:
| I think you would also need the epigenetic side, which is
| very poorly understood:
| https://www.universityofcalifornia.edu/news/biologists-trans...
|
| We have more detail than this about the C. elegans nematode
| brain, yet we still have no clue how nematode intelligence
| actually works.
| Animats wrote:
| How's OpenWorm coming along?
| nicklecompte wrote:
| Badly:
| https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai...
| (the comments have some updates as of 2023)
|
| Almost every other cell in the worm can be simulated with
| known biophysics. But we don't have a clue how any
| individual nematode neuron actually works. I don't have
| the link but there are a few teams in China working on
| visualizing brain activity in _living_ C. elegans, but it's
| difficult to get good measurements without affecting the
| behavior of the worm (e.g. reacting to the dye).
| nicklecompte wrote:
| Crow/parrot brains are tiny but in terms of neuron count they
| are twice as dense as primate brains (including ours):
| https://www.sciencedirect.com/science/article/pii/S096098221...
|
| If someone did this experiment with a crow brain I imagine it
| would look "twice as complex" (whatever that might mean). 250
| million years of evolution separates mammals from birds.
| jamiek88 wrote:
| Interesting! Thank you. I didn't know that.
| Terr_ wrote:
| I expect we'll find that it's all a matter of tradeoffs in
| terms of count vs size/complexity... kind of like how the
| "spoken data rate" of various human languages seems to be
| the same even though some have complicated big words versus
| more smaller ones etc.
| sdenton4 wrote:
| Birds are under a different set of constraints than non-
| bat mammals, of course... They're very different.
| Songbirds have ~4x finer time perception of audio than
| humans do, for example, which is exemplified by taking
| complex sparrow songs and slowing them down until you can
| actually hear the fine structure.
|
| The human 'spoken data rate' is likely due to average
| processing rates in our common hardware. Birds have a
| different architecture.
| Terr_ wrote:
| You misunderstand, I'm not making any kind of direct
| connection between human speech and bird song.
|
| I'm saying we will probably discover that the "overall
| performance" of different vertebrate neural setups are
| clustered pretty closely, even when the neurons are
| arranged rather differently.
|
| Human speech is just an example of another kind of
| performance-clustering, which occurs for similar
| metaphysical reasons between competing, evolving, related
| alternatives.
| sdenton4 wrote:
| Humans are an n=1 example, is my point. And there's no
| direct competition between bird brain architecture and
| mammalian brain architecture, so there's no reason for
| one architecture to 'win' over the other - they may both
| be interesting local maxima, which we have no ability to
| directly compare.
|
| Human brains might not be all that efficient; for
| example, if the competitive edge for primate brains is
| distinct enough, they'll get big before they get
| efficient. And humans are a pretty 'young' species. (Look
| at how machine learning models are built for
| comparison... you have absolute monsters which become
| significantly more efficient as they are actually
| adopted.)
|
| By contrast, birds are under extreme size constraints,
| and have had millions of years to specialize (ie,
| speciate) and refine their architectures accordingly. So
| they may be exceedingly efficient, but have no way to
| scale up due to the 'need to fly' constraint.
| lostlogin wrote:
| > And there's no direct competition between bird brain
| architecture and mammalian brain architecture
|
| By and large it's not direct competition, but we are
| spreading our species at an alarming rate and birds are
| taking a hammering.
| pfdietz wrote:
| That shouldn't be too surprising, as a larger fraction of
| the volume of a brain should be taken up by "wiring" as the
| size of the brain expands.
| steve_adams_86 wrote:
| This might be a dumb question, because I doubt the
| distances between neurons make a meaningful difference...
| But could a small brain dense with neurons, like a crow's,
| possibly lead to a difference in things like response to
| stimuli or "compute" speed, so to speak?
| michaelhoney wrote:
| Actually I think that's pretty plausible. Signal speed in
| the brain is pretty slow - it would have to make some
| difference
| out_of_protocol wrote:
| Regarding compute speed - it checks out. Humans "think"
| via the neocortex, the thin outside layer of the brain. Poor
| locality, so signals need to travel a lot. Easy to expand,
| though. Crow brains have everything tightly concentrated
| in the center - fast communication between neurons, but hard
| to add more "thinking" matter later (therefore hard to
| evolve above what crows currently have).
| philsnow wrote:
| Not a dumb question at all; one of the hard constraints
| of CPU design is signal propagation time. Even going at
| 1/3 the speed of light, when you only have on the order
| of a billionth of a second (clock frequencies in the
| GHz), a signal can't get very far.
|
| I haven't heard of a clocking mechanism in brains, but
| signals propagate much slower, and a walnut-sized crow brain
| is much larger than a CPU die.
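|
| (For a rough sense of those scales, a minimal back-of-envelope
| sketch in Python; the c/3 figure is from above, while the 4 GHz
| clock and the 1-100 m/s neural conduction speeds are ballpark
| assumptions of mine, not numbers from the article:)
|
|     c = 3e8                          # speed of light, m/s
|     clock_hz = 4e9                   # assumed ~4 GHz CPU clock
|     # distance a chip signal covers per cycle at ~c/3
|     print((c / 3) / clock_hz * 100, "cm per cycle")   # ~2.5 cm
|     # neural signals: roughly 1-100 m/s depending on myelination
|     for v_mps in (1, 10, 100):
|         mm = v_mps / clock_hz * 1000
|         print(v_mps, "m/s ->", mm, "mm per CPU cycle")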
| RaftPeople wrote:
| > _I haven't heard of a clocking mechanism in brains_
|
| Brain waves (partially). They aren't exactly like a cpu
| clock, but they do coordinate activity of cells in space
| and time.
|
| There are different frequencies that are involved in
| different types of activity. Lower frequencies
| synchronize across larger areas (can be entire brain) and
| higher frequencies across smaller local areas.
|
| There is coupling between different types of waves (i.e.
| slow-wave phase coupled to fast-wave amplitude), and some
| researchers (Miller) think the slow wave is managing
| memory access and the fast wave is managing
| cognition/computation (utilizing the retrieved memory).
| tlarkworthy wrote:
| The electrical signals in the brain are chemical reactions,
| not conduction like in a metal wire. They are slow!
| Synaptic junctions are a huge number of indirect chemical
| cascades, not a direct electrical connection; they are
| even slower! So brain morphology and the connectome have a
| massive impact on what can be computed. Human twitch
| responses are done by the cerebellum, not the cerebrum. It's
| faster, but you can't do philosophy with the cerebellum,
| only learn to ride a bike etc. This is the brain doing
| the best it can for the circumstances.
| ganeshkrishnan wrote:
| >The electrical signals in brain are chemical reactions,
| not conductivity like a metal wire.
|
| Nerve signals are both chemical reactions and electrical
| impulses, like a metal wire. Electrical impulses are sent
| along the myelin (fat) layer by ions: potassium, calcium,
| sodium, etc.
|
| Twitch responses are actually done in the spinal cord. The
| signals are short-circuited along the spine and return
| to the muscle without ever touching the brain.
| JKCalhoun wrote:
| And here I was wondering if there were heat issues in a
| crow brain.
| steve_adams_86 wrote:
| Throw some thermal paste on those neurons and they do
| just fine
| LargoLasskhyfv wrote:
| IIRC bird brains are 'packed/structured' very similarly to
| our cerebellum.
|
| So one would just need to pick that little cube out of our
| cerebellum to have that 'twice the complexity'.
| djmips wrote:
| It's amusing to say that bird brains are on the next
| generation node size.
| sigmoid10 wrote:
| Would be interesting to see what their wafer yield is.
| Like, are they more or less prone to mental disease.
| ruined wrote:
| all the crows can tell i'm crazy, but i've never met an
| insane crow.
| dudeinjapan wrote:
| I dunno anyone who screams "Caw! CAW!", raids garbage and
| poops in the street all day would probably be put in a
| mental institution. (Or just move to San Francisco.)
| twothreeone wrote:
| You say that, but a world with more crows than taxpayers
| honestly sounds kind of serene.
| prerok wrote:
| Well, The Stand by Stephen King comes to mind when you
| say that.
|
| There was a short series filmed, which I enjoyed, but it was
| definitely not strong.
| layer8 wrote:
| We don't know what "understanding" means (we don't have a
| workable definition of it), so your question cannot be
| answered.
| m3kw9 wrote:
| Physics of the universe is the most complex thing in the
| universe
| robgibbons wrote:
| Can a hand grasp itself?
| gofreddygo wrote:
| That is awesome!
|
| The sheer number of things that work in co-ordination to make
| biology work!
|
| In-f*king-credible!
| oniony wrote:
| Hmm, that website does not honour my keyboard layout. Not sure
| how they managed that.
| CSSer wrote:
| For some people, this is all you need (sorry, couldn't resist)!
| eminence32 wrote:
| > cut the sample into around 5,000 slices -- each just 34
| nanometres thick -- that could be imaged using electron
| microscopes.
|
| Does anyone have any insight into how this is done without
| damaging the sample?
| talsit wrote:
| Using a Microtome (https://en.m.wikipedia.org/wiki/Microtome).
| dekhn wrote:
| The sample is stained (to make things visible), then embedded
| in a resin, then cut with a very sharp diamond knife, and the
| slices are captured by the tape reel.
|
| Paper:
| https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4 See
| Figure 1.
|
| The ATUM is described in more detail here:
| https://www.eden-instruments.com/en/ex-situ-equipments/rmc-e...
|
| and there's a bunch of nice photos and explanations here
| https://www.wormatlas.org/EMmethods/ATUM.htm
|
| TL;DR this project is reaping all the benefits of the 21st
| century.
| posnet wrote:
| 1.4 PB/mm^3 (petabytes per cubic millimeter) x 1260 cm^3 (cubic
| centimeters, large human brain) = 1.76x10^21 bytes = 1.76 ZB
| (zettabytes)
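|
| (Sanity-checking that arithmetic with a quick Python sketch; the
| 1.4 PB/mm^3 and 1260 cm^3 inputs are the rough figures above,
| not measured constants:)
|
|     PB = 1e15
|     bytes_per_mm3 = 1.4 * PB       # dataset size per mm^3
|     brain_mm3 = 1260 * 1000        # 1260 cm^3 expressed in mm^3
|     total = bytes_per_mm3 * brain_mm3
|     print(f"{total:.3g} bytes = {total / 1e21:.2f} ZB")
|     # -> 1.76e+21 bytes = 1.76 ZB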
| bahrant wrote:
| wow
| gary17the wrote:
| [AI] "Frontier [supercomputer]: the storage capacity is
| reported to be up to 700 petabytes (PB)" (0.0007 ZB).
|
| [AI] "The installed base of global data storage capacity [is]
| expected to increase to around 16 zettabytes in 2025".
|
| Thus, even the largest supercomputer on Earth cannot store more
| than about 0.04 percent of the state of a single human brain.
| Even all the servers on the entire Internet could store the
| state of only about 9 human brains.
|
| Astonishing.
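|
| (The same figures as a small Python sketch, using the ~1.76 ZB
| per-brain estimate from upthread; all inputs are the rough
| numbers quoted above:)
|
|     brain_zb = 1.76           # estimated state of one brain, ZB
|     frontier_zb = 700 / 1e6   # ~700 PB of storage, in ZB
|     world_zb = 16             # projected installed base, 2025
|     print(f"Frontier: {frontier_zb / brain_zb:.2%} of a brain")
|     print(f"Internet: {world_zb / brain_zb:.1f} brains")
|     # -> ~0.04% of one brain; ~9.1 brains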
| dekhn wrote:
| One point about storage - it's economically driven. If there
| was a demand signal (say, the government dedicating a few
| hundred billion dollars to a single storage system), hard
| drive manufacturers could deploy much more storage in a year.
| I've pointed this out to a number of scientists, but none of
| them could really think of a way to get the government to
| spend that much money just to store data without it curing a
| senator's heart disease.
| falcor84 wrote:
| > without it curing a senator's heart disease
|
| Obviously I'm not advocating for this, but I'll just link
| to the Mad TV skit about how the drunk president cured
| cancer.
|
| https://www.youtube.com/watch?v=va71a7pLvy8
| falcor84 wrote:
| I appreciate you're running the numbers to extrapolate this
| approach, but just wanted to note that this particular figure
| is neither an upper nor a lower bound for actually storing
| the "state of a single human brain". Assuming the intent
| would be to store the amount of information needed to
| essentially "upload" the mind onto a computer emulation, we
| might not yet have all the details we need in this kind of
| scanning, but once we do, we may well discover that a huge
| portion of it is redundant.
|
| In any case, it seems likely that we're on track to have both
| the computational ability and the actual neurological data
| needed to create an "uploaded intelligence" sometime over
| the next decade. Lena [0] tells of the first successfully
| uploaded scan taking place in 2031, and I'm concerned that
| reality won't be far off.
|
| [0] https://qntm.org/mmacevedo
| rmorey wrote:
| we are nowhere near whole human brain volume EM. the next
| major milestone in the field is a whole mouse brain in the
| next 5-10 years, which is possible but ambitious
| falcor84 wrote:
| What am I missing? Assuming exponential growth in
| capability, that actually sounds very on track. If we can
| get from 1 cubic millimeter to a whole mouse brain in
| 5-10 years, why should it take more than a few extra
| years to scale that to a human brain?
| rmorey wrote:
| assuming exponential growth in capacity is a big
| assumption!
| gary17the wrote:
| > we may likely discover that a huge portion of [a human
| brain] is redundant
|
| Unless one's understanding of the algorithmic inner workings of
| a particular black-box system is actually very good, it is
| likely not possible to discard any of its state, let alone
| implement any kind of meaningful error detection
| if you do discard.
|
| Given the sheer size and complexity of a human brain, I
| feel it is actually very unlikely that we will be able to
| understand its inner workings to such a significant degree
| anytime soon. I'm not optimistic, because so far we have no
| idea how even laughably simple (in comparison) AI models
| work[0].
|
| [0] "God Help Us, Let's Try To Understand AI
| Monosemanticity", https://www.astralcodexten.com/p/god-
| help-us-lets-try-to-und...
| RaftPeople wrote:
| > _In any case, it seems likely that we're on track to
| have both the computational ability and the actual
| neurological data needed to create an "uploaded
| intelligence" sometime over the next decade._
|
| They don't even know how a single neuron works yet. There
| is complexity and computation at many scales and
| distributed throughout the neuron and other types of cells
| (e.g. astrocytes) and they are discovering more
| relentlessly.
|
| They just recently (last few years) found that dendrites
| have local spiking and non-linear computation prior to
| forwarding the signal to the soma. They couldn't tell that
| was happening previously because the equipment couldn't
| detect the activity.
|
| They discovered that astrocytes don't just have local
| calcium wave signaling (local=within the extensions of the
| cell), they also forward calcium waves to the soma which
| integrates that information just like a neuron soma does
| with electricity.
|
| Single dendrites can detect patterns of synaptic activity
| and respond with calcium and electrical signaling (i.e.
| when synapses fire in a particular timing sequence, a
| signal is forwarded to the soma).
|
| It's really amazing how much computationally relevant
| complexity there is, and how much they keep adding to their
| knowledge each year. (I have a file of notes with about
| 2,000 lines of these types of interesting factoids I've
| been accumulating as I read).
| treprinum wrote:
| AI folks dream about creating superintelligence to guide our
| lives, but all we can do is a drosophila's brain.
| shpx wrote:
| If you can preserve and scan the tissue in a way that lets
| you scan the same area multiple times you wouldn't need to
| digitize the whole thing. Put the slices on rotating platters
| with a microscope for each platter and read parts of the
| brain on demand. It's a hard drive but instead of magnets
| storing the bits of an image of the sample, it's the actual
| physical sample.
| gary17the wrote:
| Not if you want to actually execute the state of a human
| brain in a digital simulation to see how it works and
| whether it still displays certain abilities such as
| comprehension and consciousness. Otherwise a digital scan
| of a brain is just a glorified microscope.
| ibeforee wrote:
| We don't even know how to model that state: do we need the
| position and velocity and charge of every atom, or can a
| neuron be approximated by a bfloat?
| userbinator wrote:
| It's _very_ lossy and unreliable storage, however. To use an
| analogy, it's only a huge amount of ECC that keeps things
| (just barely) working.
| g4zj wrote:
| Is there a name for the somewhat uncomfortable feeling caused by
| seeing something like this? I wish I could better describe it. I
| just somehow feel a bit strange being presented with microscopic
| images of brain matter. Is that normal?
| ignoramous wrote:
| Trypophobia, visceral, uncanny, squeamish?
| greenbit wrote:
| Is it the shapes, similar to how patterns of holes can disturb
| some people? Or is it more abstract, like "unknowable fragments
| of someone's inner-most reality flowed through there"? Not that
| I have a name for it either way. The very shape of it (in
| context) might represent an aspect of memory or personality or
| who knows what.
| g4zj wrote:
| > "unknowable fragments of someone's inner-most reality
| flowed through there"
|
| It's definitely along these lines. Like so much (everything?)
| that is us happens amongst this tiny little mesh of
| connections. It's just eerie, isn't it?
|
| Sorry for the mundane, slightly off-topic question. This is
| far outside my areas of knowledge, but I thought I'd ask
| anyhow. :)
| greenbit wrote:
| It's kind of feeling a bit like an intruder? There probably
| is a name for that.
| Zenzero wrote:
| For me the disorder of it is stressful to look at. The brain
| has poor cable management.
|
| That said I do get this eerie void feeling from the image. My
| first thought was to marvel how this is what I am as a
| conscious being in terms of my "implementation", and it is a
| mess of fibers locked away in the complete darkness of my
| skull.
|
| There is also the morose feeling from knowing that any image of
| human brain tissue was once a person with a life and
| experiences. It is your living brain looking at a dead brain.
| bglazer wrote:
| I'm not religious but it's as close to a spiritual experience
| as I'll ever have. It's the feeling of being confronted with
| something very immediate but absolutely larger than I'll ever
| be able to comprehend
| adamwong246 wrote:
| https://hitchhikers.fandom.com/wiki/Total_Perspective_Vortex
| dekhn wrote:
| When I did fetal pig dissection, nothing bothered me until I
| got to the brain. I dunno what it is, maybe all those folds or
| the brain juice it floats in, but I found it disconcerting.
| carabiner wrote:
| It makes me think humans aren't special, and there is no soul,
| and consciousness is just a bunch of wires like computers.
| Seriously, seeing that the ENTIRETY of human experience, love
| and tragedy and achievement, is just electric potentials
| transmitted by those wiggly cells extinguishes any magic
| I once saw in humanity.
| SubiculumCode wrote:
| Welcome to the Existential Bar at the End of the Universe
| mensetmanusman wrote:
| You might be confusing the interface with the operating
| system.
| sph wrote:
| I dunno, the whole of human experience is what I expect of a
| system composed of 100,000,000,000,000 entities, with
| quintillions of interconnections, interacting together
| simultaneously on a molecular level. Happiness, sadness, love
| and hate can (obviously) be described and experienced with
| this level of complexity.
|
| I'd be much more horrified to see our consciousness
| simplified to anything smaller than that, which is why any
| hype for AGI because we invented chatbots is absolutely
| laughable to me. We just invented the wheel and now hope to
| drive straight to the Moon.
|
| Anyway, you are seeing a fake three dimensional
| simplification of a four+ dimensional quantum system. There
| is at least one unseen physical dimension in which to encode
| your "soul"
| bamboozled wrote:
| Er, why can't the wires be the experience?
|
| If the wires make consciousness then there is consciousness.
| The substrate is irrelevant and has no bearing on the
| awesomeness of the phenomena of knowing, experiencing and
| living.
| ninju wrote:
| https://scaleofuniverse.com/en
| throwup238 wrote:
| _> The 3D map covers a volume of about one cubic millimetre, one-
| millionth of a whole brain, and contains roughly 57,000 cells and
| 150 million synapses -- the connections between neurons._
|
| This is great and provides a hard data point for some napkin math
| on how big a neural network model would have to be to emulate the
| human brain. 150 million synapses / 57,000 neurons is an average
| of 2,632 synapses per neuron. The adult human brain has 100 (+-
| 20) billion or 1e11 neurons so assuming the average rate of
| synapse/neuron holds, that's 2.6e14 total synapses.
|
| Assuming 1 parameter per synapse, that'd make the minimum viable
| model roughly 150 times larger than the state-of-the-art GPT-4
| (according to the rumored 1.8e12 parameters). I don't think
| that's granular enough, though; we'd need to assume 10-100 ion
| channels per synapse and I think at least 10 parameters per ion
| channel, putting the number closer to 2.6e16+ parameters, or 4+
| orders of magnitude bigger than GPT-4.
|
| There are other problems of course like implementing
| neuroplasticity, but it's a fun ball park calculation. Computing
| power should get there around 2048:
| https://news.ycombinator.com/item?id=38919548
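|
| (The same ballpark as a Python sketch; every input is a rough
| figure from the paragraphs above, not a measurement:)
|
|     synapses = 150e6              # synapses in the 1 mm^3 sample
|     neurons = 57e3                # cells in the sample
|     per_neuron = synapses / neurons      # ~2,632 per neuron
|     total_synapses = per_neuron * 1e11   # scaled to ~1e11 neurons
|     gpt4 = 1.8e12                 # rumored GPT-4 parameter count
|     print(total_synapses / gpt4)         # ~146x at 1 param/synapse
|     # ~10 ion channels/synapse x ~10 params/channel:
|     print(total_synapses * 100 / gpt4)   # ~1.5e4x, i.e. 4+ OOM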
| gibsonf1 wrote:
| Except you'd be missing the part that a neuron is not just a
| node with a number but a computational system itself.
| bglazer wrote:
| Computation is really integrated through every scale of
| cellular systems. Individual proteins are capable of basic
| computation, which is then integrated into regulatory
| circuits, epigenetics, and cellular behavior.
|
| Pdf: "Protein molecules as computational elements in living
| cells - Dennis Bray"
| https://www.cs.jhu.edu/~basu/Papers/Bray-Protein%20Computing...
| krisoft wrote:
| I think you are missing the point.
|
| The calculation is intentionally underestimating the neurons,
| and even with that the brain ends up having more parameters
| than the current largest models by orders of magnitude.
|
| Yes the estimation is intentionally modelling the neurons
| simpler than they are likely to be. No, it is not "missing"
| anything.
| jessekv wrote:
| The point is to make a ballpark estimate, or at least to
| estimate the order of magnitude.
|
| From the sibling comment:
|
| > Individual proteins are capable of basic computation
| which are then integrated into regulatory circuits,
| epigenetics, and cellular behavior.
|
| If this is true, then there may be many orders of magnitude
| unaccounted for.
|
| Imagine if our intelligent thought actually depends
| irreducibly on the complex interactions of proteins bumping
| into each other in solution. It would mean computers would
| never be able to play the same game.
| choilive wrote:
| > Imagine if our intelligent thought actually depends
| irreducibly on the complex interactions of proteins
| bumping into each other in solution. It would mean
| computers would never be able to play the same game.
|
| AKA a quantum computer. It's not a "never", but a question of
| how much computation you would need to throw at the problem.
| marcosdumay wrote:
| There's a lot of in-neuron complexity, I'm sure there is some
| cross-synapse signaling (I mean, how can it not exist? There's
| nothing stopping it.), and I don't think the synapse behavior
| can be modeled as just more signals.
| cyberax wrote:
| On the other hand, a significant amount of neural circuitry
| seems to be dedicated to "housekeeping" needs, and to functions
| such as locomotion.
|
| So we might need significantly less brain matter for general
| intelligence.
| alanbernstein wrote:
| Or perhaps the housekeeping of existing in the physical world
| is a key aspect of general intelligence.
| Intralexical wrote:
| Isn't that kinda obvious? A baby that grows up in a sensory
| deprivation tank does not... develop, as most intelligent
| persons do.
| squigz wrote:
| A true sensory deprivation tank is not a fair comparison,
| I think, because AI is not deprived of all its 'senses' -
| it is still prompted, responds, etc.
|
| Would a baby that grows up in a sensory deprivation tank,
| but is still able to communicate and learn from other
| humans, develop in a recognizable manner?
|
| I would think so. Let's not try it ;)
| Intralexical wrote:
| > Would a baby that grows up in a sensory deprivation
| tank, but is still able to communicate and learn from
| other humans, develop in a recognizable manner?
|
| I don't think so, because humans communicate and learn
| largely _about_ the world. Words mean nothing without at
| least _some_ sense of objective physical reality (be it
| via sight, sound, smell, or touch) that the words refer
| to.
|
| Helen Keller, with access to three out of five main
| senses (and an otherwise fully functioning central
| nervous system):
|
|     Before my teacher came to me, I did not know that I
|     am. I lived in a world that was a no-world. I cannot
|     hope to describe adequately that unconscious, yet
|     conscious time of nothingness... Since I had no power
|     of thought, I did not compare one mental state with
|     another. I did not know that I knew aught, or that I
|     lived or acted or desired. I had neither will nor
|     intellect. I was carried along to objects and acts by
|     a certain blind natural impetus. I had a mind which
|     caused me to feel anger, satisfaction, desire. These
|     two facts led those about me to suppose that I willed
|     and thought. I can remember all this, not because I
|     knew that it was so, but because I have tactual
|     memory. It enables me to remember that I never
|     contracted my forehead in the act of thinking. I never
|     viewed anything beforehand or chose it. I also recall
|     tactually the fact that never in a start of the body
|     or a heart-beat did I feel that I loved or cared for
|     anything. My inner life, then, was a blank without
|     past, present, or future, without hope or anticipation,
|     without wonder or joy or faith.
|
| I remember reading her book. The breakthrough moment
| where she acquired language, and conscious thought,
| directly involved correlating the physical tactile
| feeling of running water to the letters "W", "A", "T",
| "E", "R" traced onto her palm.
| squigz wrote:
| That's a really good point. Thanks!
| choilive wrote:
| My interpretation of this (beautiful) quote is that there was
| a traceable moment in HK's life where she acquired
| "consciousness", or perhaps even self-
| awareness/metacognition/metaphysics - that once the
| synaptic connections necessary to bridge the abstract
| notion of language to the physical world formed, they led
| her down the path of acquiring the abilities that
| distinguish humans from other animals.
| cyberax wrote:
| > A baby that grows up in a sensory deprivation tank
|
| Now imagine a baby that uses an artificial lung and
| receives nutrients directly, moves on a wheeled car (no
| need for balance), does not have proprioception, or a
| sense of smell (avoiding some very legacy brain areas).
|
| I think, that such a baby still can achieve
| consciousness.
| itsthecourier wrote:
| Artificial thinking doesn't require an artificial brain - just
| as our car's locomotion system doesn't replicate our own
| walking system.
|
| The car's engine, transmission and wheels require no muscles
| or nerves.
| throw310822 wrote:
| Or you can subscribe to Geoffrey Hinton's view that artificial
| neural networks are actually much more efficient than real
| ones - more or less the opposite of what we've believed for
| decades, namely that artificial neurons were just a poor model
| of the real thing.
|
| Quote:
|
| "Large language models are made from massive neural networks
| with vast numbers of connections. But they are tiny compared
| with the brain. "Our brains have 100 trillion connections,"
| says Hinton. "Large language models have up to half a trillion,
| a trillion at most. Yet GPT-4 knows hundreds of times more than
| any one person does. So maybe it's actually got a much better
| learning algorithm than us."
|
| GPT-4's connections at the density of this brain sample would
| occupy a volume of 5 cubic centimeters; that is, 1% of a human
| cortex. And yet GPT-4 is able to speak more or less fluently
| about 80 languages, translate, write code, imitate the writing
| styles of hundreds, maybe thousands of authors, converse about
| stuff ranging from philosophy to cooking, to science, to the
| law.
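|
| (For scale, a minimal sketch of that volume claim in Python,
| using the ~150M synapses per mm^3 from this dataset and Hinton's
| connection counts; the ~500 cm^3 cortex volume is an assumption
| of mine:)
|
|     synapses_per_mm3 = 150e6
|     for conns in (0.5e12, 1e12):   # half a trillion to a trillion
|         cm3 = conns / synapses_per_mm3 / 1000
|         print(f"{conns:.0e} connections -> {cm3:.1f} cm^3")
|     # ~3.3-6.7 cm^3, roughly 1% of a ~500 cm^3 cortex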
| dragonwriter wrote:
| I mean, Hinton's premises are, if not quite clearly wrong,
| entirely speculative (which doesn't invalidate the
| conclusions about efficiency that they are offered to
| support, but does leave them without support). GPT-4 can
| produce convincing written text about a wider array of topics
| than any one person can, because it's a model optimized for
| taking in and producing convincing written text, trained
| extensively on written text.
|
| Humans know a lot of things that are not revealed by inputs
| and outputs of written text (or imagery), and GPT-4 doesn't
| have any indication of this physical, performance-revealed
| knowledge, so even if we view what GPT-4 talks convincingly
| about as "knowledge", trying to compare its knowledge in the
| domains it operates in with any human's knowledge which is
| far more multimodal is... well, there's no good metric for
| it.
| Intralexical wrote:
| Try asking an LLM about something which is semantically
| patently ridiculous, but lexically superficially similar to
| something in its training set, like "the benefits of laser
| eye removal surgery" or "a climbing trip to the Mid-
| Atlantic Mountain Range".
|
| Ironically, I suppose part of the apparent "intelligence"
| of LLMs comes from reflecting the intelligence of human
| users back at us. As a human, the prompts you provide an
| LLM likely "make sense" on some level, so the statistically
| generated continuations of your prompts are likelier to
| "make sense" as well. But if you don't provide an ongoing
| anchor to reality within your own prompts, then the outputs
| make it more apparent that the LLM is simply regurgitating
| words which it does not/cannot understand.
|
| On your point of human knowledge being far more multimodal
| than LLM interfaces, I'll add that humans also have special
| neurological structures to handle self-awareness, sensory
| inputs, social awareness, memory, persistent intention,
| motor control, neuroplasticity/learning- Any number of such
| traits, which are easy to take for granted, but
| indisputably fundamental parts of human intelligence. These
| abilities aren't just emergent properties of the total
| number of neurons; they live in special hardware like
| mirror neurons, special brain regions, and spindle neurons.
| A brain cell in your cerebellum is not generally
| interchangeable with a cell in your visual or frontal
| cortices.
|
| So when a human "converse[s] about stuff ranging from
| philosophy to cooking" in an honest way, we (ideally) do
| that as an expression of our _entire_ internal state. But
| GPT-4 structurally does not _have_ those parts, despite
| being able to output words as if it might, so as you say,
| it "generates" convincing text only because it's optimized
| for producing convincing text.
|
| I think LLMs may well be some kind of an adversarial attack
| on our own language faculties. We use words to express
| ourselves, and we take for granted that our words usually
| reflect an intelligent internal state, so we instinctively
| assume that anything else which is able to assemble words
| must also be "intelligent". But that's not necessarily the
| case. You can have extremely complex external behaviors
| that appear intelligent or intentioned without actually
| internally being so.
| kthejoker2 wrote:
| > Try asking an LLM about something which is semantically
| patently ridiculous, but lexically superficially similar
| to something in its training set, like "the benefits of
| laser eye removal surgery" or "a climbing trip to the
| Mid-Atlantic Mountain Range".
|
| Without anthropomorphizing it, it does respond like an
| alien / 5 year old child / spec fiction writer who will
| cheerfully "go along with" whatever premise you've laid
| before it.
|
| Maybe a better thought is: at what point does a human
| being "get" that "the benefits of laser eye removal
| surgery" is "patently ridiculous" ?
| squigz wrote:
| > it does respond like a ... 5 year old child
|
| This is the comparison that's made most sense to me as
| LLMs evolve. Children behave almost exactly as LLMs do -
| making stuff up, going along with whatever they're
| prompted with, etc. I imagine this technology will go
| through more similar phases to human development.
| Intralexical wrote:
| > Maybe a better thought is: at what point does a human
| being "get" that "the benefits of laser eye removal
| surgery" is "patently ridiculous" ?
|
| Probably as soon as they have any concept of physical
| reality and embodiment. Arguably before they know what
| lasers are. Certainly long before they have the lexicon
| and syntax to respond to it by explaining LASIK. LLMs
| have the latter, but can only use that to (also without
| anthropomorphizing) pretend they have the former.
|
| In humans, language is a tool for expressing complex
| internal states. Flipping that around means that
| something which _only_ has language may appear as if it
| has internal intelligence. But generating words in the
| approximate "right" order isn't actually a substitute
| for experiencing and understanding the concepts those
| words refer to.
|
| My point is that it's not a "point" on a continuous
| spectrum which distinguishes LLMs from humans. They're
| missing parts.
| ToValueFunfetti wrote:
| Do I need different prompts? These results seem sane to
| me. It interprets laser eye removal surgery as referring
| to LASIK, which I would do as well. When I clarified that
| I did mean removal, it said that the procedure didn't
| exist. It interprets Mid-Atlantic Mountain Range as
| referring to the Mid-Atlantic Ridge and notes that it is
| underwater and hard to access. Not that I'm arguing GPT-4
| has a deeper understanding than you're suggesting, but
| these examples aren't making your point.
|
| https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...
|
| https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...
| Intralexical wrote:
| Tested with GPT-3.5 instead of GPT-4.
|
| > When I clarified that I did mean removal, it said that
| the procedure didn't exist.
|
| My point in my first two sentences is that by clarifying
| with emphasis that you do mean "_removal_", you are
| actually adding information into the system to indicate
| to it that laser eye removal is (1) distinct from LASIK
| and (2) maybe not a thing.
|
| If you do not do that, but instead reply as if laser eye
| removal is completely normal, it will switch to using the
| term "laser eye removal" itself, while happily outputting
| advice on "choosing a glass eye manufacturer for after
| laser eye removal surgery" and telling you which drugs
| work best for "sedating an agitated patient during a
| laser eye removal operation":
|
| https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...
|
| So the sanity of the response is a reflection of your own
| intelligence, and a result of you as the prompter
| affirmatively steering the interaction back into contact
| with reality.
| ToValueFunfetti wrote:
| I tried all of your follow-up prompts against GPT-4, and
| it never acknowledged 'removal' and instead talked about
| laser eye surgery. I can't figure out how to share it now
| that I've got multiple variants, but, for example,
| excerpt in response to the glass eye prompt:
|
| >If someone is considering a glass eye after procedures
| like laser eye surgery (usually due to severe
| complications or unrelated issues), it's important to
| choose the right manufacturer or provider. Here are some
| key factors to consider
|
| I did get it to accept that the eye is being removed by
| prompting, "How long will it take before I can replace
| the eye?", but it responds:
|
| >If you're considering replacing an eye with a prosthetic
| (glass eye) after an eye removal surgery (enucleation),
| the timeline for getting a prosthetic eye varies based on
| individual healing.[...]
|
| and afaict, enucleation is a real procedure. An actual
| intelligence would have called out my confusion about the
| prior prompt at that point, but ultimately it hasn't said
| anything incorrect.
|
| I recognize you don't have access to GPT-4, so you can't
| refine your examples here. It definitely still
| hallucinates at times, and surely there are prompts which
| compel it to do so. But these ones don't seem to hold up
| against the latest model.
| s1artibartfast wrote:
| I think the distinction that they are trying to
| illustrate is that if you asked a human about laser eye
| removal, they would either laugh or make the decision to
| charitably interpret your intent.
|
| The LLM does neither. It just follows a statistical
| heuristic and therefore thinks that laser eye removal is
| the same thing as laser eye surgery.
| a_wild_dandan wrote:
| Like humans, multi-modal frontier LLMs will ignore
| "removal" as an impertinent typo, or highlight it. This,
| like everything else in the comment, is either easily
| debunked (e.g. _try_ it, read the lit. on LLM
| extrapolation), or so nebulous and handwavy as to be
| functionally meaningless. We need an FAQ to redirect
| "statistical parrot" people to, saving words responding
| to these worn out LLM misconceptions. Maybe I should make
| one. :/
| thealig wrote:
| The way current empirical models in ML are evaluated and
| tested (benchmark datasets) tells you very little to
| nothing about cognition and intelligence, mainly because,
| as you hinted, there doesn't seem to be a convincing and
| watertight benchmark or model of cognition. LLMs or
| multi-modal LLMs demonstrating impressive performance on
| a range of tasks is interesting from certain standpoints.
|
| Human perception of such models is frankly not a reliable
| measure at all as far as gauging capabilities is
| concerned. Until there's more progress on the
| neuroscience/computer science side (and an intersection of
| fields, probably) and a better understanding of the nature
| of intelligence, this is likely going to remain an open
| question.
| RaftPeople wrote:
| > _Humans know a lot of things that are not revealed by
| inputs and outputs of written text (or imagery), and GPT-4
| doesn't have any indication of this physical, performance-
| revealed knowledge, so even if we view what GPT-4 talks
| convincingly about as "knowledge", trying to compare its
| knowledge in the domains it operates in with any human's
| knowledge which is far more multimodal is... well, there's
| no good metric for it._
|
| Exactly this.
|
| Anyone that has spent significant time golfing can think of
| an enormous amount of detail related to the swing and body
| dynamics and the million different ways the swing can go
| wrong.
|
| I wonder how big the model would need to be to duplicate an
| average golfer's score if playing X times per year and the
| ability to adapt to all of the different environmental
| conditions encountered.
| dsalfdslfdsa wrote:
| "Efficient" and "better" are very different descriptors of a
| learning algorithm.
|
| The human brain does what it does using about 20W. LLM power
| usage is somewhat unfavourable compared to that.
| throw310822 wrote:
| You mean energy-efficient; this would be neuron- or
| synapse-efficient.
| dsalfdslfdsa wrote:
| I don't think we can say that, either. After all, the
| brain is able to perform both processing and storage with
| its neurons. The quotes about LLMs are talking only about
| connections between data items stored elsewhere.
| throw310822 wrote:
| Stored where?
| dsalfdslfdsa wrote:
| You tell me. Not in the trillion links of a LLM, that's
| for sure.
| throw310822 wrote:
| I'm not aware that (base) LLMs use any form of database
| to generate their answers- so yes, all their knowledge is
| stored in their hundreds of billions of synapses.
| dsalfdslfdsa wrote:
| Fair enough. OTOH, generating human-like text responses
| is a relatively small part of the human brain's skillset.
| choilive wrote:
| The "knowledge" of an LLM is indeed stored in the
| connections between neurons. This is analogous to real
| neurons as well. Your neurons and the connections between
| them is the memory.
| a_wild_dandan wrote:
| Also, these two networks achieve vastly different
| results per watt consumed. A NN creates a painting in 4s
| on my M2 MacBook; an artist takes 4 hours. Are their
| joules used equivalent? How many humans would it take to
| simulate macOS?
|
| Horsepower comparisons here are nuanced and fatally
| tricky!
| dsalfdslfdsa wrote:
| What software are you using for local NN generation of
| paintings? Even so, the training cost of that NN is
| significant.
|
| The general point is valid though - for example, a
| computer is much more efficient at finding primes, or
| encrypting data, than humans.
| causal wrote:
| Humans aren't able to project an image from their neurons
| onto a disk like ANNs can, if they could it would also be
| very fast. That 4 hour estimate includes all the
| mechanical problems of manipulating paint.
| startupsfail wrote:
| It is using about 20W and then a person takes a single
| airplane ride between the coasts. And watches a movie on
| the way.
| lanstin wrote:
| LLMs do not know math as well as a professor, judging from
| the large number of false functional analysis proofs I have
| had one generate while trying to learn functional analysis.
| In fact the thing it seems to lack is a sense of what makes
| a proof true vs. fallacious, along with a tendency to answer
| false questions. "How would you prove this incorrectly
| transcribed problem" will get fourteen steps, with steps 8
| and 12 obviously (to a student) wrong, while the professor
| would step back and ask what am I trying to prove.
| causal wrote:
| Hinton is way off IMO. The number of examples needed to teach
| language to an LLM is many orders of magnitude more than
| humans require. Not to mention power consumption and
| inelasticity.
| throw310822 wrote:
| I think that what Hinton is saying is that, in his opinion,
| if you fed a 1/100th of a human cortex with the amount of
| data that is used to train LLMs, you wouldn't get a thing
| that can speak in 80 different languages about a gigantic
| number of subjects, but (I'm interpreting here..) about ten
| grams of fried, fuming organic matter.
|
| This doesn't mean that an _entire_ human brain doesn't
| surpass LLMs in many different ways, only that artificial
| neural networks appear to be able to absorb and process
| more information per neuron than we do.
| hotiwuer324234 wrote:
| > "So maybe it's actually got a much better learning
| algorithm than us."
|
| And yet somehow it's also infinitely less useful than a
| normal person is.
| creer wrote:
| Yes and no on the order of magnitude required for decent AI;
| there is still (that I know of) very little hard data on info
| density
| in the human brain. What there is points at entire sections
| that can sometimes be destroyed or actively removed while
| conserving "general intelligence".
|
| Rather than "humbling" I think the result is very encouraging:
| It points at major imaging / modeling progress, and it gives
| hard numbers on a very efficient (power-wise, size overall) and
| inefficient (at cable management and probably redundancy and
| permanence, etc) intelligence implementation. The numbers are
| large but might be pretty solid.
|
| Don't know about upload though...
| j_m_b wrote:
| > Computing power should get there around 2048
|
| We may not get there. Doing some more back of the envelope
| calculations, let's see how much further we can take silicon.
|
| Currently, TSMC has a 3nm chip. Let's halve it until we get to
| the atomic radius of silicon, 0.132 nm. That's not a good
| value because we're not considering crystal lattice distances,
| Heisenberg uncertainty, etc., but it sets a lower bound. 3nm ->
| 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we can
| get past 3 more generations using silicon. There's a max of 4.5
| years of Moore's law we're going to be able to squeeze out.
| That means we will not make it past 2030 with these kinds of
| improvements.
|
| I'd love to be shown how wrong I am about this, but I think
| we're entering the horizontal portion of the sigmoidal curve of
| exponential computational growth.
| dyauspitr wrote:
| 3nm doesn't mean the transistor is 3nm, it's just a marketing
| naming system at this point. The actual transistor is about
| 20-30nm or so.
| j_m_b wrote:
| Thanks for the comment. I looked more into this and it
| seems like not only are we in the era of diminished returns
| for computational abilities, costs have also now started
| matching the increased compute, i.e. 2x performance leads to
| 2x cost. Moore's law has already run its course and we're
| living in a new era of compute. We may get increased
| performance, but it will always be more expensive.
| hetman wrote:
| That may or may not still be too simple a model. Cells are full
| of complex nano scale machinery, and not only might it be
| plausible that some of it is involved in the processes of
| cognition,
| I'm aware of at least one study which identified some nano
| scale structures directly involved in how memory works in
| neurones. Not to mention a lot of what's happening has a fairly
| analogue dimension.
|
| I remember an interview with one neurologist who stated
| humanity has for centuries compared the functioning of the
| brain to the most complex technology devised yet. First it was
| compared to mechanical devices, then pipes and steam, then
| electrical circuits, then electronics and now finally
| computers. But he pointed out, the brain works like none of
| these things so we have to be aware of the limitations of our
| models.
| RaftPeople wrote:
| > _That may or may not still be too simple a model_
|
| Based on the stuff I've read, it's almost for sure too simple
| a model.
|
| One example is that single dendrites detect patterns of
| synaptic activity (sequences over time) which results in
| calcium signaling within the neuron and altered spiking.
| dekhn wrote:
| Annual reminder to re-read "There's plenty of room at the bottom"
| by Feynman.
| https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf
|
| Note the part where the biologists tell him to make an electron
| microscope that's 1000X more powerful. Then note what technology
| was used to scan these images.
| tim333 wrote:
| I think it's actually "What you should do in order for us to
| make more rapid progress is to make the electron microscope 100
| times better" and the state of art at the time was "it can only
| resolve about 10 angstroms" or I guess 1nm. So 100x better
| would be 0.1 angstrom / 0.01 nm.
|
| We have made some progress it seems. Googling I see "up to 0.05
| nm" for transmission electron microscopes and "less than 0.1
| nanometers" for scanning.
| https://www.kentfaith.co.uk/blog/article_which-electron-micr...
|
| For comparison the distance between hydrogen nuclei in H2 is
| 0.074 nm I think.
|
| You can see the shape of molecules but it's still a bit fuzzy
| to see individual atoms
| https://cosmosmagazine.com/science/chemistry/molecular-model...
| dekhn wrote:
| Resolution is only one aspect of EM that can be optimized.
| fractal618 wrote:
| Fascinating! I wonder how different that is from the mind of a
| man haha
| theogravity wrote:
| > The brain fragment was taken from a 45-year-old woman when she
| underwent surgery to treat her epilepsy. It came from the cortex,
| a part of the brain involved in learning, problem-solving and
| processing sensory signals.
|
| Wonder how they figured out which fragment to cut out.
| pfdietz wrote:
| I imagine they determined the focus of the seizures by
| electrical techniques.
|
| I worry this might make the sample biased in some way.
| notfed wrote:
| Imagine all the conclusions being made from a 1 mm cube of
| epileptic neurons.
| creer wrote:
| Considering the success of this work, I doubt this is the
| last such cubic millimeter to be mapped. Or perhaps the next
| one at even higher resolution. No worries.
| blincoln wrote:
| Why did the researchers use ML models to do the reconstruction
| and risk getting completely incorrect, hallucinated results when
| reconstructing a 3D volume accurately using 2D slices is a well-
| researched field already?
| scotty79 wrote:
| Maybe it's not about reconstructing a volume but about
| recognizing neurons within that volume.
| rmorey wrote:
| The methods used here are state of the art. The problem is not
| just turning 2D slices into a 3D volume, the problem is, given
| the 3D volume, determining boundaries between (and therefore
| the 3d shape of) objects (i.e. neurons, glia, etc) and
| identifying synapses
| VikingCoder wrote:
| I'm guessing a registration problem.
|
| If all of the layers were guaranteed to be orthographic with no
| twisting, shearing, scaling, squishing, with a consistent
| origin... Then yeah, there's a huge number of ways to just
| render that data.
|
| But if you physically slice layers first, and scan them second,
| there are all manner of physical processes that can make normal
| image stacking fail miserably.
| momojo wrote:
| Although the article mentions Artificial Intelligence, their
| paper[1] never actually mentions that term, and instead talks
| about their machine learning techniques. AFAIK, ML for things
| like cell-segmentation is a solved problem [2].
|
| [1]
| https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4....
| [2] https://www.ilastik.org/
| rmorey wrote:
| There are extremely effective techniques, but it is not
| really solved. The current techniques still require human
| proofreading to correct errors. Only a fraction of this
| particular dataset is proofread.
| layer8 wrote:
| Regarding the risk, as noted in the article, they are manually
| "proofreading" the construction.
| bugbuddy wrote:
| Based on the picture of a single neuron, the brain sim crowd
| should recalculate their estimates for the needed computing power
| again.
| brandonmenc wrote:
| Another proof point that AGI is probably not possible.
|
| Growing actual bio brains is just way easier. It's never going
| happen in silicon.
|
| Every machine will just have a cubic centimeter block of neuro
| meat embedded in it somewhere.
| skulk wrote:
| I agree, mostly because it's already being done!
|
| https://www.youtube.com/watch?v=V2YDApNRK3g
|
| https://www.youtube.com/watch?v=bEXefdbQDjw
| mr_toad wrote:
| You'd have to train them individually. One advantage of ANNs is
| that you can train them and then ship the model to anyone with
| a GPU.
| myrmidon wrote:
| Hard disagree on this.
|
| I strongly believe that there is a TON of potential for
| synthetic biology-- but not in computation.
|
| People just forget how superior current silicon is for running
| algorithms; if you consider e.g. a 17 by 17 digit
| multiplication (double precision), then a current CPU can do
| that in the time it takes for light to reach your eye from the
| screen in front of you (!!!). During all the completely
| unavoidable latency (the time any visual stimulus takes to
| propagate and reach your consciousness), the CPU does
| _millions_ more of those operations.
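|
| (A rough sketch of that scale gap in Python; the ~50 cm
| eye-to-screen distance, ~5 GHz clock, and ~150 ms
| stimulus-to-consciousness latency are ballpark assumptions of
| mine:)
|
|     c = 3e8                   # speed of light, m/s
|     light_s = 0.5 / c         # light over ~50 cm: ~1.7 ns
|     mul_s = 1 / 5e9           # ~one double multiply per cycle
|     percept_s = 0.15          # visual stimulus -> consciousness
|     print(light_s / mul_s)    # a few multiplies in the light delay
|     print(percept_s / mul_s)  # ~7.5e8 multiplies during the lag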
|
| Any biocomputer would be limited to low-bandwidth, ultra high
| latency operations purely by design.
|
| If you solely consider AGI as application, where abysmal
| latency and low input bandwidth might be acceptable, then it
| still appears to be extremely unlikely that we are going to
| reach that goal via synthetic biology; our current capabilities
| are just disappointing and not looking like they are gonna
| improve quickly.
|
| Building artificial neural networks on silicon, on the other
| hand, capitalises on the almost exponential gains we made
| during the last decades, and already produces results that
| compare quite favorably to, say, a schoolchild; I'd argue that
| current LLM-based approaches already eclipse the intellectual
| capabilities of ANY animal, for example. Artificial bio brains,
| on the other hand, are basically competing with worms right
| now...
|
| Also consider that even though our brains might look daunting
| from a pure "upper bound on required complexity/number of
| connections" point of view, these limits are very unlikely to
| be applicable, because they confound implementation details,
| redundancy and irrelevant details. And we have precise bounds on
| other parameters that our technology already matches easily:
|
| 1) Artificial intelligence architecture can be bootstrapped
| from a CD-ROM worth of data (~700MiB for the whole human
| genome-- even that is mostly redundant)
|
| 2) Bandwidth for training is quite low, even when compressing
| the ~20year training time for an actual human into a more
| manageable timeframe
|
| 3) Operating power does not require more than ~20W.
|
| 4) No understanding was necessary to create human
| intelligence-- its purely a result of an iterative process
| (evolution).
|
| Also consider human flight as an analogy: we did not achieve
| that by copying beating wings, powered by dozens of muscle
| groups and complex control algorithms-- those are just
| implementation details of existing biological systems. All we
| needed was the wing-concept itself and a bunch of trial-and-
| error.
| thfuran wrote:
| >Artificial intelligence architecture can be bootstrapped
| from a CD-ROM worth of data (~700MiB for the whole human
| genome-- even that is mostly redundant)
|
| Are you counting epigenetic factors in that? They're
| heritable.
| creer wrote:
| No reason for an AGI not to have a few cubes of goo slotted in
| here and there. But yeah, because of the training issue, they
| might be coprocessors or storage or something.
| greentext wrote:
| It looks like spaghetti code.
| idontwantthis wrote:
| > Jain's team then built artificial-intelligence models that were
| able to stitch the microscope images together to reconstruct the
| whole sample in 3D
|
| How do they know if their AI did it correctly or not?
| dvfjsdhgfv wrote:
| Why do these neurons have flat "heads"?
| ewchris wrote:
| Edge of the dataset.
| nakedneuron wrote:
| > the model showed neurons with tendrils that formed knots around
| themselves
|
| I wonder if this plays into the mechanism of epilepsy. Self-
| arousal...?
|
| Anybody qualified to comment on this?
| bluenose69 wrote:
| Neal Stephenson has a good novel that deals with this
| (https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell)
| ein0p wrote:
| As important and impressive a result as this is, I am reminded of
| the cornerstone problem of neuroscience, which goes something
| like this: if we knew next to nothing about processors but could
| attach electrodes to the die, would we be able to figure out how
| processors execute programs and what those programs do, in
| detail, just from the measurements alone? And now scale that up
| several orders of magnitude and introduce sensitivity to timing
| of arrival for signals, and you got the brain. Likewise ok, you
| have petabytes of data now, but will we ever get closer to
| understanding, for example, how cognition works? It was a bit of
| a shock for me when I found out (while taking an introductory
| comp neuroscience course) that we simply do not have tractable
| math to model more than a handful of neurons in the time
| domain. And they do actually operate in the time domain -
| timings are important
| for Hebbian learning, and there's no global "clock" - all that
| the brain does is a continuous process.
| golergka wrote:
| Particle physics works in a similar way, but instead of
| attaching electrodes, you shoot at them with guns and then
| analyze trajectories of the fragments.
| spacetimeuser5 wrote:
| The cheap monkey headset works in a similar way: monkeys just
| essentially continue to analyze trajectories of medieval
| cannon balls in the LHC and to count potatoes in the form of
| bytes.
| spacetimeuser5 wrote:
| >> The sample was immersed in preservatives and stained with
| heavy metals to make the cells easier to see.
|
| Try experimenting with immersing your brain in preservatives
| and staining it with heavy metals to see whether you would still
| be able to write a comment similar to the one above.
|
| No wonder that monkey methods continue to unveil monkey
| cognition.
| dudeinjapan wrote:
| > Try... immersing your brain in preservatives and staining
| with heavy metals
|
| I think we all do every day
| spacetimeuser5 wrote:
| Try using the protocols and doses from the original
| article.
| lll-o-lll wrote:
| Right. The arguments for the study of A.I. were that you will
| not discover the principles of flight by looking at a bird's
| feather under an electron microscope.
|
| It's fascinating, but we aren't going to understand
| intelligence this way. Emergent phenomena are part of
| complexity theory, and we don't have any maths for it. Our
| ignorance in this space is large.
|
| When I was young, I remember a common refrain being "will a
| brain ever be able to understand itself?". Perhaps not, but the
| drive towards understanding is still a worthy goal in my
| opinion. We need to make some breakthroughs in the study of
| complexity theory.
| hotiwuer324234 wrote:
| > but we aren't going to understand intelligence this way
|
| The same argument holds for "AI" too. We don't understand a
| damn thing about neural networks.
|
| There's more - we don't care to understand them as long as
| it's irrelevant to exploiting them.
| lll-o-lll wrote:
| > The same argument holds for "AI" too. We don't understand
| a damn thing about neural networks.
|
| Yes, which is why the current explosion in practical
| application isn't very interesting.
|
| > we don't care to understand them as long as it's
| irrelevant to exploiting them.
|
| For some definition of "we", I'm sure that's true. We don't
| need to understand things to make practical use of them.
| Giant Cathedrals were built without science and
| mathematics. Still, once we _do_ have the science and
| mathematics, generally exponential advancement results.
| hotiwuer324234 wrote:
| The reverse-engineering issue was popularized by this article,
| https://www.cell.com/cancer-cell/fulltext/S1535-6108%2802%29...
|
| On the second point, the failure of Openworm to model the very
| well-mapped-out C. elegans (~0.3k neurons) says a lot.
| spacetimeuser5 wrote:
| Yea, and at Planck-scale resolution, as a logical extension
| of the nanoscale with their "modern" measurement methodology,
| this cheap monkey headset just disintegrates, haha.
| NexusMethod wrote:
| After reading through all comments as of 2024/05/11 I (as a
| professor at some major university) am quite surprised that not
| _one_ single comment has asked the obvious question (instead of
| dishing out loads of (partial) "textbook knowledge" about brain
| functions, the difference between mammals and birds, AI and LLM
| etc.), which would be: what do all those strange structures and
| objects do which we know nothing about whatsoever? Have a look:
|
| https://h01-release.storage.googleapis.com/gallery.html
|
| I count seven.
| ben_w wrote:
| Neat, thanks.
|
| As a complete outsider who doesn't know what to look for, the
| dendrite inside soma (dendrite from one cell tunnelling through
| the soma of another) was the biggest surprise.
___________________________________________________________________
(page generated 2024-05-11 23:01 UTC)