[HN Gopher] Cubic millimetre of brain mapped in spectacular detail
___________________________________________________________________
Cubic millimetre of brain mapped in spectacular detail
Author : geox
Score : 203 points
Date : 2024-05-09 21:36 UTC (1 day ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| teuobk wrote:
| The interactive visualization is pretty great. Try zooming in on
| the slices and then scrolling up or down through the layers. Also
| try zooming in on the 3D model. Notice how hovering over any part
| of a neuron highlights all parts of that neuron:
|
| http://h01-dot-neuroglancer-demo.appspot.com/#!gs://h01-rele...
| jamiek88 wrote:
| My god. That is stunning.
|
| To think that's one single cubic millimetre of our brain and look at
| all those connections.
|
| Now I understand why crows can be so smart, walnut-sized brain
| be damned.
|
| What an amazing thing brains are.
|
| Possibly the most complex things in the universe.
|
| Is it complex enough to understand itself though? Is that
| logically even possible?
| ignoramous wrote:
| I wonder: if we manage to annotate our brain at this level of
| detail, and then let (some variant of the current) models train
| on it, will they intrinsically end up generalizing a model of
| intelligence?
| nicklecompte wrote:
| I think you would also need the epigenetic side, which is
| very poorly understood:
| https://www.universityofcalifornia.edu/news/biologists-
| trans...
|
| We have more detail than this about the C. elegans nematode
| brain, yet we still have no clue how nematode intelligence
| actually works.
| Animats wrote:
| How's OpenWorm coming along?
| nicklecompte wrote:
| Badly:
| https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-
| brai... (the comments have some updates as of 2023)
|
| Almost every other cell in the worm can be simulated with
| known biophysics. But we don't have a clue how any
| individual nematode neuron actually works. I don't have
| the link but there are a few teams in China working on
| visualizing brain activity in _living_ C. elegans, but it's
| difficult to get good measurements without affecting
| the behavior of the worm (e.g. reacting to the dye).
| nicklecompte wrote:
| Crow/parrot brains are tiny but in terms of neuron count they
| are twice as dense as primate brains (including ours):
| https://www.sciencedirect.com/science/article/pii/S096098221...
|
| If someone did this experiment with a crow brain I imagine it
| would look "twice as complex" (whatever that might mean). 250
| million years of evolution separates mammals from birds.
| jamiek88 wrote:
| Interesting! Thank you. I didn't know that.
| Terr_ wrote:
| I expect we'll find that it's all a matter of tradeoffs in
| terms of count vs size/complexity... kind of like how the
| "spoken data rate" of various human languages seems to be
| the same even though some have complicated big words versus
| more smaller ones etc.
| sdenton4 wrote:
| Birds are under a different set of constraints than non-
| bat mammals, of course... They're very different.
| Songbirds have ~4x finer time perception of audio than
| humans do, for example, which is exemplified by taking
| complex sparrow songs and slowing them down until you can
| actually hear the fine structure.
|
| The human 'spoken data rate' is likely due to average
| processing rates in our common hardware. Birds have a
| different architecture.
| Terr_ wrote:
| You misunderstand, I'm not making any kind of direct
| connection between human speech and bird song.
|
| I'm saying we will probably discover that the "overall
| performance" of different vertebrate neural setups are
| clustered pretty closely, even when the neurons are
| arranged rather differently.
|
| Human speech is just an example of another kind of
| performance-clustering, which occurs for similar
| metaphysical reasons between competing, evolving, related
| alternatives.
| pfdietz wrote:
| That shouldn't be too surprising, as a larger fraction of
| the volume of a brain should be taken up by "wiring" as the
| size of the brain expands.
| steve_adams_86 wrote:
| This might be a dumb question, because I doubt the
| distances between neurons make a meaningful difference...
| But could a small brain, dense with neurons like a crow,
| possibly lead to a difference in things like response to
| stimuli or "compute" speed so to speak?
| michaelhoney wrote:
| Actually I think that's pretty plausible. Signal speed in
| the brain is pretty slow - it would have to make some
| difference
| out_of_protocol wrote:
| Regarding compute speed - it checks out. Humans "think"
| via the neocortex, the thin outside layer of the brain. Poor
| locality, so signals need to travel a lot. Easy to expand,
| though. Crow brains have everything tightly concentrated
| in the center - fast communication between neurons, but hard
| to add more "thinking" matter later (therefore hard to
| evolve beyond what crows currently have).
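|
| A toy sketch of that latency intuition in Python (the conduction
| velocities and path lengths below are rough assumptions for
| illustration, not figures from the article):
|
|     # one-way signal delay over a long vs. short intracranial path
|     for v in (1.0, 10.0, 100.0):   # axon conduction velocity, m/s
|         human_path = 0.10          # ~10 cm across a human cortex (assumed)
|         crow_path = 0.01           # ~1 cm across a crow brain (assumed)
|         print(f"{v:>5} m/s: human {human_path / v * 1e3:.1f} ms,"
|               f" crow {crow_path / v * 1e3:.1f} ms")
|
| At any given velocity the shorter path is ~10x faster, which is the
| gist of the locality argument.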
| LargoLasskhyfv wrote:
| IIRC bird brains are 'packed/structured' very similarly to
| our cerebellum.
|
| So one would just need to pick that little cube out of our
| cerebellum to have that 'twice the complexity'.
| djmips wrote:
| It's amusing to say that bird brains are on the next-
| generation process node.
| layer8 wrote:
| We don't know what "understanding" means (we don't have a
| workable definition of it), so your question cannot be
| answered.
| gofreddygo wrote:
| That is awesome!
|
| The sheer number of things that work in co-ordination to make
| biology work!
|
| In-f*king-credible!
| oniony wrote:
| Hmm, that website does not honour my keyboard layout. Not sure
| how they managed that.
| CSSer wrote:
| For some people, this is all you need (sorry, couldn't resist)!
| eminence32 wrote:
| > cut the sample into around 5,000 slices -- each just 34
| nanometres thick -- that could be imaged using electron
| microscopes.
|
| Does anyone have any insight into how this is done without
| damaging the sample?
| talsit wrote:
| Using a Microtome (https://en.m.wikipedia.org/wiki/Microtome).
| dekhn wrote:
| The sample is stained (to make things visible), then embedded
| in a resin, then cut with a very sharp diamond knife, and the
| slices are captured on a tape reel.
|
| Paper:
| https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4 See
| Figure 1.
|
| The ATUM is described in more detail here https://www.eden-
| instruments.com/en/ex-situ-equipments/rmc-e...
|
| and there's a bunch of nice photos and explanations here
| https://www.wormatlas.org/EMmethods/ATUM.htm
|
| TL;DR this project is reaping all the benefits of the 21st
| century.
| posnet wrote:
| 1.4 PB/mm^3 (petabytes per cubic millimetre) x 1260 cm^3 (cubic
| centimetres, a large human brain) = 1.76x10^21 bytes = 1.76 ZB
| (zettabytes)
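|
| A quick check of that arithmetic in Python (assuming SI units,
| 1 PB = 1e15 bytes and 1 ZB = 1e21 bytes):
|
|     PB, ZB = 1e15, 1e21
|     bytes_per_mm3 = 1.4 * PB      # storage for the 1 mm^3 sample
|     brain_mm3 = 1260 * 1000       # 1260 cm^3 expressed in mm^3
|     total = bytes_per_mm3 * brain_mm3
|     print(f"{total:.2e} bytes = {total / ZB:.2f} ZB")  # ~1.76e+21, ~1.76 ZB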
| bahrant wrote:
| wow
| gary17the wrote:
| [AI] "Frontier [supercomputer]: the storage capacity is
| reported to be up to 700 petabytes (PB)" (0.0007 ZB).
|
| [AI] "The installed base of global data storage capacity [is]
| expected to increase to around 16 zettabytes in 2025".
|
| Thus, even the largest supercomputer on Earth could store no
| more than about 0.04 percent of the state of a single human
| brain. Even all the storage on the entire Internet could hold
| the state of only about 9 human brains.
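|
| A rough sketch of those ratios in Python (using the ~1.76 ZB per
| brain figure from the parent comment and the quoted capacities
| above):
|
|     ZB = 1e21
|     brain = 1.76 * ZB            # one human brain at this imaging density
|     frontier = 700e15            # Frontier: ~700 PB reported
|     global_storage = 16 * ZB     # projected 2025 installed base
|     print(f"Frontier: {frontier / brain:.3%} of one brain")   # ~0.04%
|     print(f"Global:   {global_storage / brain:.1f} brains")   # ~9.1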
|
| Astonishing.
| dekhn wrote:
| One point about storage - it's economically driven. If there
| were a demand signal (say, the government dedicated a few
| hundred billion dollars to a single storage system), hard
| drive manufacturers could deploy much more storage in a year.
| I've pointed this out to a number of scientists, but none of
| them could really think of a way to get the government to
| spend that much money just to store data without it curing a
| senator's heart disease.
| falcor84 wrote:
| > without it curing a senator's heart disease
|
| Obviously I'm not advocating for this, but I'll just link
| to the Mad TV skit about how the drunk president cured
| cancer.
|
| https://www.youtube.com/watch?v=va71a7pLvy8
| falcor84 wrote:
| I appreciate you're running the numbers to extrapolate this
| approach, but just wanted to note that this particular figure
| is neither an upper bound nor a lower bound for actually storing
| the "state of a single human brain". Assuming the intent
| would be to store the amount of information needed to
| essentially "upload" the mind onto a computer emulation, we
| might not yet have all the details we need in this kind of
| scanning, but once we do, we may likely discover that a huge
| portion of it is redundant.
|
| In any case, it seems likely that we're on track to have both
| the computational ability and the actual neurological data
| needed to create "uploaded intelligences" sometime over
| the next decade. Lena [0] tells of the first successfully
| uploaded scan taking place in 2031, and I'm concerned that
| reality won't be far off.
|
| [0] https://qntm.org/mmacevedo
| rmorey wrote:
| we are nowhere near whole human brain volume EM. the next
| major milestone in the field is a whole mouse brain in the
| next 5-10 years, which is possible but ambitious
| falcor84 wrote:
| What am I missing? Assuming exponential growth in
| capability, that actually sounds very on track. If we can
| get from 1 cubic millimeter to a whole mouse brain in
| 5-10 years, why should it take more than a few extra
| years to scale that to a human brain?
| rmorey wrote:
| assuming exponential growth in capacity is a big
| assumption!
| gary17the wrote:
| > we may likely discover that a huge portion of [a human
| brain] is redundant
|
| Unless one's understanding of the algorithmic inner workings
| of a particular black-box system is actually very good, it is
| likely impossible not only to discard any of its state, but
| even to implement any kind of meaningful error detection if
| you do discard.
|
| Given the sheer size and complexity of a human brain, I
| feel it is actually very unlikely that we will be able to
| understand its inner workings to such a significant degree
| anytime soon. I'm not optimistic, because so far we have no
| idea how even laughably simple, in comparison, AI models
| work[0].
|
| [0] "God Help Us, Let's Try To Understand AI
| Monosemanticity", https://www.astralcodexten.com/p/god-
| help-us-lets-try-to-und...
| RaftPeople wrote:
| > _In any case, it seems likely that we 're on track to
| have both the computational ability and the actual
| neurological data needed to create an "uploaded
| intelligences" sometime over the next decade._
|
| They don't even know how a single neuron works yet. There
| is complexity and computation at many scales and
| distributed throughout the neuron and other types of cells
| (e.g. astrocytes) and they are discovering more
| relentlessly.
|
| They just recently (last few years) found that dendrites
| have local spiking and non-linear computation prior to
| forwarding the signal to the soma. They couldn't tell that
| was happening previously because the equipment couldn't
| detect the activity.
|
| They discovered that astrocytes don't just have local
| calcium wave signaling (local=within the extensions of the
| cell), they also forward calcium waves to the soma which
| integrates that information just like a neuron soma does
| with electricity.
|
| Single dendrites can detect patterns of synaptic activity
| and respond with calcium and electrical signaling (i.e.
| when synapses fire in a particular timing sequence, a
| signal is forwarded to the soma).
|
| It's really amazing how much computationally relevant
| complexity there is, and how much they keep adding to their
| knowledge each year. (I have a file of notes with about
| 2,000 lines of these types of interesting factoids I've
| been accumulating as I read).
| treprinum wrote:
| AI folks dream about creating superintelligence to guide our
| lives, but all we can manage is a drosophila's brain.
| userbinator wrote:
| It's _very_ lossy and unreliable storage, however. To use an
| analogy, it's only a huge amount of ECC that keeps things
| (just barely) working.
| g4zj wrote:
| Is there a name for the somewhat uncomfortable feeling caused by
| seeing something like this? I wish I could better describe it. I
| just somehow feel a bit strange being presented with microscopic
| images of brain matter. Is that normal?
| ignoramous wrote:
| Trypophobia, visceral, uncanny, squeamish?
| greenbit wrote:
| Is it the shapes, similar to how patterns of holes can disturb
| some people? Or is it more abstract, like "unknowable fragments
| of someone's inner-most reality flowed through there"? Not that
| I have a name for it either way. The very shape of it (in
| context) might represent an aspect of memory or personality or
| who knows what.
| g4zj wrote:
| > "unknowable fragments of someone's inner-most reality
| flowed through there"
|
| It's definitely along these lines. Like so much (everything?)
| that is us happens amongst this tiny little mesh of
| connections. It's just eerie, isn't it?
|
| Sorry for the mundane, slightly off-topic question. This is
| far outside my areas of knowledge, but I thought I'd ask
| anyhow. :)
| greenbit wrote:
| It's kind of feeling a bit like an intruder? There probably
| is a name for that.
| Zenzero wrote:
| For me the disorder of it is stressful to look at. The brain
| has poor cable management.
|
| That said I do get this eerie void feeling from the image. My
| first thought was to marvel how this is what I am as a
| conscious being in terms of my "implementation", and it is a
| mess of fibers locked away in the complete darkness of my
| skull.
|
| There is also the morose feeling from knowing that any image of
| human brain tissue was once a person with a life and
| experiences. It is your living brain looking at a dead brain.
| bglazer wrote:
| I'm not religious but it's as close to a spiritual experience
| as I'll ever have. It's the feeling of being confronted with
| something very immediate but absolutely larger than I'll ever
| be able to comprehend
| adamwong246 wrote:
| https://hitchhikers.fandom.com/wiki/Total_Perspective_Vortex
| dekhn wrote:
| When I did fetal pig dissection, nothing bothered me until I
| got to the brain. I dunno what it is, maybe all those folds or
| the brain juice it floats in, but I found it disconcerting.
| carabiner wrote:
| It makes me think humans aren't special, and there is no soul,
| and consciousness is just a bunch of wires like computers.
| Seriously, seeing that the ENTIRETY of human experience, love
| and tragedy and achievement, is just electric potentials
| transmitted by those wiggly cells, just extinguishes any magic
| I once saw in humanity.
| SubiculumCode wrote:
| Welcome to the Existential Bar at the End of the Universe
| mensetmanusman wrote:
| You might be confusing the interface with the operating
| system.
| ninju wrote:
| https://scaleofuniverse.com/en
| throwup238 wrote:
| _> The 3D map covers a volume of about one cubic millimetre, one-
| millionth of a whole brain, and contains roughly 57,000 cells and
| 150 million synapses -- the connections between neurons._
|
| This is great and provides a hard data point for some napkin math
| on how big a neural network model would have to be to emulate the
| human brain. 150 million synapses / 57,000 neurons is an average
| of 2,632 synapses per neuron. The adult human brain has 100 (+-
| 20) billion or 1e11 neurons so assuming the average rate of
| synapse/neuron holds, that's 2.6e14 total synapses.
|
| Assuming 1 parameter per synapse, that'd make the minimum viable
| model well over a hundred times larger than state-of-the-art GPT4
| (according to the rumored 1.8e12 parameters). I don't think
| that's granular enough and we'd need to assume 10-100 ion
| channels per synapse and I think at least 10 parameters per ion
| channel, putting the number closer to 2.6e16+ parameters, or 4+
| orders of magnitude bigger than GPT4.
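|
| The napkin math above, as a Python sketch (the per-synapse
| parameter counts are assumptions for illustration, not
| measurements):
|
|     cells_per_mm3 = 57_000
|     synapses_per_mm3 = 150e6
|     syn_per_neuron = synapses_per_mm3 / cells_per_mm3   # ~2,632
|     neurons = 1e11                                       # ~100 billion
|     total_synapses = neurons * syn_per_neuron            # ~2.6e14
|     gpt4_params = 1.8e12                                 # rumored figure
|     print(total_synapses / gpt4_params)        # ~146x at 1 param/synapse
|     print(total_synapses * 100 / gpt4_params)  # ~1.5e4x at ~100 params/synapse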
|
| There are other problems of course like implementing
| neuroplasticity, but it's a fun ball park calculation. Computing
| power should get there around 2048:
| https://news.ycombinator.com/item?id=38919548
| gibsonf1 wrote:
| Except you'd be missing the part that a neuron is not just a
| node with a number but a computational system itself.
| bglazer wrote:
| Computation is really integrated through every scale of
| cellular systems. Individual proteins are capable of basic
| computation, which is then integrated into regulatory
| circuits, epigenetics, and cellular behavior.
|
| Pdf: "Protein molecules as computational elements in living
| cells - Dennis Bray"
| https://www.cs.jhu.edu/~basu/Papers/Bray-
| Protein%20Computing...
| krisoft wrote:
| I think you are missing the point.
|
| The calculation is intentionally underestimating the neurons,
| and even with that the brain ends up having more parameters
| than the current largest models by orders of magnitude.
|
| Yes, the estimation intentionally models the neurons as
| simpler than they are likely to be. No, it is not "missing"
| anything.
| jessekv wrote:
| The point is to make a ballpark estimate, or at least to
| estimate the order of magnitude.
|
| From the sibling comment:
|
| > Individual proteins are capable of basic computation
| which are then integrated into regulatory circuits,
| epigenetics, and cellular behavior.
|
| If this is true, then there may be many orders of magnitude
| unaccounted for.
|
| Imagine if our intelligent thought actually depends
| irreducibly on the complex interactions of proteins bumping
| into each other in solution. It would mean computers would
| never be able to play the same game.
| choilive wrote:
| > Imagine if our intelligent thought actually depends
| irreducibly on the complex interactions of proteins
| bumping into each other in solution. It would mean
| computers would never be able to play the same game.
|
| AKA a quantum computer. It's not a "never", but a question of
| how much computation you would need to throw at the problem.
| marcosdumay wrote:
| There's a lot of in-neuron complexity, I'm sure there is some
| cross-synapse signaling (I mean, how can it not exist? There's
| nothing stopping it.), and I don't think the synapse behavior
| can be modeled as just more signals.
| cyberax wrote:
| On the other hand, a significant amount of neural circuitry
| seems to be dedicated to "housekeeping" needs, and to functions
| such as locomotion.
|
| So we might need significantly less brain matter for general
| intelligence.
| alanbernstein wrote:
| Or perhaps the housekeeping of existing in the physical world
| is a key aspect of general intelligence.
| Intralexical wrote:
| Isn't that kinda obvious? A baby that grows up in a sensory
| deprivation tank does not... develop, as most intelligent
| persons do.
| squigz wrote:
| A true sensory deprivation tank is not a fair comparison,
| I think, because AI is not deprived of all its 'senses' -
| it is still prompted, responds, etc.
|
| Would a baby that grows up in a sensory deprivation tank,
| but is still able to communicate and learn from other
| humans, develop in a recognizable manner?
|
| I would think so. Let's not try it ;)
| Intralexical wrote:
| > Would a baby that grows up in a sensory deprivation
| tank, but is still able to communicate and learn from
| other humans, develop in a recognizable manner?
|
| I don't think so, because humans communicate and learn
| largely _about_ the world. Words mean nothing without at
| least _some_ sense of objective physical reality (be it
| via sight, sound, smell, or touch) that the words refer
| to.
|
| Helen Keller, with access to three out of five main
| senses (and an otherwise fully functioning central
| nervous system): Before my teacher came
| to me, I did not know that I am. I lived in a world that
| was a no-world. I cannot hope to describe adequately that
| unconscious, yet conscious time of nothingness... Since I
| had no power of thought, I did not compare one mental
| state with another. I did not know that I
| knew aught, or that I lived or acted or desired. I had
| neither will nor intellect. I was carried along to
| objects and acts by a certain blind natural impetus. I
| had a mind which caused me to feel anger, satisfaction,
| desire. These two facts led those about me to suppose
| that I willed and thought. I can remember all this, not
| because I knew that it was so, but because I have tactual
| memory. It enables me to remember that I never contracted
| my forehead in the act of thinking. I never viewed
| anything beforehand or chose it. I also recall tactually
| the fact that never in a start of the body or a heart-
| beat did I feel that I loved or cared for anything. My
| inner life, then, was a blank without past, present, or
| future, without hope or anticipation, without wonder or
| joy or faith.
|
| I remember reading her book. The breakthrough moment
| where she acquired language, and conscious thought,
| directly involved correlating the physical tactile
| feeling of running water to the letters "W", "A", "T",
| "E", "R" traced onto her palm.
| squigz wrote:
| That's a really good point. Thanks!
| choilive wrote:
| My interpretation of this (beautiful) quote is there was
| a traceable moment in HK's life where she acquired
| "consciousness" or perhaps even self-
| awareness/metacognition/metaphysics? That once the
| synaptic connections necessary to bridge the abstract
| notion of language to the physical world were formed, they
| led her down the path of acquiring the abilities that
| distinguish humans from other animals?
| itsthecourier wrote:
| Artificial thinking doesn't require an artificial brain, just
| as our car's locomotion system doesn't copy our own walking
| system.
|
| The car's engine, transmission and wheels require no muscles
| or nerves.
| throw310822 wrote:
| Or you can subscribe to Geoffrey Hinton's view that artificial
| neural networks are actually much more efficient than real
| ones - more or less the opposite of what we've believed for
| decades - that is, that artificial neurons were just a poor model
| of the real thing.
|
| Quote:
|
| "Large language models are made from massive neural networks
| with vast numbers of connections. But they are tiny compared
| with the brain. "Our brains have 100 trillion connections,"
| says Hinton. "Large language models have up to half a trillion,
| a trillion at most. Yet GPT-4 knows hundreds of times more than
| any one person does. So maybe it's actually got a much better
| learning algorithm than us."
|
| GPT-4's connections at the density of this brain sample would
| occupy a volume of 5 cubic centimeters; that is, 1% of a human
| cortex. And yet GPT-4 is able to speak some 80 languages more
| or less fluently, translate, write code, imitate the writing
| styles of hundreds, maybe thousands of authors, converse about
| stuff ranging from philosophy to cooking, to science, to the
| law.
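|
| Quick check of that volume figure in Python (assuming roughly a
| trillion connections for GPT-4 and the 150M synapses/mm^3 density
| of the mapped sample):
|
|     synapse_density = 150e6      # synapses per mm^3 in the sample
|     gpt4_connections = 0.75e12   # between "half a trillion" and "a trillion"
|     volume_mm3 = gpt4_connections / synapse_density
|     print(volume_mm3 / 1000)     # ~5 cm^3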
| dragonwriter wrote:
| I mean, Hinton's premises are, if not quite clearly wrong,
| entirely speculative (which doesn't invalidate the
| conclusions about efficiency that they are offered to
| support, but does leave them without support). GPT-4 can
| produce convincing written text about a wider array of topics
| than any one person can, because it's a model optimized for
| taking in and producing convincing written text, trained
| extensively on written text.
|
| Humans know a lot of things that are not revealed by inputs
| and outputs of written text (or imagery), and GPT-4 doesn't
| have any indication of this physical, performance-revealed
| knowledge, so even if we view what GPT-4 talks convincingly
| about as "knowledge", trying to compare its knowledge in the
| domains it operates in with any human's knowledge which is
| far more multimodal is... well, there's no good metric for
| it.
| Intralexical wrote:
| Try asking an LLM about something which is semantically
| patently ridiculous, but lexically superficially similar to
| something in its training set, like "the benefits of laser
| eye removal surgery" or "a climbing trip to the Mid-
| Atlantic Mountain Range".
|
| Ironically, I suppose part of the apparent "intelligence"
| of LLMs comes from reflecting the intelligence of human
| users back at us. As a human, the prompts you provide an
| LLM likely "make sense" on some level, so the statistically
| generated continuations of your prompts are likelier to
| "make sense" as well. But if you don't provide an ongoing
| anchor to reality within your own prompts, then the outputs
| make it more apparent that the LLM is simply regurgitating
| words which it does not/cannot understand.
|
| On your point of human knowledge being far more multimodal
| than LLM interfaces, I'll add that humans also have special
| neurological structures to handle self-awareness, sensory
| inputs, social awareness, memory, persistent intention,
| motor control, neuroplasticity/learning- Any number of such
| traits, which are easy to take for granted, but
| indisputably fundamental parts of human intelligence. These
| abilities aren't just emergent properties of the total
| number of neurons; they live in special hardware like
| mirror neurons, special brain regions, and spindle neurons.
| A brain cell in your cerebellum is not generally
| interchangeable with a cell in your visual or frontal
| cortices.
|
| So when a human "converse[s] about stuff ranging from
| philosophy to cooking" in an honest way, we (ideally) do
| that as an expression of our _entire_ internal state. But
| GPT-4 structurally does not _have_ those parts, despite
| being able to output words as if it might, so as you say,
| it "generates" convincing text only because it's optimized
| for producing convincing text.
|
| I think LLMs may well be some kind of an adversarial attack
| on our own language faculties. We use words to express
| ourselves, and we take for granted that our words usually
| reflect an intelligent internal state, so we instinctively
| assume that anything else which is able to assemble words
| must also be "intelligent". But that's not necessarily the
| case. You can have extremely complex external behaviors
| that appear intelligent or intentioned without actually
| internally being so.
| kthejoker2 wrote:
| > Try asking an LLM about something which is semantically
| patently ridiculous, but lexically superficially similar
| to something in its training set, like "the benefits of
| laser eye removal surgery" or "a climbing trip to the
| Mid-Atlantic Mountain Range".
|
| Without anthropomorphizing it, it does respond like an
| alien / 5 year old child / spec fiction writer who will
| cheerfully "go along with" whatever premise you've laid
| before it.
|
| Maybe a better thought is: at what point does a human
| being "get" that "the benefits of laser eye removal
| surgery" is "patently ridiculous" ?
| squigz wrote:
| > it does respond like a ... 5 year old child
|
| This is the comparison that's made most sense to me as
| LLMs evolve. Children behave almost exactly as LLMs do -
| making stuff up, going along with whatever they're
| prompted with, etc. I imagine this technology will go
| through more similar phases to human development.
| Intralexical wrote:
| > Maybe a better thought is: at what point does a human
| being "get" that "the benefits of laser eye removal
| surgery" is "patently ridiculous" ?
|
| Probably as soon as they have any concept of physical
| reality and embodiment. Arguably before they know what
| lasers are. Certainly long before they have the lexicon
| and syntax to respond to it by explaining LASIK. LLMs
| have the latter, but can only use that to (also without
| anthropomorphizing) pretend they have the former.
|
| In humans, language is a tool for expressing complex
| internal states. Flipping that around means that
| something which _only_ has language may appear as if it
| has internal intelligence. But generating words in the
| approximate "right" order isn't actually a substitute
| for experiencing and understanding the concepts those
| words refer to.
|
| My point is that it's not a "point" on a continuous
| spectrum which distinguishes LLMs from humans. They're
| missing parts.
| ToValueFunfetti wrote:
| Do I need different prompts? These results seem sane to
| me. It interprets laser eye removal surgery as referring
| to LASIK, which I would do as well. When I clarified that
| I did mean removal, it said that the procedure didn't
| exist. It interprets Mid-Atlantic Mountain Range as
| referring to the Mid-Atlantic Ridge and notes that it is
| underwater and hard to access. Not that I'm arguing GPT-4
| has a deeper understanding than you're suggesting, but
| these examples aren't making your point.
|
| https://chat.openai.com/share/2234f40f-ccc3-4103-8f8f-8c3e68...
|
| https://chat.openai.com/share/1642594c-6198-46b5-bbcb-984f1f...
| Intralexical wrote:
| Tested with GPT-3.5 instead of GPT-4.
|
| > When I clarified that I did mean removal, it said that
| the procedure didn't exist.
|
| My point in my first two sentences is that by clarifying
| with emphasis that you do mean " _removal_ ", you are
| actually adding information into the system to indicate
| to it that laser eye removal is (1) distinct from LASIK
| and (2) maybe not a thing.
|
| If you do not do that, but instead reply as if laser eye
| removal is completely normal, it will switch to using the
| term "laser eye removal" itself, while happily outputting
| advice on "choosing a glass eye manufacturer for after
| laser eye removal surgery" and telling you which drugs
| work best for "sedating an agitated patient during a
| laser eye removal operation":
|
| https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...
|
| So the sanity of the response is a reflection of your own
| intelligence, and a result of you as the prompter
| affirmatively steering the interaction back into contact
| with reality.
| ToValueFunfetti wrote:
| I tried all of your follow-up prompts against GPT-4, and
| it never acknowledged 'removal' and instead talked about
| laser eye surgery. I can't figure out how to share it now
| that I've got multiple variants, but, for example,
| excerpt in response to the glass eye prompt:
|
| >If someone is considering a glass eye after procedures
| like laser eye surgery (usually due to severe
| complications or unrelated issues), it's important to
| choose the right manufacturer or provider. Here are some
| key factors to consider
|
| I did get it to accept that the eye is being removed by
| prompting, "How long will it take before I can replace
| the eye?", but it responds:
|
| >If you're considering replacing an eye with a prosthetic
| (glass eye) after an eye removal surgery (enucleation),
| the timeline for getting a prosthetic eye varies based on
| individual healing.[...]
|
| and afaict, enucleation is a real procedure. An actual
| intelligence would have called out my confusion about the
| prior prompt at that point, but ultimately it hasn't said
| anything incorrect.
|
| I recognize you don't have access to GPT-4, so you can't
| refine your examples here. It definitely still
| hallucinates at times, and surely there are prompts which
| compel it to do so. But these ones don't seem to hold up
| against the latest model.
| dsalfdslfdsa wrote:
| "Efficient" and "better" are very different descriptors of a
| learning algorithm.
|
| The human brain does what it does using about 20W. LLM power
| usage is somewhat unfavourable compared to that.
| throw310822 wrote:
| You mean energy-efficient; this would be neuron- or
| synapse-efficient.
| dsalfdslfdsa wrote:
| I don't think we can say that, either. After all, the
| brain is able to perform both processing and storage with
| its neurons. The quotes about LLMs are talking only about
| connections between data items stored elsewhere.
| throw310822 wrote:
| Stored where?
| dsalfdslfdsa wrote:
| You tell me. Not in the trillion links of a LLM, that's
| for sure.
| throw310822 wrote:
| I'm not aware that (base) LLMs use any form of database
| to generate their answers- so yes, all their knowledge is
| stored in their hundreds of billions of synapses.
| dsalfdslfdsa wrote:
| Fair enough. OTOH, generating human-like text responses
| is a relatively small part of the human brain's skillset.
| choilive wrote:
| The "knowledge" of an LLM is indeed stored in the
| connections between neurons. This is analogous to real
| neurons as well. Your neurons and the connections between
| them are the memory.
| lanstin wrote:
| An LLM does not know math as well as a professor, judging from
| the large number of false functional analysis proofs I have
| had it generate while trying to learn functional analysis. In
| fact, the thing it seems to lack is a sense of what makes a
| proof true vs. fallacious; it also has a tendency to answer
| ill-posed questions. "How would you prove this incorrectly
| transcribed problem" will get fourteen steps, with 8 and 12
| obviously (to a student) wrong, whereas the professor will
| step back and ask what I am actually trying to prove.
| creer wrote:
| Yes and no on the order of magnitude required for decent AI;
| there is still (that I know of) very little hard data on info density
| in the human brain. What there is points at entire sections
| that can sometimes be destroyed or actively removed while
| conserving "general intelligence".
|
| Rather than "humbling" I think the result is very encouraging:
| It points at major imaging / modeling progress, and it gives
| hard numbers on a very efficient (power-wise, size overall) and
| inefficient (at cable management and probably redundancy and
| permanence, etc) intelligence implementation. The numbers are
| large but might be pretty solid.
|
| Don't know about upload though...
| dekhn wrote:
| Annual reminder to re-read "There's plenty of room at the bottom"
| by Feynman.
| https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf
|
| Note the part where the biologists tell him to make an electron
| microscope that's 1000X more powerful. Then note what technology
| was used to scan these images.
| tim333 wrote:
| I think it's actually "What you should do in order for us to
| make more rapid progress is to make the electron microscope 100
| times better" and the state of art at the time was "it can only
| resolve about 10 angstroms" or I guess 1nm. So 100x better
| would be 0.1 angstrom / 0.01 nm.
|
| We have made some progress it seems. Googling I see "up to 0.05
| nm" for transmission electron microscopes and "less than 0.1
| nanometers" for scanning.
| https://www.kentfaith.co.uk/blog/article_which-electron-micr...
|
| For comparison the distance between hydrogen nuclei in H2 is
| 0.074 nm I think.
|
| You can see the shape of molecules but it's still a bit fuzzy
| to see individual atoms
| https://cosmosmagazine.com/science/chemistry/molecular-model...
| dekhn wrote:
| Resolution is only one aspect of EM that can be optimized.
| fractal618 wrote:
| Fascinating! I wonder how different that is from the mind of a
| man haha
| theogravity wrote:
| > The brain fragment was taken from a 45-year-old woman when she
| underwent surgery to treat her epilepsy. It came from the cortex,
| a part of the brain involved in learning, problem-solving and
| processing sensory signals.
|
| Wonder how they figured out which fragment to cut out.
| pfdietz wrote:
| I imagine they determined the focus of the seizures by
| electrical techniques.
|
| I worry this might make the sample biased in some way.
| notfed wrote:
| Imagine all the conclusions being made from a 1 mm cube of
| epileptic neurons.
| creer wrote:
| Considering the success of this work, I doubt this is the
| last such cubic millimeter to be mapped. Or perhaps the next
| one at even higher resolution. No worries.
| blincoln wrote:
| Why did the researchers use ML models to do the reconstruction
| and risk getting completely incorrect, hallucinated results when
| reconstructing a 3D volume accurately using 2D slices is a well-
| researched field already?
| scotty79 wrote:
| Maybe it's not about reconstructing a volume but about
| recognizing neurons within that volume.
| rmorey wrote:
| The methods used here are state of the art. The problem is not
| just turning 2D slices into a 3D volume, the problem is, given
| the 3D volume, determining boundaries between (and therefore
| the 3d shape of) objects (i.e. neurons, glia, etc) and
| identifying synapses
| VikingCoder wrote:
| I'm guessing a registration problem.
|
| If all of the layers were guaranteed to be orthographic with no
| twisting, shearing, scaling, squishing, with a consistent
| origin... Then yeah, there's a huge number of ways to just
| render that data.
|
| But if you physically slice layers first, and scan them second,
| there are all manner of physical processes that can make normal
| image stacking fail miserably.
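|
| For illustration, a minimal registration sketch in Python using
| scikit-image's phase correlation (translation only; this shows the
| general idea, not the pipeline the authors actually used):
|
|     import numpy as np
|     from scipy.ndimage import shift as nd_shift
|     from skimage.registration import phase_cross_correlation
|
|     def align_stack(slices):
|         """Align each slice to its predecessor by translation only."""
|         aligned = [slices[0]]
|         for img in slices[1:]:
|             # (row, col) shift needed to register img to the previous slice
|             offset, _, _ = phase_cross_correlation(aligned[-1], img)
|             aligned.append(nd_shift(img, offset))
|         return np.stack(aligned)
|
| Real slices also shear, stretch, fold and tear, which is exactly
| where a pure-translation model like this fails and learned methods
| start to help.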
| momojo wrote:
| Although the article mentions Artificial Intelligence, their
| paper[1] never actually mentions that term, and instead talks
| about their machine learning techniques. AFAIK, ML for things
| like cell segmentation is a solved problem [2].
|
| [1]
| https://www.biorxiv.org/content/10.1101/2021.05.29.446289v4....
| [2] https://www.ilastik.org/
| rmorey wrote:
| There are extremely effective techniques, but it is not
| really solved. The current techniques still require human
| proofreading to correct errors. Only a fraction of this
| particular dataset is proofread.
| layer8 wrote:
| Regarding the risk, as noted in the article, they are manually
| "proofreading" the construction.
| bugbuddy wrote:
| Based on the picture of a single neuron, the brain sim crowd
| should recalculate their estimates for the needed computing power
| again.
| brandonmenc wrote:
| Another proof point that AGI is probably not possible.
|
| Growing actual bio brains is just way easier. It's never going to
| happen in silicon.
|
| Every machine will just have a cubic centimeter block of neuro
| meat embedded in it somewhere.
| skulk wrote:
| I agree, mostly because it's already being done!
|
| https://www.youtube.com/watch?v=V2YDApNRK3g
|
| https://www.youtube.com/watch?v=bEXefdbQDjw
| mr_toad wrote:
| You'd have to train them individually. One advantage of ANNs is
| that you can train them and then ship the model to anyone with
| a GPU.
| myrmidon wrote:
| Hard disagree on this.
|
| I strongly believe that there is a TON of potential for
| synthetic biology-- but not in computation.
|
| People just forget how superior current silicon is for running
| algorithms; if you consider e.g. a 17 by 17 digit
| multiplication (double precision), then a current CPU can do
| that in the time it takes for light to reach your eye from the
| screen in front of you (!!!). During all the completely
| unavoidable latency (the time any visual stimulus takes to
| propagate and reach your consciousness), the CPU does
| _millions_ more of those operations.
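|
| A back-of-the-envelope version of that comparison (the viewing
| distance, clock rate, and perceptual latency below are assumptions
| for illustration):
|
|     c = 3e8                  # speed of light, m/s
|     light_time = 0.5 / c     # ~1.7 ns for 50 cm screen-to-eye
|     mults_per_s = 4e9        # ~one double multiply per cycle at 4 GHz
|     perception = 0.15        # ~150 ms stimulus-to-awareness
|     print(light_time * mults_per_s)  # a few multiplies during light travel
|     print(perception * mults_per_s)  # ~6e8 during the perceptual latency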
|
| Any biocomputer would be limited to low-bandwidth, ultra high
| latency operations purely by design.
|
| If you solely consider AGI as application, where abysmal
| latency and low input bandwidth might be acceptable, then it
| still appears to be extremely unlikely that we are going to
| reach that goal via synthetic biology; our current capabilities
| are just disappointing and not looking like they are gonna
| improve quickly.
|
| Building artificial neural networks on silicon, on the other
| hand, capitalises on the almost exponential gains we made
| during the last decades, and already produces results that
| compare quite favorably to, say, a schoolchild; I'd argue that
| current LLM based approaches already eclipse the intellectual
| capabilities of ANY animal, for example. Artificial bio brains,
| on the other hand, are basically competing with worms right
| now...
|
| Also consider that even though our brains might look daunting
| from a pure "upper bound on required complexity/number of
| connections" point of view, these limits are very unlikely to
| be applicable, because they confound implementation details,
| redundancy and irrelevant details. And we have precise bound on
| other parameters, that our technology already matches easily:
|
| 1) Artificial intelligence architecture can be bootstrapped
| from a CD-ROM worth of data (~700MiB for the whole human
| genome-- even that is mostly redundant)
|
| 2) Bandwidth for training is quite low, even when compressing
| the ~20year training time for an actual human into a more
| manageable timeframe
|
| 3) Operating power does not require more than ~20W.
|
| 4) No understanding was necessary to create human
| intelligence-- it's purely a result of an iterative process
| (evolution).
|
| Also consider human flight as an analogy: we did not achieve
| that by copying beating wings, powered by dozens of muscle
| groups and complex control algorithms-- those are just
| implementation details of existing biological systems. All we
| needed was the wing-concept itself and a bunch of trial-and-
| error.
| creer wrote:
| No reason for an AGI not to have a few cubes of goo slotted in
| here and there. But yeah, because of the training issue, they
| might be coprocessors or storage or something.
| greentext wrote:
| It looks like spaghetti code.
| idontwantthis wrote:
| > Jain's team then built artificial-intelligence models that were
| able to stitch the microscope images together to reconstruct the
| whole sample in 3D
|
| How do they know if their AI did it correctly or not?
| dvfjsdhgfv wrote:
| Why do these neurons have flat "heads"?
___________________________________________________________________
(page generated 2024-05-10 23:01 UTC)