[HN Gopher] Cerebrum: Simulate and infer synaptic connectivity i...
___________________________________________________________________
Cerebrum: Simulate and infer synaptic connectivity in large-scale
brain networks
Author : notallm
Score : 61 points
Date : 2024-12-24 18:14 UTC (4 hours ago)
(HTM) web link (svbrain.xyz)
(TXT) w3m dump (svbrain.xyz)
| HL33tibCe7 wrote:
| Question: would such a simulation be conscious? If not, why not?
| jncfhnb wrote:
| Also, would it be quorled?
| fellerts wrote:
| What now?
| jncfhnb wrote:
| Precisely
| GlenTheMachine wrote:
| This, exactly.
|
| Nobody can define what consciousness is in terms that can be
| experimentally validated. Until that happens, not only can the
| question not be answered, it isn't even a question that makes
| any sense.
| bqmjjx0kac wrote:
| There are more questions that make sense than those that
| can be tested.
|
| To the point, I have a convincing experience of
| consciousness and free will -- qualia -- and I suspect a
| digital clone of my brain would have a similar experience.
| Although this question is not testable, I think it's
| inaccurate to say that it doesn't make sense.
| jncfhnb wrote:
| It's not really the testability that's the problem.
| It's the fact that you're asking about X where X is
| undefined.
|
| If you can provide a satisfying definition of X, it's very
| clear.
| kelseyfrog wrote:
| How would you operationalize quorledness?
| Vampiero wrote:
| Answer: no one knows how consciousness works
| fallingknife wrote:
| If you accept the two premises:
|
| 1. A human brain has consciousness
|
| 2. The device is a perfect digital replica of a human brain
|
| I can't think of any argument that the device does not have
| consciousness that doesn't either rely on basically magic or
| lead to other ridiculous conclusions.
| Vampiero wrote:
| The argument is that if you accept your precondition then you
| must also accept superdeterminism and that free will does not
| exist.
|
| Because physicalism implies that the mind is an emergent
| phenomenon, and because quantum physics is a linear theory,
| there's simply no place for free will unless you go down some
| weird/unfalsifiable rabbit holes like the MWI.
|
| So a lot of people prefer to latch onto the idea that there
| is a soul, because if there wasn't one, then they wouldn't be
| special anymore.
| namero999 wrote:
| At the present state of affairs, "a human brain has
| consciousness" is the magical bit.
| binoct wrote:
| There are some great, thorny, philosophical and physical
| arguments to be had with your proposed conclusion, but let's
| say we all agree with it.
|
| The bigger, more relevant, and testable challenge is premise
| #2. The gap between this proposed research tool and "a
| perfect digital replica of a human brain" (and all
| functionally relevant inputs and outputs from all other
| systems and organs in the body) is monstrously large. Given
| that we don't understand what mechanism(s) consciousness
| arises from, a model would have to be 100% perfect in
| basically all aspects for the conclusion to be true.
| namero999 wrote:
| Of course not. A simulation is not the process itself. So even
| provisionally granting that consciousness is magically created
| by the brain (for which we have no evidence), a computer
| simulation would still not be a brain and therefore not create
| consciousness.
|
| You would not expect your computer to pee on your desk if you
| were to simulate kidney function, would you?
| lucasoshiro wrote:
| > You would not expect your computer to pee on your desk if
| you were to simulate kidney function
|
| If my computer is connected to actuators that open a pee
| valve, like a brain is, then I would expect it to.
|
| The main point, I think, is that we can't say precisely what
| consciousness is. Every definition of it that I can imagine
| is either something that can be replicated in a computer or
| something that relies on belief, like the existence of a
| soul...
|
| I hope we have answers for that before we have the
| technology that allows us to do it.
| geor9e wrote:
| Just like the thousands of times this has been asked in the
| last century of sci-fi novels, the answer to such semantic
| questions depends on the mood of the audience: how
| approximate a copy do they need the ship of Theseus to be
| before they're comfortable using a certain word for it?
| bossyTeacher wrote:
| Bit hard to answer that since there is no definition of
| consciousness, is there? If I gave you access to the brain of
| some living animal, you wouldn't be able to tell whether it was
| "conscious" would you? Not sure, how can we expect that from an
| artificial and highly simplified version of a neural network
| aithrowawaycomm wrote:
| The real problem is that this uses a very crude and
| inadequate model of neurons: it ignores neurotransmitters,
| epigenetics, and the dendritic complexity of cortical neurons.
| There's no chance this system will ever come close to
| simulating an actual brain.
| jncfhnb wrote:
| I feel like these kinds of things are misguided. Our "minds" are
| not, for lack of a better term, Chinese rooms operating on
| external stimuli. Our minds aren't just brains, they're deeply
| tied to our bodily state and influenced by hormonal mechanics
| that many different organs besides the brain control. It's kind
| of romantic to say we could digitally replicate a brain in
| isolation, but our minds are messy and tangled. We might tend to
| think of these modifiers, like being hangry or horny, as
| deviations from a "normal", but frankly I doubt it. I would wager
| these dynamics actually control the majority of our "minds" and
| the brain is just encoding/decoding hardware.
| fallingknife wrote:
| If you can digitally replicate a brain, you can also digitally
| replicate all those input signals.
| Out_of_Characte wrote:
| I would agree if their goal indeed were to put a mind in a
| jar, but I've not read anything in the article that indicates
| that. So may I suggest my own interpretation:
|
| Accurate understanding of 'normal' brain behaviour might lead
| to increased understanding of brain diseases. That's why
| Alzheimer's was mentioned. But more importantly, if our
| understanding of the brain becomes good enough, we might be
| able to make a neural net understand our thoughts if we can
| adapt to it.
| lucasoshiro wrote:
| From a computer science perspective, the stimuli from the other
| organs, the hormones, oxygen levels and so on would be the
| inputs, while the actions and thoughts would be the outputs.
|
| It's like saying that we can't simulate a computer in a Turing
| machine because a Turing machine doesn't have a USB port to
| connect a mouse. Change the perspective so that the mouse
| movements are the inputs, and everything works. Same idea.
| someothherguyy wrote:
| > It's like saying that we can't simulate a computer in a
| Turing machine because a Turing machine doesn't have a USB
| port to connect a mouse.
|
| I don't follow the analogy.
| dfgtyu65r wrote:
| So, if I understand correctly, they're using Hodgkin-Huxley LIF
| neurons but trained end-to-end in a graph neural network. Through
| training to reproduce the neural data, the network learns the
| underlying connectivity of the neural system?
|
| This seems very cool, but I'm surprised this kind of thing
| attracts VC money! I'm also skeptical of how well this would scale
| due to the inherently underdetermined nature of neural
| recordings, but I've only skimmed the PDF so may be missing their
| goals and approach.
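|
| To make that concrete for myself, here's a toy sketch of the
| "fit a differentiable neuron model to recordings and read off
| the connectivity" idea. This is just my own illustration
| (PyTorch, a non-spiking leaky integrator, placeholder data),
| not anything from their paper:
|
|   import torch
|
|   n_neurons, n_steps, dt, tau = 20, 500, 1.0, 10.0
|   # Placeholder "recordings"; real data would be measured
|   # activity traces from the system being studied.
|   recorded = torch.randn(n_steps, n_neurons) * 0.1
|
|   # The connectivity matrix we want to infer.
|   W = torch.zeros(n_neurons, n_neurons, requires_grad=True)
|   opt = torch.optim.Adam([W], lr=1e-2)
|
|   for epoch in range(300):
|       opt.zero_grad()
|       v_t = recorded[:-1]        # teacher forcing on observations
|       # One Euler step of leaky ("LIF-like", no spikes) dynamics
|       # with recurrent input routed through W.
|       pred = v_t + (dt / tau) * (-v_t + torch.tanh(v_t) @ W)
|       loss = ((pred - recorded[1:]) ** 2).mean()
|       loss.backward()
|       opt.step()
|
|   print(W.detach())              # the inferred connectivity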
| marmaduke wrote:
| HH is kinda the opposite of LIF on the abstraction spectrum.
| dfgtyu65r wrote:
| I mean HH is an elaboration of the LIF with the addition of
| several equations for the various ion channels, but yeah I
| see what you mean.
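|
| Roughly, the standard textbook forms (my paraphrase, not from
| the Cerebrum PDF) show the gap in abstraction:
|
|   LIF:  \tau_m \frac{dV}{dt} = -(V - V_{rest}) + R \, I(t),
|         with V reset to V_{rest} whenever it crosses threshold
|
|   HH:   C \frac{dV}{dt} = I(t) - \bar{g}_{Na} m^3 h (V - E_{Na})
|                                - \bar{g}_{K} n^4 (V - E_K)
|                                - g_L (V - E_L),
|         plus three more ODEs for the gating variables m, h, n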
| kelseyfrog wrote:
| There's much better brains to recreate than mine. Shit's broken.
| dang wrote:
| We've taken your brain out of the title now (no offense).
|
| (Submitted title was "Cerebrum: What if we could recreate your
| brain?")
| seany62 wrote:
| Based on my very limited knowledge of how current "AI" systems
| work, this is the much better approach to achieving true AI.
| We've only modeled one small aspect of the human (the neuron) and
| brute-forced it to work. It takes an LLM millions of examples to
| learn what a human can in a couple of minutes--then how are we
| even "close" to achieving AGI?
|
| Should we not mimic our biology as closely as possible rather
| than trying to model how we __think__ it works (i.e. chain of
| thought, etc.)? This is how neural networks got started, right?
| Recreate something nature has taken millions of years developing
| and see what happens. This stuff is so interesting.
| etrautmann wrote:
| There's currently an enormous gulf between modeling biology
| and AGI, to the point where it's not even clear exactly where
| one should start. Lots of things should indeed be tried, but
| it's not obvious what could lead to impact right now.
| idiotsecant wrote:
| Great, let's do that. So how does consciousness work again,
| biologically?
| albumen wrote:
| Why are you asking them? Isn't to discover that a major
| reason to model neural networks?
| veidelis wrote:
| What is consciousness?
| lostmsu wrote:
| > It takes an LLM millions of examples to learn what a human
| can in a couple of minutes
|
| LLMs learn more than humans learn in a lifetime in under 2
| years. I don't know why people keep repeating this "couple of
| minutes". Humans win on neither the data volume to learn
| something nor the time.
|
| How much time do you need to learn lyrics of a song? How much
| time do you think a LLaMA 3.1 8B on a 2x3090 needs? What if you
| need to remember it tomorrow?
| someothherguyy wrote:
| > How much time do you need to learn lyrics of a song? How
| much time do you think a LLaMA 3.1 8B on a 2x3090 need?
|
| Probably not the best example. How long does it take to input
| song lyrics into a file to have an operating system "learn"
| it?
| aithrowawaycomm wrote:
| They mean learning _concepts,_ not rote factual information.
| I also hate this misanthropic "LLMs know more than average
| humans" falsehood. What it actually means "LLMs know more
| general purpose trivial than average humans" because average
| humans are busy learning things like what their boss is like,
| how their kids are doing in school, how precisely their car
| handles, etc.
| pedrosorio wrote:
| > Should we not mimic our biology as closely as possible rather
| than trying to model how we __think__ it works (i.e. chain of
| thought, etc.).
|
| Should we not mimic migrating birds' biology as closely as
| possible instead of trying to engineer airplanes for
| transatlantic flight that are only very loosely inspired by the
| animals that actually fly?
| patrickhogan1 wrote:
| Because it works. The Vikings embodied a mindset of skaldic
| pragmatism: doing things because they worked, without needing
| to understand or optimize them.
|
| Our bodies are Vikings. Our minds still want to know why.
| krapp wrote:
| I'm pretty sure the Vikings understood their craft very well.
| You don't become a maritime power that pillages all of Europe
| and reaches the New World long before Columbus without
| understanding how things work.
| marmaduke wrote:
| having worked on whole brain modeling the last 15 years and
| european infra for supporting this kinda research, this is a
| terrible buzzword salad. the pdf is on par with a typical
| master's project.
| SubiculumCode wrote:
| I'm a bit confused at what this is actually. Is it a modeling
| framework that you use to build and study a network? Is it a
| system that you use to help you analyze your neural recording
| datasets? Neuroscience is a big place, so I feel like maybe the
| article and technical paper are speaking to a different audience
| than me, a neuroimager.
| fschuett wrote:
| Simulating a brain would mean that reason, the ability to discern
| good from bad, is a statistical process. All scientific evidence
| so far shows that this is not the case, since AIs do not have the
| ability to "understand" what they're doing, their input data has
| to be classified first to be usable to the machine. Especially
| the problem of model collapse shows this, when an AI is trained
| on the output of another AI, trained on the output of another AI,
| it will eventually produce garbage, why? Because it doesn't
| "understand" what it's doing, it just matches patterns. The only
| way to correct it is with hundreds or even thousands of employees
| that give meaning to the data to guide the model.
|
| Consciousness presumes the ability to make conscious decisions,
| especially the ability to have introspection and more
| importantly, free will (otherwise the decision would not be
| conscious, but robotic regurgitation), to reflect and to judge on
| the "goodness" or "badness" of decisions, the "morality". Since
| it is evident that humans do not always do the logical best thing
| (look around you how many people make garbage decisions), a
| machine can never function like a human can, it can never have
| opinions (that aren't pre-trained input), as it makes no
| distinction between good and bad without external input. A
| machine has no free will, which is a requirement for
| consciousness. At best, it can be a good facsimile. It can be
| useful, yes, but it cannot make conscious decisions.
|
| The created cannot be bigger than the creator in terms of
| informational content, otherwise you'd create a supernatural
| "ghost" in the machine. I hope I don't have to explain why I
| consider creating ghosts unachievable. Even with photo or video
| AIs, there is no "new" content, just rehashed old content which
| is a subset of the training data (which is why AI-generated photos
| often have this kind of "smooth" look to them). The only reason the
| output of AI has any meaning to us is because we give it meaning,
| not the computer.
|
| So, before wasting millions of compute hours on this project, I'd
| first try to hire an indebted millennial who will be glad to
| finally put his philosophy degree to good use.
| kelseyfrog wrote:
| Consciousness of the gaps.
|
| Labeling ourselves as the most intelligent species has done
| irreparable psychic damage.
| eMPee584 wrote:
| This describes the current situation, but what if models
| become self-learning and dynamic both in content (weights)
| and in structure/architecture? What changes if these digital
| systems are combined with biological neuronal networks and
| quantum processors? It seems too early to rule out the
| possibility of consciousness emerging from machines...
| beings yet uncreated...
| jmyeet wrote:
| Fun fact: neurons are kept electrically negative or, more
| specifically, the resting membrane potential is negative [1].
| This is maintained by a mechanism that exchanges sodium and
| potassium ions, a process that uses approximately 10% of the
| body's entire energy budget [2].
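|
| As a rough sense of scale (textbook numbers, not from the
| links below): the Nernst potential for potassium alone, at
| body temperature with typical concentrations, already lands
| near that negative resting value.
|
|   import math
|
|   R, T, F = 8.314, 310.0, 96485.0   # gas constant, 37 C in K, Faraday
|   K_out, K_in = 5.0, 140.0          # typical K+ concentrations, mM
|   E_K = (R * T / F) * math.log(K_out / K_in)
|   print(f"E_K ~ {E_K * 1e3:.0f} mV")  # about -89 mV; the resting
|                                        # potential sits near -70 mV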
|
| I think it'll be incredibly difficult to simulate a neuron in a
| meaningful way because neurons, like any cell, are a protein
| soup. They're exchanging ions, and those ions affect the cell.
| The neuron's connections grow and change.
|
| [1]: https://www.khanacademy.org/science/biology/human-
| biology/ne...
|
| [2]:
| https://bionumbers.hms.harvard.edu/bionumber.aspx?id=103545&...
___________________________________________________________________
(page generated 2024-12-24 23:00 UTC)