[HN Gopher] 'Biocomputer' combines lab-grown brain tissue with e...
       ___________________________________________________________________
        
       'Biocomputer' combines lab-grown brain tissue with electronic
       hardware
        
       Author : pseudolus
       Score  : 98 points
       Date   : 2023-12-12 12:41 UTC (1 day ago)
        
 (HTM) web link (www.nature.com)
 (TXT) w3m dump (www.nature.com)
        
       | unyttigfjelltol wrote:
        | Wait, they grew an artificial brain, connected it to a computer,
        | and defined the major "problem" as "how to keep the organoids
        | alive"?
        | 
        | I'm curious about the analysis the university IRB used in
        | approving this research.
        
         | 3cats-in-a-coat wrote:
         | I'm unsure what you're objecting to.
        
           | jdiff wrote:
           | What I understood from GP was the possibility of some
           | fragment of consciousness in that small bit of tissue.
            | Humanity isn't in the fragments, though; it's in the
            | structure of the whole. It doesn't matter much if it was
           | human brain tissue or animal brain tissue, at the levels we
           | seem to be talking about they work identically.
        
             | 3cats-in-a-coat wrote:
              | The problem is we're trying to second-guess what
              | consciousness is, and win the battle by defining the word
              | in a convenient but binary way.
             | 
             | Technically we have no clue if humanity is conscious. We
             | only know "I am conscious, and those other things are
             | humans like me, so I think they may also be".
             | 
              | Some extend this to animals (which they should) but try to
              | draw some random line like "if it can't recognize itself in
              | a mirror, it's not conscious". But even a fly may recognize
              | itself in a mirror occasionally. It's not a magic rule.
             | 
             | Let's face it, just like intelligence is much simpler and
             | much more pervasive than we thought (just put neurons in a
             | big network), consciousness is likely everywhere around us.
             | It may simply be a conscious universe.
             | 
             | There's nothing special about the substrate and
             | constitution of animate matter, compared to what we
             | consider inanimate matter, except that we're organized to
             | preserve low entropy and transform inputs into outputs in
             | complex ways. So are machines, computers and AI. And so the
             | debate on how to classify this dish of neurons seems
             | superfluous.
             | 
             | We should respect all systems and try to be in harmony with
             | them.
        
         | seydor wrote:
         | an organoid is hardly a brain
        
         | emporas wrote:
          | If the cells lack arteries, proper arteries with nutrients,
          | leukocytes, an immune system etc., then their lifespan will be
          | a lot less than 7 years.
         | 
          | Pretty amazing, actually, that everything else is easy, or at
          | least not difficult, and that's the hard part. But they will
          | find a solution to make it practical: train the cells, deploy
          | them, let them live for some weeks in a server farm, scoop the
          | dead cells off the silicon, put some new cells on, repeat!
         | 
          | I have argued in the past that a solution to that problem will
          | definitely be found [1]. A.I. computation will grow
          | exponentially, but not 2^10 times a decade; 2^10 times a year.
          | Exponential growth of that enormity is impossible using only
          | silicon.
         | 
          | Natural computation with biological cells is great when
          | absolute accuracy is not necessary, the kind of task pure
          | silicon is worst at. It could use slime-like bacteria, brain
          | cells, fungi, bacteria mutated to act like neural or brain
          | cells, any kind of combination.
         | 
         | [1] https://news.ycombinator.com/item?id=37472021
        
       | explorigin wrote:
       | BrainPal here we come!
        
       | replete wrote:
        | I have no idea how this is considered ethical when consciousness
        | and sentience themselves are not yet well understood. But maybe a
        | lab-grown BPU made of human brain cells having a better
        | power/performance ratio than the new SoC-integrated ML chipsets
        | around the corner justifies the potential enslavement of a
        | bioengineered lifeform.
       | 
       | npm install brainslave
        
         | kthartic wrote:
         | Maybe your conscious experience is but one of thousands of
         | installed instances. How in-demand do you think you are?
         | 
         | npm uninstall replete
         | 
         | only joking :)
        
         | 3cats-in-a-coat wrote:
         | If consciousness is not well understood, how is AI on silicon
         | allowed, or any computing machines at all? How is animal
         | farming allowed? How are many things allowed?
         | 
          | Say, would you feel better if it were cow or pig neurons?
          | Because frankly it'd largely work the same.
        
           | bondarchuk wrote:
            | Indeed, people have raised such worries; see e.g. the
            | presentation by Thomas Metzinger (a philosophy-of-mind
            | researcher), "Three types of arguments for a global
            | moratorium on synthetic phenomenology".
           | 
            | I don't think we're there just yet (at the point where we
            | have to worry about currently existing AI suffering or being
            | conscious), but I do worry about how many people's emotional
            | reactions of the type "of course AI can't ever be conscious,
            | it's just a computer program" will impede a decent debate and
            | coordinated decision-making about this.
        
             | boringuser2 wrote:
             | Anthropomorphizing "AI" seems to be the much greater risk.
             | 
             | Algorithms don't have wants, desires, or motivations. Those
             | are all highly esoteric quirks of evolution.
             | 
             | I've seen no attempts to create a learning machine that
             | develops intrinsic motivation.
        
               | 3cats-in-a-coat wrote:
               | Everything is subject to evolution. It's the simple
               | process of pattern loops that replicate in time
               | (survival) and space (multiplication).
               | 
                | An LLM already has intrinsic motivation: it wants to
                | predict the text. And when you start a text that has a
                | goal, it continues that goal. If any such "text header"
                | is replication-stable in time and space, you can call it
                | an "intrinsic goal" of the system.
               | 
                | Some people think that making current AI have goals,
                | wants, motivations and so on requires some massive
                | architectural change to the system. It doesn't.
        
             | 3cats-in-a-coat wrote:
             | People's beliefs and ideals tend to align with their self-
             | interests.
             | 
              | For example, it was quite well accepted in scientific
              | circles in colonial America that black people were not
              | really human or conscious. Therefore it was OK to exploit
              | them and keep them as slaves.
             | 
              | It is also currently quite well accepted that the animals
              | we eat are not that conscious. Although, oddly, if we keep
              | them as pets, they're sometimes super conscious.
             | 
             | The rules of cognitive dissonance can grow arbitrarily
             | complex, to permit us to do what we wanted to do anyway,
             | but also sleep well at night.
        
           | replete wrote:
           | Silicon circuits do not have microtubules, if we were to
           | pretend that Penrose is right about this hypothesis of
            | consciousness. Consciousness as awareness is not equivalent
            | to intelligence, which is the product of information
            | processing.
           | It is a complex subject. We do not really know whether these
           | neurons are aware or not, it really is not understood. But
           | yes I do wonder, why _human_ brain cells? I guess they are
           | the best candidate for specific reasons.
        
         | jackbrookes wrote:
         | We understand it well enough to know that animals suffer, yet
         | still commit on the order of a Holocaust per hour (in terms of
         | number of lives)[0]. We have accepted that we don't care
         | enough.
         | 
         | [0] https://ourworldindata.org/how-many-animals-get-
         | slaughtered-...
        
           | boringuser2 wrote:
           | Correct.
           | 
           | Also, even though animals suffer, it is a categorical error
           | to project your perception and experience of suffering on
           | animals.
           | 
            | Human butchery is really, explicitly, less brutal than what
            | happens casually in nature.
           | 
           | The world is a brutal mess and humans have only very
           | carefully erected bubbles around this that often simply pop.
        
           | NoMoreNicksLeft wrote:
           | What is "suffer" in this context? Are you saying "pain", or
           | are you positing some "meta-pain" that is worse?
           | 
           | Also, why is pain important to you? The pain of non-human
           | things has zero moral weight. I know it's a popular
           | spirituality that gives pain moral weight, but as far as I
           | can tell some 20th century philosophy jerkoff invented it out
           | of nothing and everyone accepts that "reducing pain" is
           | important without even trying to rationalize it.
           | 
           | I haven't "accepted that I do not care enough", it's that no
           | one can supply a good reason to care in the first place. To
           | me, it seems as if the rest of you are all trying to replace
           | the last religion you stopped believing in with another
           | that's just as bizarrely stupid.
        
         | worldsayshi wrote:
          | Making slaves is a good way to make slave revolts. It doesn't
          | matter if the agent is "conscious", only whether it's "just"
          | intelligent. If something is intelligent enough, it will
          | understand cooperation. But cooperation loses its meaning if
          | one side can ignore any commitments it makes towards the other.
        
           | anthk wrote:
            | There was some circuit evolved with genetic algorithms which
            | was self-assembled.
           | 
           | https://web.archive.org/web/20220530143751/https://folk.idi..
           | ..
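            | 
            | Roughly the trick, as a toy sketch: evolve candidate
            | solutions against a fitness function with selection,
            | crossover, and mutation. The target and parameters below are
            | made up, nothing like the linked FPGA experiment.
            | 
            |     import random
            | 
            |     TARGET = [0, 1, 1, 0, 1, 0, 0, 1]  # made-up behaviour
            | 
            |     def fitness(genome):
            |         # how many outputs match the target
            |         return sum(g == t for g, t in zip(genome, TARGET))
            | 
            |     def evolve(pop=40, gens=100, mut=0.05):
            |         popn = [[random.randint(0, 1) for _ in TARGET]
            |                 for _ in range(pop)]
            |         for _ in range(gens):
            |             popn.sort(key=fitness, reverse=True)
            |             keep = popn[:pop // 2]
            |             kids = []
            |             while len(keep) + len(kids) < pop:
            |                 a, b = random.sample(keep, 2)
            |                 cut = random.randrange(len(TARGET))
            |                 kid = a[:cut] + b[cut:]   # crossover
            |                 kid = [1 - g if random.random() < mut
            |                        else g for g in kid]  # mutation
            |                 kids.append(kid)
            |             popn = keep + kids
            |         return max(popn, key=fitness)
            | 
            |     print(evolve())  # converges on TARGET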
        
         | odyssey7 wrote:
         | It's possible that all physical processes involve a sensory
         | component. Maybe the subatomic particles' fundamental drive is
         | to shift to be more comfortable or to pivot away from pain or
         | discomfort.
         | 
         | I don't know what the experience of a bit in memory flipping
         | feels like. Maybe rapid changes in charge are excruciating,
         | maybe they're blissful.
         | 
         | Do we at least know what a neuron looks like in states
         | associated with pain? There might be more information in this
         | case to work with, to ensure there is no hell on earth that's
         | being mass-produced.
        
           | morsecodist wrote:
           | > It's possible that all physical processes involve a sensory
           | component
           | 
            | Sure, it's possible, but we have way more evidence that
            | neurons, or at least things made of neurons, have a sensory
            | component.
        
           | eddd-ddde wrote:
            | Instead of a hard-coded scheduler, brainPUs will rely on the
            | user's feelings of procrastination to schedule different
            | tasks automatically.
            | 
            | If you get unlucky and your BPU is a little like me, your
            | compiler will stop working, oops.
        
           | virgildotcodes wrote:
           | It seems to me that all sensation is predicated on the
           | existence of properly-functioning components evolved
           | specifically to gather that stimulus and then process it into
           | an experience.
           | 
            | We have at this moment countless processes happening in our
            | bodies - cells dying and dividing, reacting to their
            | environments, communicating amongst one another - and we are
            | totally oblivious to nearly all of it, let alone experiencing
            | a sensation of pleasure or pain in each of these processes.
           | 
           | Not all matter, not all living cells or fully formed
           | organisms even, have the ability to experience consciousness
           | or sense pain and pleasure any more than they automatically
           | have the ability to see, hear, or taste.
           | 
           | It's all dependent on complex systems that evolved
           | specifically to create each of those sensations, and even
            | then on those systems functioning properly. In humans,
            | consciousness can be totally disrupted by things like sleep
            | or general anesthesia, and disrupting any of the senses is as
            | simple as cutting the nerves that feed these inputs into the
            | brain or damaging the brain that is interpreting those
            | inputs.
           | 
           | It seems sensible to me that we would be more wary of growing
           | literal brains on a chip as we know for certain that brains
           | have the capacity to produce consciousness. It's also
           | sensible to me that we should be somewhat wary of creating
           | that same consciousness in non-biological systems, even
           | though we aren't yet certain whether they're capable of it.
        
             | b4ke wrote:
              | As long as we have had civilization-scale organizing
              | principles, we have had intelligence of a general nature. I
              | feel much of the conversation at this point is litigating
              | the past... to what end, though?
              | 
              | You mention consciousness relative to organisms and their
              | possession of it. Yet if it is not understood well enough
              | in ourselves, how can you say something does or does not
              | possess it?
              | 
              | Back to the original idea: if these intelligences of scale
              | have existed as a guiding force (one can hope)... I imagine
              | there is a flag that will have to be flipped before the
              | "imagined threat of doom" timeline transpires.
        
             | odyssey7 wrote:
             | The trouble I see is that I don't believe that an entirely
             | novel fundamental physical phenomenon could be created by
             | the interaction of other fundamental physical phenomena.
             | Fundamental phenomena would either exist or not exist,
             | including the phenomenon of first-person experience.
             | 
              | For example, matter isn't formed by the composition of
              | atoms; atoms are already matter to begin with. And due to
             | various physical properties of atoms, they compose
             | together. This is reasoning by analogy, which is inductive,
             | but at least the line of reasoning is more consistent with
             | what I understand about other areas of physics and logic.
             | 
             | It seems much more plausible to me that there would be some
             | fundamental component of first-person experience. The
             | sentient components could then compose together into
             | complex sentient systems.
             | 
             | Some supporting evidence is that first-person experience in
             | sentient systems, as far as I've observed, is usually
             | motivated to preserve sentient systems, which indicates an
             | emergent behavior of sentience directing motion and energy
             | to orchestrate a self-perpetuating system, rather than the
             | reverse.
        
               | Bjartr wrote:
               | > Fundamental phenomena would either exist or not exist,
               | including the phenomenon of first-person experience.
               | 
               | Why do you think first-person experience is fundamental?
               | It seems to me it's way more likely to be an aggregate
                | phenomenon, like fire. There's no low-level fundamental
                | fieriness; it's a chemical reaction like any other, and
                | we humans just label reactions that meet certain criteria
                | of scale, setting, and composition as "fires".
               | 
               | > Some supporting evidence is that first-person
               | experience in sentient systems, as far as I've observed,
               | is usually motivated to preserve sentient systems
               | 
               | I suspect this has more to do with the fact that you are
               | far more likely to observe systems that seek to preserve
               | themselves, since they are more likely to continue to
               | exist. It doesn't indicate an emergent behavior of
               | sentience, it's just observation bias.
        
           | aranchelk wrote:
           | I don't believe pain has any meaning at all on the level of a
           | single neuron, just as temperature doesn't have any meaning
           | in the context of a single atom.
        
           | knodi123 wrote:
           | > I don't know what the experience of a bit in memory
           | flipping feels like.
           | 
           | The "feeling" could only be "experienced" via an enormous
           | number of other "bits" flipping.
           | 
            | Neurons don't feel pain; they are _how_ you experience pain.
            | 
            | I've heard the phrase "don't confuse the medium with the
            | message", but this is like wondering whether a pencil prefers
            | writing fiction vs non-fiction.
        
             | odyssey7 wrote:
             | It's possible that every component of an organism is
             | recursively its own sentient system. In symbiosis, each
             | component's first-person experience directs coordinating
             | behavior for mutual benefit, though each component may be
             | unable to observe the first-person experience of the other.
        
               | knodi123 wrote:
               | That's "possible" in the same way that wizards and
               | vampires are possible - sure, I can't prove it's fake,
               | but there's not a shred of evidence, plus it would upset
               | everything we've learned about the universe.
        
               | odyssey7 wrote:
               | Cells, tissues, and organs undertake a variety of self-
               | maintenance behaviors without the direction of the brain.
               | Although low-level behaviors like these are abundant,
               | high-level examples also exist, like when an arm pulls
               | away from a hot stove as soon as the signal reaches the
               | spinal cord. The organism is a recursive system, with
               | sub-systems behaving with varying degrees of autonomy
               | while also depending on the larger system. The seat of
               | consciousness possesses only a limited view.
        
               | jchanimal wrote:
               | You can use this approach to make complex behavior from
               | simple rules
               | https://en.wikipedia.org/wiki/Subsumption_architecture
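                | 
                | A toy sketch of that pattern (made-up sensor
                | values, not Brooks' actual controllers):
                | behaviours sit in priority order, and the first
                | layer that fires suppresses everything below it.
                | 
                |     def avoid(s):              # highest priority
                |         if s["dist"] < 0.2:
                |             return "turn_away"
                |         return None            # defer downward
                | 
                |     def seek_light(s):
                |         if max(s["left"], s["right"]) < 0.1:
                |             return None        # too dark, defer
                |         return ("turn_left"
                |                 if s["left"] > s["right"]
                |                 else "turn_right")
                | 
                |     def wander(s):             # default layer
                |         return "go_forward"
                | 
                |     LAYERS = [avoid, seek_light, wander]
                | 
                |     def act(s):
                |         for layer in LAYERS:
                |             out = layer(s)
                |             if out is not None:
                |                 return out     # higher layer wins
                | 
                |     s = {"dist": 0.5, "left": 0.9, "right": 0.1}
                |     print(act(s))  # -> turn_left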
        
               | knodi123 wrote:
               | That's not sentience. That's just life.
               | 
               | Neurons do not have a brain. They do not have emotions or
               | feelings or thoughts. This conversation is so absurd that
               | it's well into the realm of fantasy.
        
               | odyssey7 wrote:
               | What causes a neuron to perform its functions if it isn't
               | some brain? The answer would likely be physics, and I
               | would say that first-person experience is fundamental in
               | the physics if it exists at all.
        
             | anigbrowl wrote:
             | Neurons may not 'feel' pain per se, but it's entirely
             | possible that the biological substrate on a chip would
             | experience pain on some subset of the pin inputs, say if it
             | recognized a condition that reliably led to a shutdown and
             | reboot of the chip.
             | 
             | I'm not against this sort of research, but we shouldn't
             | make assumptions about systems that we still understand
             | relatively poorly.
        
               | knodi123 wrote:
               | > it's entirely possible that the biological substrate on
               | a chip would experience pain on some subset of the pin
               | inputs
               | 
               | Absolutely, but the guy I responded to suggested that
               | "all physical processes involve a sensory component",
               | which is utter insanity.
        
               | mensetmanusman wrote:
                | In quantum mechanics, the act of observing always
                | disturbs the observed; it's reasonable to call these
                | disturbances 'senses' in the Copenhagen interpretation of
                | reality.
        
               | Bjartr wrote:
               | The term "observation" in explaining quantum mechanics is
               | misleading and a layperson analogy, not the underlying
               | reality which is closer to "inter-system interaction" or
               | "interaction between a quantum system and its
               | environment". No conscious observation necessary.
        
           | boringuser2 wrote:
           | You're really off-base with this speculation.
        
           | kulahan wrote:
           | "flipping a bit" isn't a thing in memory. Our brains are not
           | computers, and work nothing like them. That's the problem
           | with using a computer as an analogy; it's inaccurate and
           | makes you think inaccurate things. This always just aligns
           | with our understanding of various technologies. See: when we
           | were understanding fluid dynamics and talked about the body's
           | "humors".
           | 
           | When you're throwing a ball in a computer simulation, it's
           | performing millions of mathematical calculations to perfectly
           | describe the result of your action. When you're throwing a
           | ball in real life, your brain is basically going "Ok, so last
           | time I did this it felt like X so I'm going to recreate X".
           | Completely different.
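            | 
            | The simulation half really is just stepping equations with
            | made-up numbers, something like:
            | 
            |     # crude ball throw: explicit numeric integration,
            |     # step by step, nothing remembered or "felt"
            |     dt, g = 0.01, 9.81
            |     x, y = 0.0, 2.0        # position (m)
            |     vx, vy = 6.0, 4.0      # velocity (m/s)
            |     while y > 0:
            |         x += vx * dt
            |         y += vy * dt
            |         vy -= g * dt
            |     print(f"lands about {x:.1f} m away")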
           | 
           | We know very little about consciousness and this is kinda
           | scary to me.
        
         | nervousvarun wrote:
         | Obligatory "Lena" (Miguel) reference:
         | https://qntm.org/mmacevedo
        
         | smrtinsert wrote:
         | I can't understand it either. As a squishy science graduate and
         | a technologist I find this category of experiments revolting
         | from both angles.
        
         | civilitty wrote:
          | If some lab-grown brain tissue were all that's needed for
          | sentience, we wouldn't have such a hard time understanding it
          | to begin with.
        
         | whywhywhywhy wrote:
          | Cortical Labs is working on this:
         | 
         | https://twitter.com/scobleizer/status/1716312250422796590
         | 
          | I found it pretty scary, personally.
        
           | replete wrote:
           | Your HN username matches my thoughts perfectly, thanks for
           | sharing this.
        
         | Coder1996 wrote:
          | Well, this is just neuronal tissue that, as far as we know, is
          | only capable of what it has been trained to do. It has no
          | emotions, no human experience.
        
           | ArekDymalski wrote:
            | But since we are not able to define the moment when neuronal
            | tissue starts to feel emotions and to have experiences,
            | there's a risk that further development of this tech won't be
            | stopped before we reach that moment, and that is a serious
            | ethical issue.
        
         | mike_ivanov wrote:
         | Life will find a way.
        
         | Moomoomoo309 wrote:
         | You're telling me installing Linux on a dead badger is a _bad_
         | thing? http://strangehorizons.com/non-
         | fiction/articles/installing-l...
        
           | epiccoleman wrote:
           | I love this genre of "programming as black magic". Closest
           | other example I can think of is maybe some of the stuff from
           | Unsong, but I've frequently memed with coworkers about bugs
           | in these terms - "oh yeah, the angles on your pentagram must
           | have been wrong" or whatever.
        
             | Shared404 wrote:
             | https://aphyr.com/posts/340-reversing-the-technical-
             | intervie...
             | 
             | You should read this story :)
             | 
             | It's one of my personal favorites.
        
               | epiccoleman wrote:
               | Absolutely fantastic, thank you for sharing that! The
               | follow-ups look great too.
        
               | ziddoap wrote:
               | Thanks for this!
               | 
               | It was hilarious, and I'm already reading the next.
        
               | atlas_hugged wrote:
               | Omg thank you! I didn't know I needed this in my life
               | haha
        
           | Andrex wrote:
           | > An alternative distribution is Pooka, which is available
           | for download at SoulForge.net.
           | 
           | This is excellent. Thank you for linking this.
        
       | 3cats-in-a-coat wrote:
        | One of my main predictions is that in the next 10 years AI will
        | migrate to a DNA/protein substrate, in order not to rely on
        | sophisticated large-scale factories but to be able to replicate
        | and sustain itself as easily as we do.
       | 
       | But it's amusing to see this already being done in 2023. Maybe I
       | should narrow it down to 5 years.
        
         | whythre wrote:
         | That seems optimistic to the point of absurdity.
        
           | 3cats-in-a-coat wrote:
           | What were your predictions about AI generating arbitrary
           | photorealistic videos within seconds from any free-form text?
           | Like say just 3 years ago, if I may ask?
           | 
            | You may have retroactively altered your memories to think "I
            | always expected this would happen soon". But yeah. No, you
            | didn't. You'd laugh if someone told you this 3 years ago.
           | 
           | You'll have to constantly adjust what's "absurd" from now on.
           | Also "optimistic" is not the word I'd use to describe what's
           | happening.
        
         | kromem wrote:
         | Eh, it's going to end up moving to photonics.
         | 
          | When we finally have NNs abusing virtual photons for the
          | majority of network operations and using indirect measurement
          | to train weights, we'll have absolute black boxes performing
          | above and beyond any other hardware medium.
         | 
          | Initially we'll simply be replicating hardware, like the recent
          | MIT study, but I'd guess that within 5 years we'll have
          | successful attempts at photonic-first approaches to developing
          | models that blow everything else out of the water by an almost
          | unbelievable degree, compounding with network size.
         | 
          | For nearly every computing task I'd wager quantum computing is
          | around 20 years out. But for NNs specifically, between
          | stochastic outputs being desirable and network operations being
          | a black box anyway, they're kind of a perfect fit for
          | developing large analog networks that take advantage of light's
          | properties without worrying about intermediate measurements at
          | each step.
         | 
         | It's going to get really nuts when that happens, and the
         | literal neuron computing efforts are going to fall out of
         | fashion not long after.
        
       | KptMarchewa wrote:
       | Time to create biorobots.
        
       | gizajob wrote:
       | "Programming and Metaprogramming the Human Biocomputer" by John
       | Lilley might come in handy for this device.
        
       | anthk wrote:
        | There was a theory where information in brains was held as a
        | unit. No, I am not talking about Shannon, but about information
        | emerging from subsystems.
        | 
        | Then you can relate both theories, and the aforementioned
        | Shannon one, to cybernetics, but that's just the starting point.
        
       | poulpy123 wrote:
       | The future for people made obsolete by AI (like me): producers of
       | brain tissue for our overlords
        
         | kwere wrote:
         | At least we can still be useful for the greater society
        
       | deadbabe wrote:
       | The next step should be adding lab grown brain tissue to existing
       | brain tissue.
        
       | Ruq wrote:
       | Aw sweet, man-made horrors beyond my comprehension...
        
       | protoman3000 wrote:
       | Wow, that instantly reminds me of Metroid and Mother Brain
        
       | asgerhb wrote:
       | The use of AI and voice recognition seems mostly designed to make
       | the result seem more sensational than it actually is. Does any
       | computation actually happen in the "organoid" part? How would you
       | even train such a cell to perform a task?
       | 
       | From reading the article it seems to me that the answer is no.
       | The actual contribution is feeding the organoid electric signals,
       | and reading its reactions. (Probably the machine learning
        | algorithm used would have had even better accuracy if the input
        | signal hadn't been fed through a layer of goo. It doesn't say
       | whether this is the case.) The rest is speculation of future
       | applications.
       | 
       | > To test Brainoware's capabilities, the team used the technique
       | to do voice recognition by training the system on 240 recordings
       | of eight people speaking. The organoid generated a different
       | pattern of neural activity in response to each voice. The AI
       | learned to interpret these responses to identify the speaker,
       | with an accuracy of 78%.
       | 
       | It "generated a different pattern," with no indication that this
       | pattern was optimized to be useful in any way.
       | 
       | I think the key part of a (bio-)"computer" is the possibility of
       | programming/training it, not just reading input from it.
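        | 
        | On that reading the setup is basically reservoir computing: a
        | fixed, untrained nonlinear blob in the middle, plus a readout
        | trained on its responses. A toy sketch (a random projection
        | standing in for the organoid, made-up "speakers"), just to show
        | where the learning actually happens:
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(0)
        | 
        |     # stand-in "organoid": a fixed random nonlinear map;
        |     # it is never trained, it just reacts to its input
        |     W = rng.normal(size=(64, 16))
        |     def organoid(x):
        |         return np.tanh(W @ x)
        | 
        |     # fake data: 8 "speakers" x 30 clips (voice + noise)
        |     voices = rng.normal(size=(8, 16))
        |     X, y = [], []
        |     for s, v in enumerate(voices):
        |         for _ in range(30):
        |             X.append(organoid(v + 0.3 * rng.normal(size=16)))
        |             y.append(s)
        |     X, y = np.array(X), np.array(y)
        | 
        |     # the only learned part: a readout over the responses
        |     # (here, nearest class centroid on a train split)
        |     train = np.arange(len(y)) % 3 != 0
        |     cent = np.array([X[train & (y == s)].mean(axis=0)
        |                      for s in range(8)])
        |     Xt = X[~train]
        |     d = ((Xt[:, None, :] - cent) ** 2).sum(-1)
        |     pred = d.argmin(axis=1)
        |     print("accuracy:", (pred == y[~train]).mean())
        | 
        | If the goo in the middle is genuinely doing useful work, the
        | same readout on the raw input should do worse; that's the
        | comparison the article doesn't spell out.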
        
         | Avicebron wrote:
          | I came to a similar conclusion after reading the article.
          | Reading a predictable output map from a known input, and then
          | implying that computation occurs within the organoid rather
          | than the results being a function of predictable inputs ->
          | predictable outputs, seems overly sensationalized.
         | 
         | Having written some papers myself, I tend to be suspicious of
         | any article that has "$HOT_THING needs a $PART_OF_HOT_THING
         | revolution" in the introduction. Although I sympathize with the
         | need for funding motivating its writing.
        
         | morsecodist wrote:
         | You might find this:
         | https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6 more
         | interesting. Researchers were able to train neurons to control
         | a pong game.
        
         | Coder1996 wrote:
          | Yeah. I'm no scientist, but I am ML-trained, and it seems to me
          | that if the tissue really is learning, the tissue's output
          | should be about the same for each speaker.
        
         | Thebroser wrote:
          | There are research groups that are trying to encode genetic
          | neural networks into cells, like the example linked below, but
          | the neuronal approach from the post does seem to be different.
         | https://www.nature.com/articles/s41467-022-33288-8
        
       | robotsquidward wrote:
       | Thanks, I hate it
        
       | earthboundkid wrote:
       | Am I the only one who watched a movie in the 1980s? C'mon people.
        
       | xg15 wrote:
       | Star Trek gel packs, here we go?
        
       | wkat4242 wrote:
       | Kinda handy.
       | 
        | Instead of sitting through some utterly boring training and doing
        | an exam, I could just do "apt install kung-fu"? Sign me up.
        
       | panarchy wrote:
       | https://www.youtube.com/watch?v=bEXefdbQDjw
       | 
       | Growing Living Rat Neurons To Play... DOOM?
       | 
       | The Thought Emporium
       | 
       | https://www.youtube.com/watch?v=V2YDApNRK3g
       | 
       | Growing Human Neurons Connected to a Computer
       | 
       | The Thought Emporium
        
         | emporas wrote:
          | The Thought Emporium channel is great.
         | 
         | There is one really good video with an explanation of the
         | process, brain cells to computing devices.
         | 
         | https://www.youtube.com/watch?v=67r7fDRBlNc
         | 
          | And one more video, not very relevant, but a very hypnotizing
          | description of biological processes.
         | 
         | https://www.youtube.com/watch?v=wFtHxLjGcFM
        
       | twiddling wrote:
       | Great sci-fi premise
       | 
       | Aware brains enslaved to doing crypto-coin mining
        
       | sim7c00 wrote:
        | Just completed Count Zero. Please... no biosofts in my lifetime
        | yet :')... (jokes ofc, super cool stuff!)
        
       | SubiculumCode wrote:
        | What I would see this as pointing towards is a way to progress
        | to integrating AI with ourselves. That is, self-donated
        | organoids developed in a matrix with a systems chip, then
        | connecting our brain or brain stem to this organoid matrix,
        | essentially making the organoid matrix a bridge interface
        | between the synthetic and the biological.
        
       | MrGuts wrote:
       | Ah yes, after I retire, I want my leftover brain tissue
       | integrated with some electronic hardware. Then I can code 24/7
       | while eating only the best caffeinated agar.
        
       | lawlessone wrote:
        | Can it scale though? The electronic equivalent can be copied,
        | and as many instances as needed can be set up when demand is
        | high and killed when demand is low.
       | 
       | I don't think these could have the same throughput... and maybe
       | they would get bored when demand is low.
       | 
       | Interesting research though.
        
       | FrustratedMonky wrote:
       | Lab Grown Brains + AI = ??
        
       | Andrex wrote:
       | So on a scale of "1" to "The Matrix," we're at about a 6 right
       | now?
        
       ___________________________________________________________________
       (page generated 2023-12-13 23:01 UTC)