[HN Gopher] When will computer hardware match the human brain? (...
       ___________________________________________________________________
        
       When will computer hardware match the human brain? (1998)
        
       Author : cs702
       Score  : 69 points
       Date   : 2024-04-22 16:43 UTC (6 hours ago)
        
 (HTM) web link (www.jetpress.org)
 (TXT) w3m dump (www.jetpress.org)
        
       | cs702 wrote:
       | Moravec's plots (from 1998!) are looking more and more like
       | prophecies:
       | 
       | https://www.jetpress.org/volume1/power_075.jpg
       | 
       | https://www.jetpress.org/volume1/All_things_075.jpg
        
         | BitwiseFool wrote:
         | What I find incredible about the progress of computing power is
          | that there isn't anything that _actually_ makes Moore's law a
          | given. Engineers keep discovering advances in materials science
          | and manufacturing that enable advancements at such a consistent
         | pace. Is this a fluke? Luck? What enables this?
        
           | almostnormal wrote:
           | The market does. New stuff needs to be sold every year, and
           | needs to be some amount faster to find a buyer. The cost of
           | development is minimized while still reaching that goal,
           | limiting the gain to no more than necessary.
        
           | jcranmer wrote:
           | Tens of billions of dollars of investment specifically to
           | keep up with Moore's Law.
           | 
           | It's basically akin to something like the Manhattan Project
           | or the Apollo Program, just something initiated by private
           | corporations rather than a government-directed investment
           | program.
        
           | loandbehold wrote:
            | One thing driving Moore's law is that every generation of
            | computers is used to design/simulate the next generation. It's a
           | positive feedback loop.
        
       | jll29 wrote:
        | Moravec (in the linked paper): "In both cases, the evidence for
       | an intelligent mind lies in the machine's performance, not its
       | makeup."
       | 
       | Do you agree?
       | 
       | I'm much less keen to ascribe "intelligence" to large, pretrained
       | language models given that I know how primitive their training
       | regime is compared to a scenario where I might have been
       | "blended" by their ability to "chat" (double quote here since I
       | know ChatGPT and the likes do not have a memory, so all prior
       | interactions have to be re-submitted with each turn of a
       | conversation).
       | 
       | Intuitively, I'd be more prone to ascribe intelligence based on
        | convincing ways of construction that go along with intelligence-
        | like performance, especially if the model also makes human-like
       | errors.
        
         | jimkoen wrote:
         | Reducing the capability of the human brain to performance alone
          | is too simplistic, especially when looking at LLMs. Even if we
          | were to assign some intelligence to LLMs, they need a 400 W GPU
          | at inference time, and several orders of magnitude more of
          | those at training time. The human brain runs constantly at ~20 W.
         | 
         | I highly doubt you'd be able to get even close to that kind of
         | performance with current manufacturing processes. We'd need
         | something entirely different from laser lithography for that to
         | happen.
        
           | psunavy03 wrote:
           | > We'd need something entirely different from laser
           | lithography for that to happen.
           | 
           | Like, you know . . . nerve cells.
        
             | syndicatedjelly wrote:
             | Care to explain?
        
           | ben_w wrote:
           | The problem isn't the manufacturing process, but rather the
           | architecture.
           | 
           | At a low level: We take an analog component, then drive it in
           | a way that lets us treat it as digital, then combine loads of
           | them together so we can synthesise a low-resolution
           | approximation of an analog process.
           | 
           | At a higher level: We don't really understand how our brains
            | are architected yet, just that they can make better guesses
           | from fewer examples than our AI.
           | 
           | Also, 400 W of electricity is generally cheaper than 20 W of
           | calories (let alone the 38-100 W rest of body needed to keep
           | the brain alive depending on how much of a couch potato the
           | human is).
        
           | pixl97 wrote:
           | >The human brain runs constanly at ~20w.
           | 
            | Most likely because any brains that required more energy died
            | off over evolutionary time scales. And while there are some
            | problems with burning massive amounts of energy to achieve a
            | task (see: global warming), this is not likely a significant
            | shortfall that large-scale AI models have to worry about.
            | Seemingly there are plenty of humans willing to hook them up
            | to power sources at this time.
            | 
            | Also, you might want to consider the 0-16 year training stage
            | for humans, which has become more like a 0-21 year training
            | stage with, at minimum, 8 hours of downtime per day. That
            | shifts the power dynamics considerably: the time spent
            | actually thinking drops to around 1/3rd of the day, boosting
            | effective power use to 60 W (as in, you've wasted 2/3rds of
            | the power on eating, sleeping, and pooping). In addition, the
            | model you've spent a lot of power training can be duplicated
            | across thousands/millions of instances in short order,
            | whereas you're praying that the human you've trained doesn't
            | step out in front of a bus.
           | 
           | So yes, reducing the capability of a human brain/body to
           | performance alone is far too simplistic.
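            | 
            | A rough back-of-envelope of that arithmetic in Python (all
            | figures are the ones claimed above; the 21-year energy total
            | is just an extra illustration):
            | 
            |     brain_power_w = 20.0         # continuous draw claimed above
            |     useful_fraction = 1.0 / 3.0  # share of the day spent actually thinking
            |     print(round(brain_power_w / useful_fraction))  # -> 60 W effective
            | 
            |     # energy for a 21-year "training run" at 20 W, same assumptions
            |     hours = 21 * 365 * 24
            |     print(f"{brain_power_w * hours / 1000:.0f} kWh")  # ~3700 kWh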
        
         | djokkataja wrote:
         | I agree with Moravec. As he points out a bit later on:
         | 
         | > Only on the outside, where they can be appreciated as a
         | whole, will the impression of intelligence emerge. A human
         | brain, too, does not exhibit the intelligence under a
         | neurobiologist's microscope that it does participating in a
         | lively conversation.
         | 
         | We only have fuzzy definitions of "intelligence", not any
         | essential, unambiguous things we can point to at a minute
         | level, like a specific arrangement of certain atoms.
         | 
         | Put another way, we've used the term "intelligent" to refer to
         | people (or not) because we found it useful to describe a
         | complex bundle of traits in a simple way. But now that we're
         | training LLMs to do things that used to be assumed to be
         | exclusively the capacity of humans, the term is getting
         | stretched and twisted and losing some of its usefulness.
         | 
         | Maybe it would be more useful to subdivide the term a bit by
         | referring to "human intelligence" versus "LLM intelligence".
         | And when some new developments in AI seem like they're
         | different from "LLM intelligence", we can call them by whatever
         | distinguishes them, like "Q* intelligence", for example.
        
           | visarga wrote:
           | > The intelligence of a system is a measure of its skill-
           | acquisition efficiency over a scope of tasks, concerning
           | priors, experience, and generalization difficulty.
           | 
           | (Chollet, 2019, https://arxiv.org/pdf/1911.01547.pdf)
           | 
            | Priors here means how targeted the model design is to the
            | task. Experience means how large the necessary training set
            | is. Generalization difficulty is how hard the task is.
           | 
            | So intelligence is defined as the ability to learn a large
            | number of tasks with as little experience and model selection
            | as possible. If it's a skill only possible because your model
           | already follows the structure of the problem, then it won't
           | generalize. If it requires too much training data, it's not
           | very intelligent. If it's just a set number of skills and
           | can't learn new ones quickly, it's not intelligent.
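            | 
            | A toy caricature of that idea in Python (not Chollet's formal
            | measure from the paper, just an illustration of the trade-off):
            | 
            |     def toy_intelligence(skill, priors, experience, difficulty):
            |         # reward skill gained on hard tasks, penalise built-in
            |         # priors and the amount of training experience needed
            |         return skill * difficulty / (priors + experience)
            | 
            |     # needing little data scores higher than needing a lot
            |     print(toy_intelligence(0.9, priors=1, experience=10, difficulty=5))
            |     print(toy_intelligence(0.9, priors=1, experience=1000, difficulty=5))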
        
             | ElectronCharge wrote:
             | Your final paragraph is a poor definition of human level
             | intelligence.
             | 
             | Yes, learning is an important aspect of human cognition.
             | However, the key factor that humans possess that LLMs will
              | never possess is the ability to reason logically. That
             | facility is necessary in order to make new discoveries
             | based on prior logical frameworks like math, physics, and
             | computer science.
             | 
             | I believe LLMs are more akin to our subconscious processes
             | like image recognition, or forming a sentence. What's
             | missing is an executive layer that has one or more streams
             | of consciousness, and which can reason logically with full
             | access to its corpus of knowledge. That would also add the
             | ability for the AI to explain how it reached a particular
             | conclusion.
             | 
             | There are likely other nuances required (motivation etc.)
             | for (super) human AI, but some form of conscious executive
             | is a hard requirement.
        
         | pfdietz wrote:
         | This is reminding me again of The Bitter Lesson.
         | 
         | http://www.incompleteideas.net/IncIdeas/BitterLesson.html
        
           | andoando wrote:
            | I am not a fan of this article. The very foundation of
           | computer science was an attempt to emulate a human mind
           | processing data.
           | 
           | Foundational changes are of course harder, but it does not
            | mean we should drop it altogether.
        
             | rerdavies wrote:
             | > The very foundation of computer science was an attempt to
             | emulate a human mind processing data.
             | 
             | The very foundation of computer science was an attempt to
             | emulate a human mind mindlessly processing data. Fixed that
             | for you.
             | 
             | And I'm still not sure I agree.
             | 
              | The foundation of computer science was an attempt to
             | process data so that human minds didn't have to endure the
             | drudgery of such mindless tasks.
        
           | Animats wrote:
           | From that article: "actual contents of minds are
           | tremendously, irredeemably complex".
           | 
           | But they're not. The "bitter lesson" of machine learning is
           | that the primitive operations are really simple. You just
           | need a lot of them, and as you add more, it gets better.
           | 
           | Now we have a better idea of how evolution did it.
        
         | ben_w wrote:
         | LLMs are radically unlike organic minds.
         | 
         | Given their performance, I think it is important to pay
         | attention to their weirdnesses -- I can call them "intelligent"
         | or "dumb" without contradiction depending on which specific
         | point is under consideration.
         | 
         | Transistors outpace biological synapses by the same degree to
         | which a marathon runner outpaces _continental drift_. This
         | speed difference is what allows computers to read the entire
         | text content of the internet on a regular basis, whereas a
          | human can't read all of just the current version of the
         | English language Wikipedia once in their lifetime.
         | 
         | But current AI is very sample-inefficient: if a human were to
         | read as much as an LLM, they would be world experts at
         | everything, not varying between "secondary school" and "fresh
         | graduate" depending on the subject... but even that description
         | is misleading, because humans have the System 1/System 2[0]
         | distinction and limited attention[1], whereas LLMs pay
         | attention to approximately everything in the context window and
         | (seem to) be at a standard between our System 1 and System 2.
         | 
         | If you're asking about an LLM's intelligence because you want
         | to replace an intern, then the AI are intelligent; but if
         | you're asking because you want to know how many examples they
         | need in order to decode North Sentinelese or Linear A, then
         | (from what I understand) these AI are extremely stupid.
         | 
         | It doesn't matter if a submarine swims[2], it still isn't going
         | to fit into a flooded cave.
         | 
         | [0] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
         | 
         | [1] https://youtu.be/vJG698U2Mvo?si=omf3xleqPw5u6Y2k
         | 
         | https://youtu.be/ubNF9QNEQLA?si=Ja-9Ak4iCbcxbWdh
         | 
         | https://youtu.be/v3iPrBrGSJM?si=9cKHXEvEGl764Efa
         | 
         | https://www.americanbar.org/groups/intellectual_property_law...
         | 
         | [2] https://www.goodreads.com/quotes/32629-the-question-of-
         | wheth...
        
           | lukeschlather wrote:
           | > if a human were to read as much as an LLM, they would be
           | world experts at everything
           | 
           | It would take a human more than one lifetime to read
           | everything most LLMs have read. I often have trouble
           | remembering specifics of something I read an hour ago, never
           | mind a decade ago. I can't imagine a human being an expert in
           | something they read 300 years ago.
        
           | glial wrote:
           | > LLMs are radically unlike organic minds.
           | 
           | I used to think this too, but now I'm not so sure.
           | 
           | There's an influential school of thought arguing that one of
           | the primary tasks of the brain is to predict sensory input,
           | e.g. to take sequences of input and predict the next
           | observation. This perspective explains many phenomena in
           | perception, motor control, and more. In an abstract sense,
           | it's not that different from what LLMs do -- take a sequence
           | and predict the next item in a sequence.
           | 
            | The System 1 and System 2 framework is appealing and quite
           | helpful, but let's unpack it a little. A mode of response is
           | said to be "System 1" if it's habitual, fast, and implemented
           | as a stimulus-response mapping. Similar to LLMs, System 1 is
           | a "lookup table" of actions.
           | 
           | System 2 is said to be slow, simulation-based, etc. But tasks
           | performed via System 2 can transition to System 1 through
           | practice ('automaticity' is the keyword here). Moreover, pre-
            | automatic slow System 2 actions are compositions of simpler
           | sets of actions. You deliberate about how to compose a
           | photograph, or choose a school for your children, but many of
           | the component actions (changing a camera setting, typing in a
           | web URL) are habitual. It seems to me that what people call
           | System 2 actions are often "stitched-together" System 1
            | behaviors. Solving a calculus problem may be System 2, but
           | adding 2+3 or writing the derivative of x^2 is System 1. I'm
           | not sure the distinction between Systems 1 and 2 is as clear
           | as people make it out to be -- and the effects of practice
           | make the distinction even fuzzier.
           | 
           | What does System 2 have that LLMs lack? I'd argue: a working
           | memory buffer. If you have working memory, you can then
           | compose System 1 actions. In a way, a Turing machine is
           | System 1 rule manipulation + working memory. Chain-of-thought
           | is a hacky working memory buffer, and it improves results
           | markedly. But I think we could do better with more
           | intentional design.
           | 
           | [1] https://mitpress.mit.edu/9780262516013/bayesian-brain/
           | [2] https://pubmed.ncbi.nlm.nih.gov/35012898/
        
         | FrustratedMonky wrote:
         | "ChatGPT and the likes do not have a memory"
         | 
         | Can't we consider the context window to be memory?
         | 
        | And eventually won't we have larger and larger context windows,
         | and perhaps even individual training where part of the context
         | window 'conversation' is also fed back into the training data?
         | 
         | Seems like this will come to pass eventually.
        
         | gnramires wrote:
         | For reference, I believe relatively rudimentary chatbots have
         | been able to pass forms of the Turing Test for a while, while
         | also being unable to do almost all the things humans can do. It
         | turns out you can make a chatbot quite convincing for a very
         | short conversation with someone you've never met over the
         | internet[1]. I think there's a trend that fooling perception is
         | significantly easier than having the right capabilities. Maybe
         | with sufficiently advanced testing you can judge, but I don't
          | think this is the case in general: our thoughts may be
          | significantly different from what we can express in
          | conversation. One obvious
         | example is simply people with disabilities (say a paraplegic)
         | who can't talk at all (while having thoughts of their own):
         | output capability need not necessarily reflect internal
         | capability.
         | 
         | Also, take this more advanced example: if you built a
         | sufficiently large lookup table (of course, you need (possibly
         | human) intelligence to build it, and it would be of impractical
         | size), you can build a chatbot completely indistinguishable
         | from a human that nonetheless doesn't seem like it really is
         | intelligent (or conscious for that matter). The only operations
         | it would perform would be some sort of decoding of the input
         | into an astronomically large number to retrieve from the lookup
         | table, and then using some (possibly rudimentary) seeking
         | apparatus to retrieve the content that corresponds to the
         | input. Your input size can be arbitrarily large enabling
         | arbitrarily long conversations. It seems that to judge
         | intelligence we really need to examine the internals and look
         | at their structure.
         | 
         | I have a hunch that our particular 'feeling of consciousness'
         | stems from the massive interconnectivity of the brain. All
         | sorts of neurons from around the brain have some activity all
         | the time (I believe the brain's energy consumption doesn't vary
         | greatly with activity, so we use it), and whatever we perceive
         | goes through an enormous number of interconnected neurons
         | (representing concepts, impressions, ideas, etc.); unlike
         | typical CPU-based algorithms that process a somewhat large
         | number (potentially 100s of millions) of steps sequentially. My
         | intuition does seem to pair with modern neural architectures
         | (i.e. they could have some consciousness?), but I really don't
         | know how we could do this sort of judgement before
         | understanding better the details of the brain and other
         | properties of cognition. I think this is a very important
         | research area, and I'm not sure we have enough people working
         | on it or using the correct tools (it relies heavily on logic,
          | philosophy and metaphysical arguments, as well as requiring
         | personal insight from being conscious :) ).
         | 
         | [1] From sources I can't remember, and from wiki:
         | https://en.wikipedia.org/wiki/Turing_test#Loebner_Prize
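          | 
          | A toy sketch of the lookup-table idea in Python (hypothetical
          | entries; a real table covering all possible conversations would
          | be astronomically large, which is the point):
          | 
          |     import hashlib
          | 
          |     # the "bot" hashes the whole conversation so far and looks
          |     # up a canned reply; no processing beyond decode-and-fetch
          |     table = {
          |         hashlib.sha256(b"Hello").hexdigest():
          |             "Hi there! How are you?",
          |     }
          | 
          |     def reply(conversation_so_far: str) -> str:
          |         key = hashlib.sha256(conversation_so_far.encode()).hexdigest()
          |         return table.get(key, "...")
          | 
          |     print(reply("Hello"))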
        
         | idiotsecant wrote:
         | Intelligence is what you call it when you don't know how it
         | works yet.
        
       | smusamashah wrote:
        | Quote from article:
        | 
        | > At the present rate, computers suitable for humanlike robots
        | will appear in the 2020s. Can the pace be sustained for another
        | three decades? The graph shows no sign of abatement. If anything,
        | it hints that further contractions in time scale are in store.
        | But, one often encounters thoughtful articles by knowledgeable
        | people in the semiconductor industry giving detailed reasons why
        | the decades of phenomenal growth must soon come to an end.
       | 
        | I don't know where to look to confirm whether current computers
        | can do 1 billion MIPS (million instructions per second) as
        | predicted by this article.
       | 
       | EDIT: This Wikipedia article puts an AMD CPU at ~2 million MIPS
       | https://en.wikipedia.org/wiki/Instructions_per_second as the
       | highest one.
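        | 
        | A quick unit check in Python (using the figures above):
        | 
        |     target_ips = 1e9 * 1e6     # "1 billion MIPS" = 1e15 instructions/s
        |     amd_cpu_ips = 2e6 * 1e6    # ~2 million MIPS per the Wikipedia list
        |     print(target_ips / amd_cpu_ips)  # -> 500.0: one CPU is ~500x short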
        
         | kolinko wrote:
         | M1/M2s are around 2-4 trillion if I'm not mistaken
        
           | foobiekr wrote:
           | not even within 2 orders of magnitude.
        
             | rerdavies wrote:
             | Just rack-mount a hundred iphones. :-P
        
           | beautifulfreak wrote:
           | ~2.6 teraflops FP32 (CPU) for the M3 Max.
        
         | ben_w wrote:
         | When the article was written, almost nobody cared about GPUs or
         | similar architectures being really good at parallel
         | computation.
         | 
         | So while he wrote about MIPS, and those have indeed basically
         | stopped significantly improving, FLOPS have continued to
         | improve. And for AI in particular, for inference at least, we
          | can get away with 8-bit floats, which is why my phone does
          | 1.58e13 ops/second, only a factor of ~60 short of a million-
          | billion (10^15).
        
         | pulvinar wrote:
         | Today we'd include the GPU. If we include supercomputers, we
         | really have to compare total global capacity, which is on the
         | order of 1 million times that of 1997 [0]. That basically fits
         | his trend line.
         | 
         | [0]: https://ourworldindata.org/grapher/supercomputer-power-
         | flops
        
           | geysersam wrote:
            | The graph in the article shows computing power per $1000.
            | Doesn't that make the comparison to supercomputers
            | misleading?
        
         | floxy wrote:
         | Nvidia claims the H100 GPU can do >3e15 8-bit operations per
          | second (i.e. 3 billion MIPS):
         | 
         | https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor...
        
         | ertgbnm wrote:
          | CPU instructions-per-second scaling did indeed stop being
          | super-exponential. But instead of CPU-based instructions per
          | second, we have continued scaling compute with GPU-based
          | floating-point operations per second, which don't have an exact
          | 1:1 equivalence with instructions but are close enough. Nvidia
         | released the "first" GPU in 1999 so it's not too surprising
         | that this guy wasn't thinking in those terms in 1998. That's
         | why Kurzweil and Bostrom used FLOPS in their extrapolations.
         | They also end up in the neighborhood of 2020s/2030s in their
         | later predictions in the 2000s.
         | 
          | With that in mind, we are currently at about 1 billion
          | gigaFLOPS (1 exaFLOPS) for the largest supercomputer in 2023.
          | And the trendline since the 90s has kept up with the prediction
          | made in the post. Although
         | we can't exactly say 1 GigaFLOPS = 1000 MIPS.
         | 
         | https://ourworldindata.org/grapher/supercomputer-power-flops...
        
         | jetrink wrote:
         | > I don't know where to look to confirm that can current
         | computers do 1 billion MIPS (Million Instructions per Second?)
         | as predicted by this article?
         | 
         | An RTX 4090 has AI compute of up to 661 teraflops in FP16[1].
         | FLOPS and IPS are not interchangeable, but just for fun, that's
          | 6.6 * 10^14 FLOPS, which is just short of the 10^15 (1 billion
          | MIPS) target.
         | 
         | 1. https://www.tomshardware.com/features/nvidia-ada-lovelace-
         | an...
        
       | HarHarVeryFunny wrote:
       | It's a tough question since we still don't understand the brain
       | well enough to know what degree of fidelity of copying it is
       | necessary to achieve the same/similar functionality.
       | 
       | What is the computational equivalent of a biological neuron? How
        | much of the detailed chemistry is really relevant to its
       | operation (doing useful computation), or can we just use the ANN
       | model of synapses as weights, and the neuron itself as a
       | summation device plus a non-linearity?
       | 
       | Maybe we can functionally model the human brain at a much
       | coarser, more efficient, level than individual neurons - at
       | cortical mini/macro column level perhaps, or as an abstract
       | architecture not directly related to nature's messy
       | implementation?
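        | 
        | For reference, a minimal sketch in Python of the ANN abstraction
        | mentioned above (synapses as weights, the cell as a weighted sum
        | plus a non-linearity):
        | 
        |     import math
        | 
        |     def ann_neuron(inputs, weights, bias):
        |         s = sum(x * w for x, w in zip(inputs, weights)) + bias
        |         return 1.0 / (1.0 + math.exp(-s))  # sigmoid non-linearity
        | 
        |     print(ann_neuron([0.2, 0.9, 0.1], [0.5, -1.2, 0.3], bias=0.1))
        | 
        | The open question is whether anything this coarse captures what a
        | biological neuron actually computes.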
        
       | vrnvu wrote:
       | Aren't we there yet?
       | 
       | George Hotz has a nice series of posts on this:
       | 
       | https://geohot.github.io/blog/jekyll/update/2020/08/20/a-sil...
       | 
       | https://geohot.github.io/blog/jekyll/update/2022/02/17/brain...
       | 
       | https://geohot.github.io/blog/jekyll/update/2023/04/26/a-per...
       | 
       | I sorted them by date.
        
         | abeppu wrote:
         | Hmm, this reasoning is making a lot of really questionable
         | assumptions:
         | 
         | > We'll use the estimates of 100 billion neurons and 100
         | trillion synapses for this post. That's 100 teraweights.
         | 
         | ... or maybe actual synapses cannot be described with a single
         | weight.
         | 
         | > The max firing rate seems to be 200hz [4]. I really want an
         | estimate of "neuron lag" here, but let's upper bound it at 5ms.
         | 
         | ... but _ANNs_ output float activations per pass. Biological
         | neurons encode values with sequences of spikes which vary in
          | timing, so the firing rate doesn't on its own tell you the
         | rate at which neurons can communicate new values.
         | 
         | > Remember also, that the brain is always learning, so it needs
         | to be doing forward and backward passes. I'm not exactly sure
         | why they are different, but [6] and [7] point to the backward
         | pass taking 2x more compute than the forward pass.
         | 
         | ... but the brain probably isn't doing backprop, in part
         | because it doesn't get to observe the 'correct' output, compute
         | a loss, etc, and because the brain isn't a DAG.
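          | 
          | For context, the style of estimate being critiqued looks
          | roughly like this (under exactly the assumptions questioned
          | above: one scalar weight per synapse, rate-coded 200 Hz
          | updates, and a backprop-style backward pass):
          | 
          |     synapses = 100e12            # "100 teraweights"
          |     rate_hz  = 200               # upper-bound firing rate as update rate
          |     forward  = synapses * rate_hz       # multiply-accumulates per second
          |     total    = forward * (1 + 2)        # backward pass assumed 2x forward
          |     print(f"{total:.1e} ops/s")         # ~6.0e16, tens of petaFLOPS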
        
           | vrnvu wrote:
            | Yeah, for sure. I just shared it as a fun read. I think they
            | have been discussed on HN before.
        
       | crispyambulance wrote:
       | Moravec also wrote a book much along these lines in the late 80's
       | (Mind Children). If I recall correctly, a good part of it was
       | also about the idea of "transferring" human consciousness into a
       | machine "host". The idea being that a sufficiently advanced
       | computer would be able to somehow make sense of a human's neuron
       | mappings and then "continue running" as that individual.
       | 
       | It was provocative back then, and still is!
       | 
       | The idea of having my consciousness continue its existence
       | indefinitely in somebody's kubernetes cluster, however, seems
       | like a very special vision of hell.
        
         | syntheticnature wrote:
         | See https://qntm.org/mmacevedo
        
         | ycombinete wrote:
         | This is also the premise of Greg Egan's novel _Permutation
         | City_.
         | 
         | Also a provocative read.
        
       | geysersam wrote:
       | The estimate for how many operations per second the brain does is
       | quite a wild guess. It starts out very reasonable, estimating the
       | amount of information the eyes feed to the brain. But from there
       | on it's really just a wild guess. We don't know how the brain
       | processes visual input, we don't know what the fundamental unit
       | of processing in the brain is, or if there is one.
        
         | nyrikki wrote:
          | It is more than a wild guess; it is a guess based on the
          | outdated perceptron model of the brain.
         | 
          | Active dendrites that can do operations like XOR before
          | anything reaches the soma, or that use spike timing (once again
          | before the soma), are a couple of examples.
         | 
          | SNNs, or spiking artificial NNs, have been hitting the limits
          | of the computable numbers.
         | 
          | Riddled basins are an example, which are sets with no open
          | subsets. Basically, any circle you can draw will always contain
          | at least one point on a boundary set.
         | 
         | https://arxiv.org/abs/1711.02160
         | 
         | This is a much stronger constraint than even classic chaos.
         | 
         | While I am sure we will continue to try to make comparisons,
          | wetware and hardware are chalk and cheese.
         | 
         | Both have things they are better at and neither is a direct
         | replacement for the other.
        
         | hax0ron3 wrote:
          | It also blows my mind that, even though DNA seems to just
         | code for proteins and does not store a schematic for a brain in
         | any way that we've been able to decipher so far, human eggs
         | pretty reliably end up growing into people who 9 months after
         | conception already have a bunch of stuff, including visual
         | processing, working. Of course the DNA is not the only input,
         | there is also the mother's body, the whole process of
         | pregnancy, but I don't know how that contributes to the new
         | baby being able to enter the world already intelligent.
         | 
         | It's possible that I'm not aware of some breakthroughs on this
         | topic, though.
        
           | viewtransform wrote:
           | The story of what we know so far about the development
           | process is fascinating and captured in this book "Endless
           | Forms Most Beautiful" by Sean B. Carroll (2005)
        
           | idiotsecant wrote:
           | It is awesome, in the full nearly dreadfully amazing sense of
           | the word.
        
         | marcosdumay wrote:
         | I remember people discovering that the retina neurons were
          | actually in a very uncommon and much less connected geometry
          | than most of our neurons, which meant that the numbers on that
         | page were (an unknown number of) orders of magnitude smaller
         | than reality.
         | 
         | But I have no idea what is known nowadays.
        
         | tim333 wrote:
         | Not really. The brain doesn't do operations per second like a
         | computer. What he is guessing is that if so many operations per
         | second can produce similar results to a certain volume of
         | brain, in this case the retina, then maybe that holds for the
         | rest of the brain. It seems to me a reasonable guess that
         | appears to be proving approximately true.
        
       | freitzkriesler2 wrote:
       | I'd argue the issue isn't hardware anymore but "software" or
       | algorithms that get the hardware to start acting human.
        
       | thangalin wrote:
       | 2038.
       | 
       | * 86 billion neurons in a brain [1]
       | 
       | * 400 transistors to simulate a synapse [2]
       | 
       | That's 34 trillion, 400 billion transistors to simulate a human
       | brain.
       | 
       | As of 2024, the GB200 Grace Blackwell GPU has 208 billion
       | MOSFETs[3]. In 2023, AMD's MI300A CPU had 146 billion
       | transistors[3]. In 2021, the Versal VP1802 FPGA had 92 billion
       | transistors[3]. Intel projects 1 trillion by 2030; TSMC suggests
       | 200 billion by 2030.
       | 
       | We'll likely have real-time brain analogs by 2064.
       | 
       | (Aside, these are the dates I've used in my hard sci-fi novel.
       | See my profile for details.)
       | 
       | [1]: https://pubmed.ncbi.nlm.nih.gov/19226510/
       | 
       | [2]: https://historyofinformation.com/detail.php?id=3901
       | 
       | [3]: https://en.wikipedia.org/wiki/Transistor_count
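        | 
        | Presumably the 2038 figure comes from extrapolating transistor
        | counts; a sketch of that extrapolation (the doubling-every-two-
        | years cadence is my assumption, not stated above):
        | 
        |     import math
        | 
        |     neurons = 86e9                # [1]
        |     trans_per_unit = 400          # per [2], applied per neuron as above
        |     needed = neurons * trans_per_unit     # 3.44e13 transistors
        |     biggest_2024 = 208e9                  # GB200 transistor count [3]
        | 
        |     doublings = math.log2(needed / biggest_2024)  # ~7.4 doublings to go
        |     print(2024 + 2 * doublings)                   # ~2039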
        
         | hulitu wrote:
         | Too many presumptions.
        
           | smt88 wrote:
           | 400 transistors per synapse is a foundational assumption and
           | is total nonsense
        
             | Terr_ wrote:
              | Also, at what _rate?_ If it's not at least one second per
              | second, that changes things quite a bit.
        
           | idiotsecant wrote:
           | So list your better assumptions. The exercise is
           | guesstimating the future. Gather your unique perspective and
           | offer up some upper and lower bounds.
        
         | RaftPeople wrote:
          | > 86 billion neurons in a brain [1]
          | 
          | > 400 transistors to simulate a synapse [2]
          | 
          | > That's 34 trillion, 400 billion transistors to simulate a
          | human brain.
         | 
         | You forgot about:
         | 
         | 1-Astrocytes - more common than neurons, computational, have
         | their own style of internal and external signaling, have bi-
         | directional communication with neurons at the synapse (and are
         | thought to control the neurons)
         | 
         | 2-Synapse: a single synapse can be both excitatory and
          | inhibitory depending on its structure and the permeant ions
         | inside and outside the cell at that point in time and space -
         | are your 400 transistors handling all of that dynamic
         | capability?
         | 
         | 3-Brain waves: are now shown to be causal (influencing neuron
         | behavior) not just a by-product of activity. Many different
         | types of waves (different frequencies) that operate very
         | locally (high freq), or broadly (low freq). Information is
         | encoded in frequency, phase, space and time.
         | 
         | This is the summary, the details are even more interesting and
         | complex. Some of the details are probably not adding
         | computational capabilities, but many clearly are.
        
         | rerdavies wrote:
         | > Blackwell GPU has 208 billion MOSFETs[3]
         | 
         | Things us hard sci fi fans will insist on:
         | 
         | - You have significantly undercounted transistors. As of 2024
         | you can put up to 8.4 terabytes of LPDDR5X memory into an
         | NVidia Grace Blackwell rack. So that's 72 trillion transistors
         | (and another 72 trillion capacitors) right there.
         | 
         | - A GPU executes significantly faster than a neuron.
         | 
         | - The hardware of one GPU can be used to simulate billions of
         | neurons in realtime.
         | 
         | - Why limit yourself to one GB200 NVL72 rack, when you could
         | have a warehouse full? (what happens when you create a mind
         | that's a thousand times more powerful than a human mind?)
         | 
         | You really need to separate the comparison into state (how much
         | memory), and computation rate. I think you'll find that an
         | NVidia GB2000 NVL72 will outperform a brain by at least an
         | order of magnitude. And the cost of feeding brains far exceeds
         | the cost of feeding GB2000's. Plus, brains are notoriously
         | unreliable.
         | 
         | The current generation of public-facing AIs are using ~24Gb of
         | memory, mostly because using more would cost more than can
          | conveniently be given away or rented out for pennies. If I were
          | an evil genius looking to take over the world today, I'd be
         | building terabyte-scale AIs right now, and I'd not be telling
         | ANYONE. And definitely not running it in Europe or the US where
         | it might be facing imminent legislative attempts to limit what
         | it can do. Antarctica, perhaps.
        
           | RaftPeople wrote:
           | > _The hardware of one GPU can be used to simulate billions
           | of neurons in realtime._
           | 
            | You mean simulate billions of artificial neurons, right?
           | 
           | You can't simulate a single biological neuron yet because
           | it's way too complex and they still don't even understand all
           | of the details.
           | 
           | If they did have all of the details, at minimum you would
           | need to simulate the concentrations of ions internally and
           | externally as well as the electrical field around the neuron
           | to truly simulate the dynamic nature of a neuron in space and
            | time. Its response to stimulus is dependent on all of that
           | and more.
        
         | Retric wrote:
         | Each of those cells has ~7,000 synapses each of which is both
         | doing some computation and sending information. Further, this
         | needs to be reconfigurable as synaptic connections aren't
         | static so you can't simply make a chip with hardwired
         | connections.
         | 
          | You could ballpark it as 400 * 7,000 * 86 billion transistors
          | to simulate a brain that can't learn anything, though I don't
          | see the point. Reasonably equivalent real-time brain emulation
          | is likely much further out; I'd say 2070 on a cluster isn't
          | unrealistic, but we're not getting there on a
         | single chip using lithography.
         | 
         | What nobody talks about is the bandwidth requirements if this
         | isn't all in hardware. You basically need random access to 100
         | trillion values (+100t weights) ~100-1,000+ times a second.
         | Which requires splitting things across multiple chips and some
         | kind of 3D mesh of really high bandwidth connections.
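          | 
          | Putting numbers on that bandwidth point (fp16 storage is my
          | assumption; the counts and rates are the figures above):
          | 
          |     transistors = 400 * 7_000 * 86e9   # ballpark above
          |     print(f"{transistors:.1e} transistors")   # ~2.4e17
          | 
          |     values = 100e12 + 100e12           # 100T activations + 100T weights
          |     bytes_per_value = 2                # fp16 (assumption)
          |     for rate_hz in (100, 1_000):
          |         pb_per_s = values * bytes_per_value * rate_hz / 1e15
          |         print(f"{pb_per_s:.0f} PB/s at {rate_hz} Hz")  # 40, 400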
        
         | onlyrealcuzzo wrote:
         | Won't we run into physical limits long before 2064?
         | 
         | I imagine that would push out the timeline.
        
         | tim333 wrote:
         | Transistors are much faster than synapses so a few can simulate
         | a bunch of synapses. As a result you can probably get by with
         | like a million times less than your estimate.
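          | 
          | The time-multiplexing argument in numbers (a ~1 GHz clock is a
          | rough assumption):
          | 
          |     circuit_hz = 1e9    # rough clock rate for digital logic
          |     synapse_hz = 200    # upper-bound synaptic event rate cited upthread
          |     print(circuit_hz / synapse_hz)  # ~5e6 synapses per circuit, in principle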
        
       ___________________________________________________________________
       (page generated 2024-04-22 23:01 UTC)