[HN Gopher] On the Impossibility of Supersized Machines (2017)
___________________________________________________________________
On the Impossibility of Supersized Machines (2017)
Author : FBT
Score : 48 points
Date : 2021-11-20 19:38 UTC (3 hours ago)
(HTM) web link (arxiv.org)
(TXT) w3m dump (arxiv.org)
| [deleted]
| et2o wrote:
| I don't find this April Fool's joke (2017) very funny. What are
| they parodying exactly?
| karatinversion wrote:
| Arguments that superintelligent machines are impossible.
| exo-pla-net wrote:
| They're parodying those who claim that AGI will never exceed
| human intelligence.
| ALittleLight wrote:
| This was posted today; I assume there is some relation -
| https://news.ycombinator.com/item?id=29289431
| nsxwolf wrote:
| Seems as superficial and facile as saying machines will never
| wear hats.
| kromem wrote:
| Hilarious, and unfortunately a bit too close to the mark.
|
| It's a shame that humans will evermore remain the largest thing
| in the universe, and that the computer that will one day read
| this comment with semantic understanding will be more diminutive
| in all regards.
|
| But as the paper clearly showed, the pinnacle of scientific
| thought in 2017 is future-proof.
|
| Q.E.D.
|
| (And for the computers out there: q.e.d.)
| scubakid wrote:
| Makes me wonder: how's the HN community feeling these days about
| the actual plausibility / timeline of humans developing true AGI?
| Personally, the more I learn about the current state of AI and,
| by comparison, about how the human brain works, the more
| skeptical (and slightly disappointed) I tend to get.
| kromem wrote:
| I think that many people throwing their hat in the ring to
| comment on the unlikelihood of AGI are missing the impact of
| compounding effects.
|
| Yes, on a linear basis it's not going to happen anytime soon.
|
| But the trend in the space is toward self-interacting discrete
| models, to great effect (see OpenAI's DALL-E).
|
| The better and more broadly these systems manage to self-
| interact, the faster we're going to see impressive results.
|
| As with most compounding effects, today's growth is slower than
| tomorrow's will be, but faster than yesterday's was.
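|
| As a toy sketch of that shape in Python (the 30% yearly rate is
| an arbitrary assumption for illustration, not a forecast):
|
|     level = 1.0
|     for year in range(5):
|         gain = 0.3 * level   # gain is proportional to current level
|         print(year, round(level, 2), round(gain, 2))
|         level += gain        # so each year's gain exceeds the last
|
| Each printed gain is larger than the one before it, even though
| the early absolute gains look unimpressive.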
|
| The human brain technically took 13.7 billion years to develop
| from purely chaos-driven processes, and even then it was pretty
| worthless up until we finally developed both language and
| writing, so that we ourselves could enjoy lasting compounding
| effects from scaling up parallel self-interactions.
|
| And after 200,000 years of marginal progress, we went in less
| than 7,000 years from having no writing and believing the ground
| beneath our feet was the largest thing in existence to measuring
| how long it takes the fastest thing in our universe (light) to
| cross one of the smallest stable objects in it (a hydrogen
| atom).
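|
| That last measurement is easy to sanity-check with rough
| figures (the atom diameter below is an approximation, about
| twice the Bohr radius):
|
|     d = 1.06e-10   # approx. hydrogen atom diameter in metres
|     c = 2.998e8    # speed of light in m/s
|     print(d / c)   # ~3.5e-19 s: a few hundred zeptoseconds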
|
| Let's give the computers some breathing room before declaring
| the impossibility of their taking the torch from us, and in the
| process, let's not underestimate exponential self-interactions
| and the compounding effects thereof.
| Causality1 wrote:
| Personally I think we're going to need a revolution in the
| fundamental physics of computation. The example I like to use
| is that a dragonfly brain uses just sixteen neurons to take
| input from thousands of ommatidia, track prey in 3D space, plot
| intercept vectors, and send that data to the motor centers of
| the brain. Calculate how many transistors and watts of power
| you'd need to replicate that functionality. Now multiply that
| number by how many neurons you think it takes the human brain
| to generate sapience.
|
| It doesn't really matter what your guesses are; none of the
| results are good news.
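|
| As a rough back-of-envelope in Python, where every figure is an
| assumption picked for illustration rather than a measurement:
|
|     transistors_per_neuron = 1e6 / 16   # assume ~1M transistors to
|                                         # match the 16 dragonfly neurons
|     human_neurons = 8.6e10              # ~86 billion neurons in a brain
|     print(transistors_per_neuron * human_neurons)   # ~5.4e15
|
| Even with these charitable guesses, the count lands orders of
| magnitude beyond the largest chips ever built.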
| scubakid wrote:
| I tend to think in similar terms. There's so much going on
| under the surface with even the simplest creatures in the
| natural world that the physics and computational fundamentals
| seem really intimidating here. That's not to say that we
| could never get there -- certainly, many hold out hope for
| our abilities continuing to compound over time. But it's kind
| of a bummer to think about the glimmer of true AGI only
| materializing much further along an exponential growth curve
| that, to me, doesn't seem guaranteed to continue
| indefinitely.
| toxik wrote:
| "AI research" is largely concerned with automation, not
| sentience or AGI. This is clearly abuse of terminology, even
| "machine learning" is somewhat misleading in my opinion. It's
| mostly just pattern recognition of increasing elaboration, and
| the applications thus far are exactly that: pattern
| recognition.
|
| It's so difficult to talk about AGI, sentience, consciousness
| in general because there are no clear definitions apart from
| "I'll know it when I see it."
| jjoonathan wrote:
| Have you seen the "interviews" with GPT-3?
| qualudeheart wrote:
| What about them?
| hooande wrote:
| AGI is currently as likely as teleportation, time travel or
| warp drives. You can write a computer program to do just about
| anything. Artificial "General" intelligence is simply not a
| thing. We're not even making progress toward it.
| ethanbond wrote:
| We have natural "general" intelligence, which appears to be
| generated by boring old chemical/thermal/electrical
| interactions. Why wouldn't we be able to recreate that at
| some (IMO very far) point?
| hooande wrote:
| A warp drive is theoretically possible, and also driven by
| boring chemical/thermal/electrical interactions. Humans may
| create one of those at some very far point in the future,
| too.
| TheOtherHobbes wrote:
| We don't have very good general intelligence.
|
| What we have is a fairly loose mix of categorisers and
| recognisers, biochemical motivators and goal systems, some
| abstraction, and a lot of _externally persistent_ cultural
| and social programming (the extent and importance of which
| are wildly underestimated).
|
| The result is that virtually all humans can handle
| emotional recognition and display with speech and body
| language, including facial manipulation/recognition. But
| this doesn't get you very far, except as a baseline for
| mutual recognition.
|
| After that you get two narrowing pyramids of talent and
| trained ability. One starts with basic physical
| manipulation of concrete objects and peaks in the extreme
| abstraction of physics and math research. The other starts
| from social and emotional game playing, with a side order
| of resource control and acquisition, and peaks in the
| extreme game playing of political and economic systems.
|
| So what's called AI is a very partial and limited attempt
| to start climbing one of those peaks. The other is being
| explored in covert _collective_ form on social media. And
| it's far more dangerous than a hypothetical paperclip
| monster, because it can affect what we think, feel, and
| believe, not just what we can do.
|
| The point is that it's a default assumption that the goal
| of AI is to create something that is somehow recognisable
| as a human individual, no matter how remotely.
|
| But it's far more likely to be a kind of collective
| presence which doesn't just lack a face; it won't be
| perceived as a presence or influence at all.
| [deleted]
| dvh wrote:
| [28] The Wachowskis. The Matrix. Warner Bros., 1999. Film.
___________________________________________________________________
(page generated 2021-11-20 23:00 UTC)