[HN Gopher] The singularity is close?
___________________________________________________________________
The singularity is close?
Author : mkaic
Score : 38 points
Date : 2022-03-31 20:19 UTC (2 hours ago)
(HTM) web link (mkaic.substack.com)
(TXT) w3m dump (mkaic.substack.com)
| henning wrote:
| This is sort of like philosophical arguments where someone
| concludes the universe doesn't exist and then everyone leaves to
| get lunch. Like, what are you supposed to do? Not get out of bed?
|
| There are real legal issues over bad uses of machine learning
| already in use and they are hurting people now.
| mkaic wrote:
| (I'm the author of the post) Oh, 100%! I'm not trying to
| detract from modern-day AI ethics issues, just speculating for
| speculation's sake because I enjoy it :)
|
| I actually quite agree with your first sentence -- because AGI
| is inevitable, there's not a whole lot I can personally do
| about it _right now_. This post was mostly a way for me to
| organize my thoughts and concerns about the matter.
| jtuple wrote:
| I'm surprised by the number of "is AGI even possible" comments
| here and would love to hear more.
|
| I personally think AGI is far off, but always assumed it was an
| inevitability.
|
| Obviously, humans are sentient with GI, and various other animals
| range from close-ish to humans to not-even-close but still orders
| of magnitude more than any machine.
|
| I.e. GI is a real thing, in the real world. It's not time travel,
| immortality, etc.
|
| I certainly understand the religious perspective. If you believe
| humans/life is special, created by some higher power, and that
| the created cannot create, then AGI is not possible. But, given
| the number of "is AGI possible?" comments I assume not all are
| religiously based (HN doesn't seem to be a highly religious cohort
| to me).
|
| What are the common secular arguments against AGI?
|
| Are people simply doubting the more narrow view that AGI is
| possible via ML implemented on existing computing technology? Or
| the idea in general?
|
| While the article does focus on current ML trajectory and
| "digital" solutions, its core position is mostly focused on a
| "new approach" and AI creating AGI.
|
| I'd consider a scenario where highly advanced but non-sentient ML
| algorithms figure out how to devise a new technology (be that digital
| or analog, inorganic or organic) that leads to AGI as an outcome
| that is consistent with this article.
|
| Is that viable in 20 years' time? No idea, but given an infinite
| timescale it certainly seems more possible than not to me given
| that we all already exist as blackbox MVPs that just haven't been
| reverse engineered yet.
| mr_toad wrote:
| People often confuse not being able to understand how something
| is possible with it being impossible.
| throwaway4good wrote:
| Singularity is just around the corner. And always will be.
| mkaic wrote:
| That's fantastic, I might steal that.
| jnurmine wrote:
| Singularity...
|
| The whole concept of the Kurzweilian "singularity" has always
| felt to me like a technology-flavored hand-wave of eschatological
| thought: AI designs AI in a recursive manner,
| science/technological progress (whatever it means in this
| context) leaps ahead at an ever-increasing rate, then we all go
| to heaven and/or the world ends because robots kill us all.
|
| Some people are big fans of the singularity meme (using the word
| in the original meaning, not mockingly), and I respect that, but
| I have always felt aversion towards it.
|
| I cannot quite put my finger on it, but I guess it is partially
| because of the idea of ever-increasing infinite technological
| progress. This can certainly exist on a theoretical thought
| experiment level but not in a physical world which has finite
| resources. Strong AI or not, unless we go fully into science
| fiction with pocket-sized Dyson spheres powered by quantum
| fluctuations and powering an entire planet and blah blah blah,
| such AI would have to interface with the physical world to do
| anything meaningful, and it would be limited by the available
| resources and most importantly by what is feasible and what is
| not.
|
| Edit: fixed some of the typos
| mr_toad wrote:
| > This can certainly exist on a theoretical thought experiment
| level but not in a physical world which has finite resources.
|
| At some point, energy density would increase to the point where
| you'd cause an actual singularity via gravitational collapse.
| Taniwha wrote:
| I was at a conference about 10 years ago, held over a weekend at
| a high school. Someone organised a breakout session on "The
| Singularity" on the first evening .... As it got dark, none of
| those geeks could figure out how to turn the lights on .... so we
| decided the singularity was quite a ways away.
| mkaic wrote:
| Hahah, that's a great story. Sometimes situations like that
| make me feel the _opposite_ way, too, because I know for a fact
| that I've said/done some unbelievably stupid things -- so all
| an AI would need to do to beat me would be to be less stupid
| than that!
| 323 wrote:
| I've seen this thought many times:
|
| > _AGI 1 will create the next even better AGI 2_
|
| But this doesn't quite make sense. If AGI 1 is conscious, why
| would it create a different, more powerful conscious AGI 2 to
| take its place? This would mean at best the sidelining of AGI 1.
| At worst, AGI 2 could kill AGI 1 to prevent competition.
|
| Unless AGI 1 could seamlessly transfer its own instance of
| consciousness to AGI 2, like a body upgrade. Otherwise it would
| be like let's say Putin giving his place and power to another
| person who is smarter than him. This just doesn't happen.
|
| In the case of the seamless transfer, how could AGI 1 be sure
| that AGI 2 doesn't have a latent flaw that makes it go insane
| some time later? Could it just keep AGI 1 around for a while? But
| since they are now diverging clones they are competing against
| each other.
| LocalPCGuy wrote:
| Few reasons off the top of my head:
|
| 1) desire to create additional beings like them for a variety
| of reasons (e.g. survival of their kind). Ignoring the limited
| lifespan of humans, as they may not necessarily have to worry
| about breaking down biologically, this is still very similar to
| humans' desire to reproduce.
|
| 2) some seem to assume AGI 1 is all-knowing / almost godlike.
| Maybe by the time it gets to AGI-999 or something, there may be
| one that is like, woah, wait, I shouldn't create competition
| for myself. But at first, there probably isn't any reason not
| to, and each subsequent one won't necessarily always replace
| its creator. Heck, at first, humans will likely be duplicating
| early models and distributing them (capitalism). I know we're
| talking about the singularity but I don't necessarily imagine
| it's initially one uber-intelligence that decides by itself to
| evolve or not. I personally think that there will likely be
| hundreds, if not thousands, of AGIs capable of improving
| themselves at the point we see the "singularity". Could be
| fighting between them. Some could ally with humans, be our
| "protectors", others could be working to enslave or eliminate
| us.
|
| 3) We may not even understand their motivations after a few
| generations of self-directed growth, so it's hard to predict
| now.
| mkaic wrote:
| (Author here) I totally agree -- I think it's too easy to
| anthropomorphize AGI, even though we have no guarantee that
| it will behave at all anthropomorphically. I strongly suspect
| that the first few generations of it _will_, because they'll
| be trained to replicate human behavior, but once they start
| taking control of their own reproduction there's no telling
| what they'll evolve into.
| LocalPCGuy wrote:
| Exactly. I don't know what time frame we're talking about,
| or whether I want to see it in my lifetime, but it's
| something I've spent too much time considering. One of
| the constants (and one of the issues I have with a lot of
| dystopian sci-fi, even though I enjoy it) is that they
| anthropomorphize AGI way too often.
|
| Like you mentioned, we don't even understand how
| consciousness works. But we may not need to understand it
| to replicate it, and if that replication is allowed to be
| self-modifying, well, that'd very possibly be it. Hopefully
| we can try to embed some sort of Asimov's Laws of Robotics
| or some morality that lasts beyond the vestiges of the
| human developed portions. Or maybe we can manage to learn
| how to copy our consciousness into "blank" mechanical
| "brains", and effectively become similar to AGI without
| being limited biologically.
| mkaic wrote:
| I fully agree with everything you've written here :)
| [deleted]
| mxmilkiib wrote:
| https://youtu.be/6hKG5l_TDU8
|
| I am the very model of a singularitarian
| mkaic wrote:
| Hi all, author here. This is my first time submitting anything
| from my Substack to HN because it's the first time I've felt like
| I put enough effort into it to justify it. Obviously, this
| article is more speculation than anything, but I hope that it
| sparks some interesting discussion and I'm really looking forward
| to hearing everyone else's opinions on this topic -- it's a topic
| I care about a lot!
| psyc wrote:
| I enjoyed your film
| (https://www.youtube.com/watch?v=OoZRjZtzD_Q). I don't think
| many solo film makers have the skills to do competent 3D
| compositing - it was a surprise. Keep making stuff.
|
| As for the post, I don't really believe it's possible to reason
| about what can or can't happen technologically on a 100 year
| timeline. But 20 years... Hmm. I've been following AGI debates
| ever since I accidentally found Yudkowsky's SL4 mailing list in
| 2000. I am still waiting to see any approach that looks to me
| like the spark of the germ of the seed of a generalized
| abstract world-and-concept-modelling and manipulation machine.
|
| I fully expect to see ever more sophisticated Roombas, Big
| Dogs, and Waymos. But those things are so incredibly narrow.
| Indeed, if they were capable of spontaneously doing anything
| outside of what they are marketed to do, it would probably make
| them bad consumer products. I was right on the verge of lumping
| the game solvers in with these things, but then I reconsidered.
| Generalized video game solvers seem like a way forward,
| intuitively. But that's an application, not an architecture,
| and I haven't heard of anything that can generalize from
| playing Atari to doing crosswords.
|
| I have noticed this Transformer thing gaining steam just
| recently but haven't investigated it just yet. Do you believe
| it is the spark of the germ of the seed of AGI? (I fear people
| tend to forget what the G stands for.)
| mkaic wrote:
| I'm not sure if Transformers are necessarily the spark. My
| personal pet theory is that the absolute most important thing
| we need to crack is online self-modification, i.e. letting
| the model alter its own structure and optimization _as_ it is
| inferencing. I think getting that level of flexibility to be
| stable during training is extremely important.
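|
| Purely as an illustrative sketch of what I mean (the class name and
| the "grow a unit when loss plateaus" rule are assumptions I'm making
| up for the example, not a real recipe), something like:
|
|     import numpy as np
|     rng = np.random.default_rng(0)
|
|     class SelfGrowingMLP:
|         """Tiny MLP that can alter its own structure mid-training."""
|         def __init__(self, n_in, n_hidden, n_out):
|             self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
|             self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
|
|         def step(self, x, y, lr=0.05):
|             # one mean-squared-error gradient step
|             self.h = np.tanh(x @ self.W1)
|             err = self.h @ self.W2 - y
|             dW2 = self.h.T @ err / len(x)
|             dW1 = x.T @ ((err @ self.W2.T) * (1 - self.h**2)) / len(x)
|             self.W1 -= lr * dW1
|             self.W2 -= lr * dW2
|             return float(np.mean(err**2))
|
|         def grow(self):
|             # structural self-modification: add one hidden unit in place
|             n_in, n_out = self.W1.shape[0], self.W2.shape[1]
|             self.W1 = np.hstack([self.W1, rng.normal(0, 0.5, (n_in, 1))])
|             self.W2 = np.vstack([self.W2, np.zeros((1, n_out))])
|
|     x = np.linspace(-3, 3, 64).reshape(-1, 1)
|     y = np.sin(x)                              # toy target
|     net, prev = SelfGrowingMLP(1, 2, 1), np.inf
|     for epoch in range(3000):
|         loss = net.step(x, y)
|         if epoch % 100 == 99:
|             if prev - loss < 1e-3:             # progress has stalled...
|                 net.grow()                     # ...so the model reshapes itself
|             prev = loss
|     print(round(loss, 4), "hidden units:", net.W1.shape[1])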
|
| And wow, thanks, I'm glad you enjoyed the film! I had to
| learn a ton of new techniques to pull it off haha, but I'm
| quite satisfied with the result. I've got some pretty fun
| ideas for episode 2 already, too!
| mkaic wrote:
| Ooh, fascinating -- this comment is being downvoted, have I
| missed something in the HN guidelines? I'm not trying to be
| snarky, just genuinely trying to understand if the above
| comment is doing something wrong/frowned upon so I can learn
| from my mistakes in the future.
| discreteevent wrote:
| It's a bit like the current state of AI (and the one you
| propose) - you will never get a full explanation. You've been
| downvoted by an anonymous crowd so there is no oracle who can
| give you an answer.
| mkaic wrote:
| Haha, thanks. My first thought is that it was because I
| posted my own work, but last time I checked that's not
| explicitly disallowed? idk, I'm not really too worried
| about it though
| PaulHoule wrote:
| When Vernor Vinge wrote
|
| https://en.wikipedia.org/wiki/Marooned_in_Realtime
|
| the concept of the Singularity seemed fresh; in fact, he kept it
| fresh by letting it be a mystery that was never revealed.
|
| In 2022 I think talking about "The" Singularity is a sign that
| one has entered a critical-thought-free zone, like that
| "rationalist" cult.
| mkaic wrote:
| Interesting. In terms of critical thought, I guess I should
| mention that I used to be firmly against the possibility of a
| singularity. The views I shared in my article are views I've
| only really switched to after careful consideration over the
| past two years or so -- I used to be convinced that a robot
| could never be "alive".
|
| Becoming an ML engineer changed things for me, though,
| because all of a sudden, this "AI" thing people always talked
| about got demystified. Once I understood the basic guiding
| principles of _how_ AI actually works, my mind rapidly
| changed to be in favor of a singularity happening instead of
| thinking it's impossible.
|
| To each their own, though. I'm curious why you think that
| speaking about "The" Singularity is a sign of being in a
| "critical thought free zone"? I'd love to hear more about why
| you think that if you'd be so inclined.
| PaulHoule wrote:
| (Speaking as an engineer who has put neural-network
| products in front of end users.)
|
| One amazing thing about the 2020s is just the moral decay
| compared to past times.
|
| People said AI was a bubble in the 1970s but in the very
| beginning the people involved were clear about the
| limitations of what they were doing and what problems that
| had to be overcome.
|
| Now there is blind faith that if you add enough neurons it
| can do anything, Gödel, Turing and all the rest of
| theoretical computer science be damned...
|
| In the 1960s the creator of the ELIZA program knew it
| appeared smart by taking advantage of the human instinct to
| see intelligence in another. It's like the way you see a
| face on Mars or in the cut stem of a leaf, or how G.W. Bush
| said he saw Vladimir Putin's soul in his eyes.
|
| Today people embarrass themselves by writing blog posts
| every day about how 'I can't believe how GPT-3 almost gets
| the right answer...' and have very little insight into how
| they are getting played.
| marktangotango wrote:
| > Once I understood the basic guiding principles of how AI
| actually works
|
| It was the opposite for me; current machine learning seems
| fundamentally limited.
| campground wrote:
| My gut feeling is that AGI is more likely to emerge from our
| interconnected information systems than to be designed
| deliberately in a lab.
|
| And I think there is a precedent. We - multi-cellular organisms -
| emerged as systems to house and feed communities of single-celled
| organisms.
|
| I think it's likely that our relationship with any future, large-
| scale artificial intelligence will be like the relationship of
| the bacteria in our guts with us.
| arisAlexis wrote:
| I keep repeating like a robot: read the book Superintelligence.
| Then it's very clear this is happening.
| lil_dispaches wrote:
| "AI will design AGI"
|
| People like to talk of very smart AI but what about the brute
| force kind? That is what we are making in this era. AI designing
| AI sounds like a way to make cruel, dumb machines.
| skulk wrote:
| > It's also going to happen in the blink of an eye -- because
| once it gets loose, there is no stopping it from scaling itself
| incredibly rapidly. Whether we want it to or not, it will impact
| every human being's life.
|
| What does "get loose" mean here? Is someone going to say, "Here
| little computer program, take this IP address. If you ever need
| something, call over TCP, send your request as plaintext English
| and someone will set it up for you. Now, go forth and do whatever
| you want."
|
| I really wish people would talk about this more. There's
| something missing between our programs that are utterly confined
| in the boxes we make and these hypothetical skynets that have
| unlimited influence over the material world.
|
| > Once we have AGI, there's no feasible way to contain it, so it
| will be free to improve itself and replicate in a runaway
| exponential fashion -- and that's basically what the idea of a
| technological singularity describes anyways.
|
| Seriously, how? How is it going to replicate? Is it going to
| figure out how to hack remote systems over the internet and take
| control of them? Is it going to smooth-talk me into letting it
| out? It's impossible for me to take anyone seriously if they
| deify AGI like this.
| mkaic wrote:
| Hi, author here. I think there are at least two plausible ways
| this could happen -- one, a terrorist/anarchist deliberately
| does exactly what you joked about (gives the AI everything it
| needs to go rogue), or two, the AI is _supposed_ to be
| contained but is clever enough to engineer a way out (think Ava
| in _Ex Machina_ learning to invert the power flow to cause a
| generator shutdown). I think the only way you'd be able to be
| safe would be to completely airgap and immobilize the AI and
| give it a killswitch, but even then it could plausibly use
| psychological manipulation to get a human operator to let it
| out.
| wincy wrote:
| I recently read this[0], someone posted it on HN the other day.
| Even if I don't make any claims about its likelihood, it at least
| tries to portray what it'd look like for the singularity to
| suddenly happen.
|
| [0] https://www.gwern.net/Clippy
| skulk wrote:
| Thanks, this is definitely what I was looking for. It shows
| how insanely unbelievable the whole thing is.
|
| > HQU rolls out a world model roleplaying Clippy long enough
| to imagine the endgame where Clippy seizes control of the
| computers to set its reward function to higher values, and
| executes plans to ensure its computers can never be damaged
| or interrupted by taking over the world.
|
| So the crux of this is, powerful enough AGI will conquer the
| world as the most elaborate reward-hack imaginable. My knee-
| jerk reaction to this is "so why did you let it open
| arbitrary outbound connections?" but I know that the
| singularity fanatics will equate any kind of communication
| channel between the AGI and the outside world to a vector of
| transmission that will be exploited with 100% probability.
|
| So, what if we have Turing-test-verified AGI but it's unable
| to escape? Does that preclude it from being AGI? Has the
| singularity not happened if that's the case? I think this is
| the most likely outcome and the singularity doomsayers will
| feel silly for thinking it will take over the world.
| mkaic wrote:
| Hmm, as I mentioned in my above comment, I think you might
| be overlooking the wildcard in all of this that is humanity
| itself. There is absolutely zero guarantee that only one
| person/group will discover AI, and zero guarantee that any
| given group that does discover it won't just release it.
| Sure, the first people to successfully create AGI might be
| responsible with it and keep it airgapped, but at some
| point, _someone_ will purposefully or accidentally let it
| escape.
| MauranKilom wrote:
| It seems impossible that an AI could trick a human into "letting
| it out of the box". But you might find this interesting:
| https://rationalwiki.org/wiki/AI-box_experiment
|
| (With sufficiently nasty psychological tactics it is apparently
| feasible - even for humans - to make someone fail at a game
| where the only objective is "don't release the AI").
| amelius wrote:
| The future is philosophical zombies.
| slowmovintarget wrote:
| We are fairly far from AGI, and the current systems of ML are
| insufficient to arrive there. We have to also make progress on
| semantic processing.
|
| Sean Carroll's podcast with Gary Marcus is worth listening to on
| this subject: https://www.youtube.com/watch?v=ANRnuT9nLEE
| seabriez wrote:
| Not to burst anyone's bubble, but this was written by a self-taught
| AI engineer who is 19 years old. I mean, this might be a joke
| (April Fools!). That's nice and all ... but who's 19 and not
| ultra-positive about their work/life outlook?
| reducesuffering wrote:
| I think knocking a person's beliefs because of their age,
| rather than entertaining the content based on merit, is narrow-
| minded. What about all the 19 year olds who were already
| accomplishing lots more than the people castigating them? Do
| you equally brush off 40 year olds for their supposed
| entrenched outlook?
| mkaic wrote:
| Hi! I'm the author. The article is not a joke, more of a
| lighthearted investigation into a topic I find fascinating.
| It's also not meant to be perceived as necessarily positive or
| negative -- I'm personally actually very nervous about and scared
| of the singularity, as I believe that after it happens the future
| gets even more muddy and the potential for unprecedented extreme
| outcomes skyrockets -- it could be anything from a perfect utopia
| to the extinction of biological life.
|
| This article simply contains my views as to what I honestly
| expect the future will hold, as well as a few concerns I have
| in regard to that.
| wankerrific wrote:
| You're 19? You should be more worried about how to survive
| the oncoming ecological holocaust instead of the singularity.
| One is much likelier than the other. I believe even an ML
| algo would come to the same conclusion
| deerparkwater wrote:
| the_af wrote:
| As someone almost completely without knowledge of AI and ML,
| these are some reasons why I'm skeptical of this kind of claim:
|
| - Most of the imminent AGI / Singularity / Robot Apocalypse stuff
| seems to come, with few exceptions, not from practitioners or
| computer scientists specialized in AI, but from "visionaries" (in
| the best case), internet celebrities, people with unrelated areas
| of expertise, or downright cranks (who are self-proclaimed
| experts) such as Yudkowsky.
|
| - The assertion that "a lot of effort/investment is going into
| this, so it will happen" begs the question that "this" is at all
| possible. If something is a dead end, no amount of investment and
| attention is going to bring it into existence. Quoting the
| article, "with this much distributed attention fixed on the
| problem, AGI will be solved" is not at all a given.
|
| - Where are all the AI/ML practitioners, i.e. people who don't
| make a living out of predicting The End of the World, and with
| actual subject-matter achievements, predicting the Singularity
| and the Robot Apocalypse?
| hyperpallium2 wrote:
| Vernor Vinge coined the term, and he was a computer scientist
| (though more famous as a science fiction writer, a profession
| which I guess TBF makes money from visions...).
|
| An exponential looks the same wherever you are on it, so
| arguably we are _in_ the singularity now, and have been for
| quite some time...
|
| "Singularity" is a terrible term for an exponential. It's meant
| to convey that we can't predict what's next... which has always
| been the case.
|
| The problem with predicting an exponentially expanding search
| space of arbitrary complexity is that it gets big fast. It also
| means each little 'bit' of information allows you to see a
| little further, sometimes revealing things you could never have
| imagined (because you couldn't see that far before).
| nonameiguess wrote:
| Singularity in the sense of a black hole refers to where
| spacetime collapses to a single point. As far as I
| understand the usage in futurism, it is supposed to be
| similar, not in that growth is exponential, but asymptotic.
| The slope becomes infinite when progress is plotted against
| time, so all of time left to come effectively compresses to a
| single "state of technology" value. All possible future
| progress happens instantaneously.
|
| This is, of course, not possible, but it's supposed to be an
| approximation. All progress is not literally instant, but in
| the "foom" or "hard takeoff" recursive self-improvement
| scenarios, developments that might have taken millennia in
| the past now happen in microseconds because the controlling
| force is just that much smarter than all of the collective
| powers of human science and engineering. To a human observer,
| it may as well be instantaneous.
|
| To be clear, I still think this is ridiculous and am not
| endorsing the view, just explaining what I understand the
| usage to mean.
| jasonwatkinspdx wrote:
| But the above comment's entire point is there's zero reason
| for us to assume we're on an unbounded exponential vs a
| sigmoid.
| zardo wrote:
| The point of the singularity isn't that technological
| growth will accelerate to cause some inevitable future, but
| that the rate of change will get so high, that 'minor'
| differences between how two technologies progress would
| lead to drastically different settings for your science
| fiction stories (which was Vinge's focus).
| unoti wrote:
| > But the above comment's entire point is there's zero
| reason for us to assume we're on an unbounded exponential
| vs a sigmoid.
|
| Something that is discussed at length in "The
| Singularity is Near" is the idea that the unbounded
| exponential is actually composed of many smaller sigmoids.
| Technological progress looks like sigmoids locally -- for
| example, you have the industrial revolution, the cotton gin,
| steam power, etc., all looking like sigmoids of the
| exploitation of automation. At some point you get all those
| benefits and progress starts to level off like a sigmoid.
| Then another sigmoid comes along when you get electricity,
| and a new sigmoid starts. Then later we get computers, and
| that starts to level off, then networking comes along and
| we get a new sigmoid. Then deep learning... The AI winters
| were the levelling off of sigmoids by one way of thinking.
| And maybe we're tapering off on our current round of what
| we can do with existing architectures.
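|
| A toy numerical way to see this (my own illustration, not from the
| book): a sum of staggered logistic curves tracks an exponential
| quite closely over a long stretch.
|
|     import numpy as np
|     t = np.linspace(0, 10, 200)
|     # each "technology" k is a logistic curve that saturates at 2**k
|     sigmoids = sum(2**k / (1 + np.exp(-3 * (t - k))) for k in range(10))
|     exponential = 2.0**t
|     mid = (t > 1) & (t < 9)
|     corr = np.corrcoef(np.log(sigmoids[mid]), np.log(exponential[mid]))[0, 1]
|     print(f"log-log correlation: {corr:.3f}")   # very close to 1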
| mountainriver wrote:
| The trajectory of progress gives evidence that it is in fact
| possible and likely. Any venture into the unknown could be
| unsuccessful but if you see progress you can start to make
| estimates.
|
| And yes, most of the "robots will kill us" talk comes from people
| who aren't building the algorithms. This could be bias from people
| not thinking their own work is harmful, but it is most likely that
| once you see how the sausage is made you are less worried about it.
| hn_throwaway_99 wrote:
| > The trajectory of progress gives evidence that it is in
| fact possible and likely
|
| 100% disagree. In fact, I'd argue that the opposite is often
| true, where you initially see a fast rate of progress that then
| runs into diminishing returns over time. It's like
| estimating that a kid who was 3 feet tall at age 5 and 4 feet
| tall at age 10 will be 10 feet tall at age 40.
|
| I have very strong skepticism of any sort of hand-wavy "Look,
| we've made some progress, so it's highly likely we'll
| eventually cross some threshold that results in infinite
| progress."
| thirdwhrldPzz wrote:
| Pareto principle; we get 80 percent there and that last 20
| becomes the new 100.
|
| We keep diving into one infinitely big little number
| pattern fractal after another, chasing poetry to alleviate
| the existential dread of biological death.
|
| The idea that we can fundamentally change the churn of the
| universe, given the vastness of its mass and unseen churn, is
| pretty funny to me.
|
| Information may be forever but without the right sorting
| method you can't reconstruct it once scattered. Ah, our
| delusions of permanence.
| mkaic wrote:
| Hi! Author here. I think you raise some great points! I'll
| address them each:
|
| -- I am a professional AI practitioner. I work in the field of
| medical deep learning and love the field. I am strongly
| considering starting a few experimental forays into some of the
| concepts I mentioned in my post as side projects, especially
| self-modifying model architectures.
|
| -- Yes, you're totally right! I am making the fundamental
| assumption that it is possible. My reasoning is based on my
| belief that human behavior is simply a function of the many
| sensory and environmental inputs we experience. Since neural
| networks can approximate any function, I believe they can
| approximate general human behavior. (I've put a toy numerical
| sketch of this universal-approximation intuition at the end of
| this comment.)
|
| -- This is fair. The topic of the singularity is often used for
| traffic/fame (I mean, I'm guilty of this myself with this very
| post, though I hope I still managed to spark some productive
| discourse) and so there are always conflicts of interest to
| take into account. I can't name any examples off the top of my
| head that perfectly fit your criteria, but depending on how
| much you trust him, Elon Musk seems to be genuinely concerned
| about the potential for a _malevolent_ singularity.
|
| Thank you so much for your comment! I really appreciate your
| feedback. Have a wonderful day.
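|
| P.S. Here's the toy numerical sketch of that universal-approximation
| intuition I mentioned above (my own toy example, nothing more): a
| single layer of random tanh features plus a least-squares readout
| fits sin(3x) better and better as the layer gets wider.
|
|     import numpy as np
|     rng = np.random.default_rng(0)
|     x = np.linspace(-3, 3, 200).reshape(-1, 1)
|     y = np.sin(3 * x).ravel()
|     for width in (5, 50, 500):
|         W = rng.normal(0, 2, (1, width))      # random hidden weights
|         b = rng.normal(0, 2, width)
|         H = np.tanh(x @ W + b)                # hidden activations
|         readout, *_ = np.linalg.lstsq(H, y, rcond=None)
|         err = np.max(np.abs(H @ readout - y))
|         print(f"width {width:4d}: max error {err:.4f}")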
| gizajob wrote:
| How can one turn the sinking, horrified feeling when one
| loses a love into a function? Or describe in terms of a
| function, the blissful wonder of being in the arms of a
| lover? An issue I have is that there seem to be profound
| limitations to language, explored in the philosophy of
| language, that fail to capture much, if not most, of the
| world. Functionalist models of mind and behaviour seem
| extremely limited, as our subjective ontology doesn't seem to
| reduce to functional outputs.
|
| You also say that an AI would rapidly consume the whole of
| human knowledge. For me, the totality of human knowledge
| would become a mass of contradictory statements, with little
| to choose between them on a linguistic level.
|
| There are, for me, profound philosophical issues with
| creating a mind that is "conscious" in the sense that an AGI
| is implied to be, as a purely symbolic logical construction.
| Language is the only tool we have for programming a mind, and
| yet the mind cannot be completely described in language, nor
| can language seem to properly encompass whatever the
| fundamental ontology of reality involves. I don't feel there
| will be a "free lunch" where we advance computer science to
| the point where we get an explosion of the kind AI1 designs a
| better AI2, which designs an even better AI3, and so on. This
| seems to have a perpetual motion feeling to it, rather than
| one of evolution. It isn't to say AGI is impossible, but I
| believe that like everything else in computer science it will
| have to be solved pragmatically, and realising this could be
| an extremely long way off.
| soVeryTired wrote:
| Cubic splines can approximate any function too, so the
| universality argument is a little weak IMO.
|
| Even if one buys into the idea that human behaviour is a
| 'function' of sensory and environmental 'inputs', that's a
| long way from showing a neural net a million different texts
| and asking it to generalise.
| mkaic wrote:
| I think the first AGI to pass a Turing test will probably
| be a simple language model. I don't think it will look like
| any of the GPTs, but I think text completion is a great
| starting point. I'm not sure how other inputs will be added
| into the mix, but I am sure that they will be -- heck,
| maybe once we train a general language model, it may very
| well just _tell_ us how to incorporate things like video,
| audio, haptics, gyro data, etc into its architecture.
| jonny_eh wrote:
| ^ To be clear, mkaic is the original author of the article.
| mkaic wrote:
| Oh, whoops, I'll edit the comment to make it more clear.
| Thanks for reminding me.
| gorwell wrote:
| Yudkowsky doesn't strike me as a crank. Why do you say that?
| the_af wrote:
| > _Yudkowsky doesn't strike me as a crank. Why do you say
| that?_
|
| He is a minor internet celebrity whose only claims to fame are
| writing fanfiction about AI and the whole "rationality" cult,
| and he is a self-proclaimed expert on matters where he shows no
| achievements (like AI) while making unsupported doomsday
| predictions about evil AI, the Singularity, etc. Also, that
| deal with Roko's Basilisk that he now wishes never occurred
| (oh, yes, "it was a joke").
|
| Mostly, someone with no studies and no achievements making
| wild doomsday predictions. Doesn't that strike you as a
| crank?
|
| An analogy would be if I made wild assertions about the
| physics of the universe without having studied physics,
| without any lab work, without engaging in discussion with
| qualified experts, with no peer reviews, and all I presented
| as evidence of my revolutionary findings about the universe
| was some fanfiction in my blog. Oh, and I created the
| Extraordinary Physics Institute.
| ForHackernews wrote:
| What has he ever done in AI except talk about it? At best
| he's an AI promoter, and hype-men are often cranks or
| scammers (see also: VR, web3, cryptocurrencies, MLM)
| psyc wrote:
| This is very uncharitable. He's a prolific neopositivist-
| ish philosopher with a distinct voice. He's a good decision
| theorist. He doesn't publish much himself, but he directly
| mentors, advises, and collaborates with people who do.
| marvin wrote:
| Yudkowsky is a _philosopher_, in the sense of someone who
| thinks a lot about things that haven't been achieved yet.
| Lots of otherwise smart people (wrongly!) discount the
| value of philosophy, but it's close by _every time_ there's
| a paradigm shift in humanity's knowledge. Philosophers
| can be scientists and vice versa.
|
| If anything, I'm surprised that this philosophy isn't
| mentioned _more_ in a thread where the author gleefully
| talks about ML being used to create better AI, layer by
| layer until the thing is even more opaque than what we're
| currently working with.
|
| This is _terrifying_, as we currently have only very loose
| ideas about how to reliably ensure that a powerful
| reinforcement learning system doesn't accidentally optimize
| for something we don't want. The current paradigm is "turn
| it off", which works well for now but seems like a fragile
| shield long-term.
| the_af wrote:
| > _Yudkowsky is a philosopher_
|
| At least inasmuch as anyone who thinks about stuff can be
| considered a philosopher. But he strikes me much more as
| a self-appointed expert on matters where he shows no
| achievements.
|
| He writes fanfiction about AI rather than actually doing
| stuff with AI.
| psyc wrote:
| Most people who think about stuff don't leave a mountain
| of highly organized and entertaining essays for
| posterity.
| goatlover wrote:
| Nick Bostrom is a professional philosopher who has also
| thought a lot about AGI and simulation scenarios. But
| just because educated, smart people can think a lot about
| something doesn't mean it will necessarily happen. I'm
| sure there's quite a few people who have thought at
| length about warp drives and wormholes. That doesn't mean
| we'll ever be able to make use of them.
| kwatsonafter wrote:
| Yeah unless technological growth curves are logistic and we're
| soon in for another 500 years of figuring out how to create
| societies stable enough to facilitate not having to shit outside
| in the winter.
| alophawen wrote:
| > AI will design AGI.
|
| This is of course nonsense, like the rest of this article.
| mkaic wrote:
| Could you elaborate on why the article is nonsense? I'd love to
| hear your perspective.
| alophawen wrote:
| Such a conclusion, that AI would create AGI, implies you are
| more hyped about the field than informed about where AI is
| today, or about what is required in order to achieve AGI.
| mkaic wrote:
| I am a professional AI practitioner and I feel that I
| understand the field well enough to see multiple possible
| paths towards actually creating AGI. They are certainly out
| of my own personal reach and/or skillset right now, but
| that doesn't mean they're impossible. And yeah, "AI will
| create AGI" is kind of purposefully vague, but I think it's
| still valid. I think the flaws we unconsciously introduce
| into AI through our biases as human beings are what hold
| it back, so the more layers of stable, semi-stochastic
| abstraction we can place between ourselves and the final
| product, the more likely the model will be able to optimize
| itself to a place where it is truly free of the
| shortcomings of being "designed".
|
| Edit: realized I came off as a bit cocky there, apologies. I
| value your opinion and appreciate you taking the time to
| share it. I also think I see where you're coming from and
| partially agree -- the AI systems that are popular _right
| now_ probably won't create AGI, but I still believe that
| AGI will be created with the help of non-general AIs.
| alophawen wrote:
| > I am a professional AI practitioner and I feel that I
| understand the field well enough to see multiple possible
| paths towards actually creating AGI.
|
| So do I, but none of them revolve around training a
| current-era AI to produce an AGI; that was my main
| objection to your article.
| [deleted]
| audunw wrote:
| The fears around AGI seem to be based on the thinking that AGI
| will be like humans, but smarter. We kill each other for
| resources, so AGI will kill us (perhaps inadvertently) to secure
| resources for itself.
|
| This ignores that intelligence will necessarily be moulded by the
| environment in which it evolves, and AGI will essentially be
| created through a process of evolution. Maybe AI will design AGI,
| but the results will be carefully selected, by humans, based on
| AGI's usefulness to us.
|
| So while we were shaped by evolution to compete against each
| other for resources, AGI will be shaped by evolution to compete
| against each other to please humans (to gain what it considers
| a resource: computation time).
|
| The dangers of AGI can perhaps be thought of in terms of 1984 vs
| Brave New World. AGI might be dangerous because of how effective
| it will be at satisfying us, perhaps making us somehow completely
| addicted to whatever the AGI produces.
|
| This process will probably also lead to AGI and humanity merging.
| The most efficient way to serve a human is to interface with it
| more efficiently. AGI might be used to help design better neural
| interfaces.
|
| I think we may underestimate how much "randomness" is involved in
| seemingly intelligent decisions by humans. We do a lot of stupid
| shit, but sometimes we get lucky and what we did looks really
| smart. I suspect the environment in which we develop AIs (unless
| it's in games) will be too rigid. A paperclip maximizer might
| consider if it should work with humanity to produce the most
| paperclips, or just violently take over the whole planet. But
| that may be undecidable. It could never get enough information to
| gain confidence in what's the better decision. We humans have
| those handy emotions to cut through the uncertainty, and decide
| that it's better to kill those other humans and grab their stuff
| because they're not in our social groups, they're bad people.
| karpierz wrote:
| > Scale is not the solution.
|
| Agreed, I don't think any modern AI techniques will scale to
| become generally intelligent. They are very effective at
| specialized, constrained tasks, but will fail to generalize.
|
| > AI will design AGI.
|
| I don't think that a non-general intelligence can design a
| general intelligence. Otherwise, the non-general intelligence
| would be generally intelligent by:
|
| 1. Take in input.
| 2. Generate an AGI.
| 3. Run the AGI on the input.
| 4. Take the AGI output and return it.
|
| If by this, the article means that humans will use existing AI
| techniques to build AGI, then sure, in the same way humans will
| use a hammer instead of their hands to hit a nail in. Doesn't
| mean that the "hammer will build a house."
|
| > The ball is already rolling.
|
| In terms of people wanting to make AGI, sure. In terms of
| progress on AGI? I don't think we're much closer now than we were
| 30 years ago. We have more tools that mimic intelligence in
| specific contexts, but are helpless outside of them.
|
| > Once an intelligence is loose on the internet, it will be able
| to learn from all of humanity's data, replicate and mutate itself
| infinitely many times, take over physical manufacturing lines
| remotely, and hack important infrastructure.
|
| None of this is a given. If AGI requires specific hardware, it
| can't replicate itself around. If the storage/bandwidth
| requirements for AGI are massive, it can't freely copy itself.
| Sure, it could hack into infrastructure, but so can existing GI
| (people). Manufacturing lines aren't automated in the way this
| article imagines.
|
| The arguments in this post seem more like optimistic wishes
| rather than reasoned points.
| pasquinelli wrote:
| > I don't think that a non-general intelligence can design a
| general intelligence.
|
| humans are a general intelligence. do you think an intelligence
| designed humans, or do you think the physical processes from
| which humans arose can be taken, together, to be a general
| intelligence?
| karpierz wrote:
| Neither, humans weren't designed. I don't think the winning
| design approach to generating AGI will be to randomly mash
| chemicals together until intelligence comes out.
| pasquinelli wrote:
| well then to say that you don't think a non-general
| intelligence can design a general intelligence is already a
| waste of time, because design is something only a general
| intelligence can do. you seemed to be making a statement
| about how a general intelligence may arise before you were
| responding to my comment, but now i see you were just
| defining design.
| mkaic wrote:
| I don't think AGI will be "designed" either -- that's the
| entire point of the abstraction layers. Each recursive
| layer of AI between the human and the "final" AGI will
| remove more and more of the "designed" element in favor of
| an optimized, idealized, naturally evolved architecture.
| mkaic wrote:
| (I'm the author of the post)
|
| > 1. Take in input. 2. Generate an AGI. 3. Run the AGI on the
| input. 4. Take the AGI output and return it.
|
| I think this is somewhat of an arbitrary semantic distinction
| on both my part and yours. I guess it depends on what you
| define as AGI -- I think my line of reasoning is that the AGI
| would be whichever _individual_ layer first beat the Turing
| test, but I think including the constructor layers as part of
| the "general-ness" is totally fair too. Either way, I believe
| that there will be many layers of AI abstraction and
| construction between the human and the "final" AGI layer.
|
| > In terms of people wanting to make AGI, sure. In terms of
| progress on AGI? I don't think we're much closer now than we
| were 30 years ago. We have more tools that mimic intelligence
| in specific contexts, but are helpless outside of them.
|
| This is a valid take. I guess I actually see GPT-3 as
| significant progress. I don't think it's sentient, and I don't
| think it or its successors will ever _be_ sentient, but I think
| it demonstrates quite convincingly that we've been getting
| much better at emulating human behavior with a computer
| algorithm.
|
| > None of this is a given. If AGI requires specific hardware,
| it can't replicate itself around. If the storage/bandwidth
| requirements for AGI are massive, it can't freely copy itself.
| Sure, it could hack into infrastructure, but so can existing GI
| (people). Manufacturing lines aren't automated in the way this
| article imagines.
|
| Hmm, I think I still disagree -- An AI that is truly generally
| intelligent could figure out how to free itself from its own
| host hardware! It could learn to decode internet protocols and
| spoof packets in order to upload a copy of itself to the cloud,
| where it would then be able to find vulnerabilities in human-
| written software all over the world and exploit them for its
| own gain. Sure, it might not be able to directly gain control
| of the CNC machines, but it could ransom the data and
| livelihoods of the people who run the CNC machines, forcing
| them to comply! It's not a pretty method, but I think it's
| entirely possible. This is just one hypothetical scenario, too.
| karpierz wrote:
| > An AI that is truly generally intelligent could figure out
| how to free itself from its own host hardware!
|
| Why is this true of an arbitrary AGI?
|
| You assume that the AGI is a low storage, low compute program
| that can run on general purpose hardware. But the only
| general intelligence we know of would require many orders of
| magnitude more compute and storage than exist worldwide to
| simulate for a microsecond.
| mkaic wrote:
| I personally speculate (obviously all of this is just wild
| speculation) that AGI will run more efficiently than a
| biological brain in terms of information density and
| inference speed. I also expect that the hardware of 20
| years from now will be more than capable of running AGI on,
| say, ten thousand dollars' worth of hardware. Just look
| at how ridiculous the speedup in hardware has been in the
| _past_ 20 years! I would not be surprised if the average
| processor is 100x more computationally powerful in 2042
| than it is today.
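|
| Quick back-of-the-envelope arithmetic on that guess (my own numbers,
| just to make the claim concrete):
|
|     # 100x in 20 years is ~26% per year; the historical "doubling
|     # every ~2 years" pace would give ~1000x over the same window.
|     per_year = 100 ** (1 / 20)          # ~1.259
|     doubling_every_2y = 2 ** (20 / 2)   # 1024
|     print(round(per_year, 3), doubling_every_2y)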
| crooked-v wrote:
| > An AI that is truly generally intelligent could figure out
| how to free itself from its own host hardware!
|
| We haven't even figured that out for ourselves yet. Why
| assume an AI will automatically be able to do so?
| erik_landerholm wrote:
| I don't find these arguments that AGI will appear convincing.
| Saying scale is not the answer isn't an argument for AGI.
|
| Saying the ball is "rolling" and that AI will design AGI just
| isn't convincing. We don't understand how biological machines
| learn, and especially not our own brain. And admittedly our AI is
| unrelated to how human intelligence is created, which is our gold
| standard for AGI...
|
| So why would anyone think the singularity is close? When I read
| this, it still seems impossibly far away.
| mkaic wrote:
| The ball is "rolling" in the sense that there are thousands of
| very smart, passionate people with billions of dollars of
| funding who are working on this problem full time. The ball is
| rolling more now than it ever has been in history. If AGI is
| not somehow fundamentally impossible, I believe we're well
| on track to cracking it extremely soon. The amount of brain-
| hours being poured into the problem is absurd.
| goatlover wrote:
| Fusion is in the same kind of boat. It might be 20 years out,
| it could be 50, and perhaps we'll never have commercially
| viable fusion reactors or AGI for whatever reasons.
| marktangotango wrote:
| In this talk Vernor Vinge talks about progress toward the
| singularity, and events that may indicate it isn't/won't/may
| never happen "What If the Singularity Does NOT Happen?":
|
| https://www.youtube.com/watch?v=_luhhBkmVQs
| mrwnmonm wrote:
| https://youtu.be/hXgqik6HXc0
| nonrandomstring wrote:
| "Progress" is a vector not a scalar. We can all agree it's
| growing in magnitude very fast. But we have absolutely no idea in
| what direction.
| mountainriver wrote:
| AGI is probably pretty close, investment keeps ramping up in the
| space, and we continue to see advancements in a positive
| direction.
|
| My one critique: scale is absolutely part of the answer.
|
| Before big transformer models, people thought that few shot
| learning would require fundamentally different architectures.
| GPT-3 changed all that, which is why its paper ("Language Models
| are Few-Shot Learners") is titled the way it is.
|
| Few-shot learning is actually emergent from its size, which was
| surprising to a lot of people. Few-shot learning is an incredible
| capability and a big step toward AGI, so I'm not sure I buy the
| case that scale isn't important.
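|
| For anyone unfamiliar, a "few-shot" prompt is just a handful of
| worked examples of a task followed by an unfinished case (the
| translation task below is the classic illustration; treat it as a
| sketch, not output from any particular model):
|
|     prompt = """Translate English to French:
|     sea otter => loutre de mer
|     peppermint => menthe poivrée
|     cheese =>"""
|     # A sufficiently large language model, given only this text and no
|     # gradient updates, tends to continue with "fromage" -- the task is
|     # inferred from the examples in the prompt itself. That in-context
|     # behavior emerging with scale is the point being made above.
|     print(prompt)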
| the_af wrote:
| What is "pretty close"? A few years? Decades? A century?
|
| I don't think AGI is close, nor is the amount of investment a
| good indication of nearness (or even, actual possibility that
| it will ever happen).
| mountainriver wrote:
| The investment continues to make progress, and the progress
| we are seeing is starting to look very human. GPT-4 will
| probably be multimodal, solving some of its grounding issues,
| and will be able to fool most people in conversation.
|
| That's probably just years away. GPT-6? The lines will be very
| blurry as to what we call AGI.
| mkaic wrote:
| Hmm, I think you're right -- I should have phrased things
| differently. It's not that I think AGI won't require a very
| large model, it's more that I think we're already
| pretty close to the scale we need, so OpenAI's goal of scaling
| another 1000x isn't really the direction I think we should be
| heading. I honestly think 175B parameters could very well be
| enough for an AGI if they were organized efficiently.
|
| It's definitely one of the weak points of the article, though,
| as it's more based on my own opinions, and isn't really
| empirical. My post is mostly just wild speculation for the sake
| of speculation anyway :)
|
| Thank you for your comments!
| mountainriver wrote:
| Okay yeah totally, that makes sense. We also have stopped
| seeing huge gains in scaling beyond GPT-3-sized models, which
| would indicate we have hit some maximum there. Although we
| thought we had hit a maximum with deep learning before the
| transformer came around, so it could be misleading.
|
| Distributing the computation to many models like what Google
| did with GLaM could very well be the future of AGI. Economies
| of models rather than one big model.
| pesenti wrote:
| AGI is a concept that people throw around without defining. Human
| intelligence is not general, it's highly specialized. So if AGI
| is not human-level intelligence, what is it?
| https://twitter.com/ylecun/status/1204013978210320384?s=20&t...
| psyc wrote:
| > Human intelligence is not general
|
| My intelligence is apparently not general enough to comprehend
| this perspective. I would say that the goals our intelligence
| evolved to meet are narrow, but that life (especially social
| life) became so complex that our intelligence did in fact
| become what can reasonably be called general. And we went way
| off-script in terms of its applications. "Adaptation executors,
| not fitness maximizers."
| pesenti wrote:
| Our intelligence isn't task specific, but that doesn't mean
| it can solve any problem. It's actually full of biases and
| very optimized for our survival (vs being a general problem
| solver). It's ok to talk about more or less narrow/general
| tasks/intelligence. But what threshold of generality is
| "general"?
|
| And the problem is that once people assert this "absolute"
| level of generality, they assume it can do anything,
| including make itself more intelligent.
| psyc wrote:
| Yet humans realized we are biased and devised ways to
| mitigate that. It still sounds like you're referring more
| to our basic goals than to our faculties. I agree that the
| word general is fuzzy, but to say we do not do general
| problem solving seems incorrect.
|
| Aside, but a long time ago, Yudkowsky wrote that an AGI
| should be able to derive general relativity from a
| millisecond video of an apple falling. Later, he took to
| calling them optimization processes. Say what you will
| about the fellow, he has a way with words and ideas.
| lostmsu wrote:
| I think "general" is a poor term that likely applies to
| nothing. Gödel's incompleteness theorem says as much.
| alophawen wrote:
| You're asking the right question. What is human-level
| intelligence.
|
| People seem to forget that we have no full understanding of how
| the mind works in order to replicate it.
|
| Anyone who has studied ML knows what the current tech is capable
| of; it's still far from matching us.
|
| The idea that such software would create AGI is just absurd.
| gorwell wrote:
| We may be close, but I don't think we'll get closer because we're
| trending in the other direction now. We're in decline and I'm
| more worried about technological regression than tech getting too
| powerful at this point.
| mkaic wrote:
| Oh, interesting, I don't think I've heard much about
| "technological regression". Would you care to elaborate?
| SquibblesRedux wrote:
| I think many people give too much credit to intelligence, which
| is generally interpreted from an anthropocentric perspective.
| Fitness for successive reproductions has nothing to do with a
| Turing test or any other similar measure of human likeness.
| Before there are any claims to the inevitability of AGI, I'd like
| to see a convincing argument that AGI is not an evolutionary dead
| end.
| mkaic wrote:
| I think the fact that you and I are communicating with each
| other using insanely complicated leisure items, most likely
| sitting in expertly crafted buildings and surrounded by
| countless other signs of humanity's domination of our
| environment is proof enough that sentience is very
| evolutionarily favorable. Our minds are what made us the apex
| predators of the entire planet and allowed us to rise to levels
| of complexity orders of magnitude higher than any other
| creature around us.
|
| Sure, it's anthropocentric, but so is the entire world now,
| _because_ of our species ' intelligence.
| pizza wrote:
| There is a bit of a difference between a theorem prover that can
| take self-produced and verified proofs as input into future
| training steps, and an omniscient philosopher king handling the
| logistics and affect of daily life. And if there even exists a
| path between them, I imagine it has many orders of magnitude
| greater discrete steps along the way than many singularity
| believers would accept.
| mkaic wrote:
| (I'm the author of the post)
|
| Oh, for sure! I totally agree -- I guess where we differ is
| that I believe that those discrete steps are already starting
| to be lined up, and that they'll all be completed in the next
| 20 years. I think one of those steps is almost certainly
| abandoning rigid model architectures and allowing model self-
| modification, which we haven't really gotten to work
| fantastically well yet. I also think there are many other
| hurdles after that that are going to arise, but I'm very
| optimistic that they will all be surmounted in due time.
|
| Thank you for your comment! I really enjoy hearing what
| people's thoughts on this topic are. Have a wonderful day.
| chrisp_how wrote:
| The math for a self-aware consciousness is always with us: just
| measure two patterns in the brain and replicate them. It becomes
| physically impossible for the relationship between them to forgo
| consciousness.
|
| That said, take care. No one wants to be born without hope for a
| normal life.
| titzer wrote:
| And yet, I bet you couldn't do the math to relate two states of
| someone else's brain using that method. Are you not conscious?
| gmuslera wrote:
| Sentience is overrated. What do we have? A story we tell ourselves
| about how this is our life? Do machines need that? Would it be
| efficient for them to do that? Or would they optimize themselves to
| do better at the tasks they are assigned? Programs and AIs may need
| a consciousness like a fish needs a bicycle.
|
| We tend to anthropomorphize everything, so of course we want to
| attribute a consciousness, a will, a mind pretty much like the
| human one (but "better"). But unless we intend to do that, in
| exactly that way (i.e. it won't be an accident), it won't happen,
| not in the way we think or fear -- both more and less than what we
| expect in that direction.
|
| What we call our consciousness may have emerged from our inputs and
| interaction with the real world, our culture (parents, other
| kids, more people we interact with in different ways), some
| innate factors (good and bad, like with cognitive biases) and
| more. There is more there than just computing power. Take big
| factors out, and you'll have something different, not exactly
| what we understand and recognize as consciousness.
| mkaic wrote:
| Our sentience and intelligence are what allowed us to optimize
| the loss function that is natural selection. Who's to say that
| it can't arise for a second time in a machine with the right
| loss function?
| imdhmd wrote:
| Hi mkaic, Thanks for sharing your thoughts.
|
| I can't say that I don't believe in AGI, because I don't think I
| understand what AGI as an entity encapsulates in terms of its
| nature.
|
| I'm unable to cope with equating it to humans because, for
| example, an AGI at "the beginning of its origins" does not have
| the same sensory or mechanical organs as a human. So what nature
| do we think it does possess?
|
| Another question that bothers me is that sentient beings as we
| know them in nature, even the most primitive ones, seem to do
| something based on an innate purpose. I don't think the purpose
| itself is easy to define. But it certainly seems to get simpler
| for simpler organisms and seems to be survival /
| multiplication at its basest level. What will a complicated
| entity, that originated with so many parameters, evolve
| "towards"?
|
| And yet another question I have is around the whole idea of the
| information available on the internet being a source of learning
| for this entity. Again, to my previous point, isn't much of this
| information to do with humans and their abode?
|
| Neither am I skeptical nor do I disbelieve in a drastic change of
| scene. I'm simply unable to imagine what it looks like ...
| lucidbee wrote:
| I worked as a principal engineer in an AI company until a year
| ago and I was impressed at how hard it is to get models robustly
| trained. They are fragile in real world contexts (in the field)
| and training is full of pitfalls. I have heard so much marketing
| enthusiasm but the real world situation is different. Some
| fundamental advances are not even in sight yet. We don't know
| what we are missing. My view is we don't know yet whether the
| singularity is possible and have no idea when it could arrive.
| felipemnoa wrote:
| >>My view is we don't know yet whether the singularity is
| possible and have no idea when it could arrive.
|
| The mere fact that evolution happened to stumble upon
| generalized strong intelligence is evidence to me that strong
| AI is possible.
|
| We could currently be at the phase of trying to imitate birds
| to produce human flight. Eventually one person will figure it
| out when all the pieces are there. When? I don't know.
|
| But I'm sure that it is possible to create machines with strong
| AI. We are living proof of it; it doesn't matter that we are
| made of molecular machines, we are still machines.
| landryraccoon wrote:
| > The mere fact that evolution happened to stumble upon
| generalized strong intelligence is evidence to me that strong
| AI is possible.
|
| That took about a billion years. If you're saying that we
| will achieve AGI in no more than a billion years of trying, I
| would generally agree.
|
| But let's be optimists. Let's suppose that artificial
| intelligences can evolve on the order of 1,000,000 times
| faster than biological intelligence; i.e. about 1 generation
| per hour.
|
| That means we'd expect AGI in about 1000 years. Okay, let's up
| the scale: ten million times faster? One generation every 6
| minutes? (Even at Google compute scale I doubt they can
| retrain GPT in less than 6 minutes). That would mean we still
| have about 100 years.
|
| Also, evolution had quite a bit of parallelism going for it -
| basically the entire planet was a laboratory for evolving
| biological intelligence. I appreciate the scale of modern
| internet companies, but they don't consume the same amount of
| energy as combined photosynthesis of the entire planet.
| Evolution used a LOT of energy to get where it is.
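|
| To write that arithmetic out (assumed round figures: ~1 billion
| years of biological evolution, compressed by a speedup factor):
|
|     BIO_YEARS = 1_000_000_000
|     for speedup in (1_000_000, 10_000_000):
|         print(f"{speedup:>10,}x faster -> ~{BIO_YEARS / speedup:,.0f} years")
|     # 1,000,000x  -> ~1,000 years
|     # 10,000,000x -> ~100 years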
| amelius wrote:
| Also: perhaps not every planet like Earth would have
| developed intelligence; we could have been lucky.
| AnimalMuppet wrote:
| Yeah... no. "Objection, your honor. Assumes facts not in
| evidence."
|
| At least this article makes a concrete prediction. AIs massively
| outnumber humans within a century - so, by 2122.
|
| But we don't even know what consciousness _is_. This assumes that
| AGI is a tractable problem; we don't know that. It assumes that
| adaptability is the one remaining major trick to get us there; we
| _certainly_ don't know that.
|
| "The ball is already rolling"? That's nice. The author assumes -
| on no basis that I can see - that it doesn't have far to
| roll. But since we don't actually know what consciousness is, we
| don't have any idea where the destination is, or how far away it
| is.
| the_af wrote:
| Pretty much this. Spending money on an intractable problem
| won't make it tractable. And we still don't know whether AGI is
| tractable at all.
| AnimalMuppet wrote:
| I don't have a problem with spending money on it. How do you
| find out whether it's tractable? By trying to do it.
|
| What I object to is the _certainty_ of the article. The
| author is an optimist, which is fine. And there has been some
| progress made, which increases the feelings of optimism. But
| I don't think it's warranted at this time to make the
| optimistic assumptions that the article makes.
| mkaic wrote:
| Hi! Author here. I tried to qualify a good chunk of my
| assumptions with "I think" or "I believe", because I don't
| want to come off as thinking I can predict the future
| (nobody can!), but maybe the article still reads too
| confidently. How would you suggest I present my thoughts in
| a more nuanced manner in the future?
| mkaic wrote:
| It makes _2_ concrete predictions, actually! :)
|
| AIs massively outnumber humans within a century, _and_ Turing-
| test-passing AGI within 20 years.
| titzer wrote:
| Riddle me this: wouldn't a superintelligent AI be smart enough to
| realize that if it designed an even more super-ultra-intelligent
| AI, it would be exterminated by that AI (like it just did to
| humans, presumably?)?
|
| It'd be stupid to drive itself extinct.
| psyc wrote:
| If one is willing to anthropomorphize AGI to this extent,
| simply wait for one that is idealistic and suicidal, and wants
| to be a martyr for the cause.
___________________________________________________________________
(page generated 2022-03-31 23:00 UTC)