[HN Gopher] Thousands of AI Authors on the Future of AI
___________________________________________________________________
Thousands of AI Authors on the Future of AI
Author : treebrained
Score : 58 points
Date : 2024-01-08 21:23 UTC (1 hour ago)
(HTM) web link (arxiv.org)
(TXT) w3m dump (arxiv.org)
| bbor wrote:
| Very interesting, especially the huge jump forward in the first
| figure and a possible _majority_ of AI researchers assigning
| >10% probability to the Human Extinction outcome.
|
| To AI skeptics bristling at these numbers, I've got a potentially
| controversial question: what's the difference between this and
| the scientific consensus on Climate Change? Why heed the latter
| and not the former?
| lainga wrote:
| A climate forcing has a physical effect on the Earth system
| that you can model with primitive equations. It is not a social
| or economic problem (although removing the forcing is).
|
| You might as well roll a ball down an incline and then ask me
| whether Keynes was right.
| michael_nielsen wrote:
| We have extremely detailed and well-tested models of climate.
| It's worth reading the IPCC report - it's extremely
| interesting, and quite accessible. I was somewhat skeptical of
| climate work before I began reading, but I spent hundreds of
| hours understanding it, and was quite impressed by the depth of
| the work. By contrast, our models of future AI are very weak.
| Papers like the scaling laws paper or the Chinchilla paper
| are far less convincing than the best climate work. And
| arguments like those in Nick Bostrom's or Stuart Russell's
| books are much more conjectural and qualitative (& less well-
| tested) than the climate argument.
|
| I say this as someone who has written several pieces about xrisk
| from AI, and who is concerned. The models and reasoning are
| simply not nearly as detailed or well-tested as in the case of
| climate.
| kranke155 wrote:
| Because the AI human extinction idea is entirely conjecture,
| while climate change is just a progression of current models.
|
| What's the progression that leads to AI human extinction?
| blamestross wrote:
| Profit motive
| idopmstuff wrote:
| > If science continues undisrupted, the chance of unaided
| machines outperforming humans in every possible task was
| estimated at 10% by 2027, and 50% by 2047.
|
| Maybe I'm overly optimistic (or pessimistic depending on your
| point of view, I suppose), but 50% by 2047 seems low to me. That
| just feels like an eternity of development, and even if we
| maintain the current pace (let alone see it accelerate as AI
| contributes more to its own development), it's difficult for me
| to imagine what humans will still be able to do better than AI a
| decade or more from now.
|
| I do wonder if the question is ambiguously phrased and some
| people interpreted it as pure AI (i.e. just bits) while others
| answered it with the assumption that you'd also have to have the
| sort of bipedal robot enabled with AI that would allow it to take
| on all the manual tasks humans do.
| sdenton4 wrote:
| Yeah, it really comes down to the question of how we advance on
| just-bits vs constrained-environment robotics vs open-domain
| robotics...
|
| Some interesting work here on using LLMs to improve on
| open-domain robotics:
| https://arstechnica.com/information-technology/2023/03/embod...
| bcrosby95 wrote:
| What is the "current pace". Last year? Last 5 years? Last 20
| years?
|
| If you mean the last year, is that pace maintainable?
| Workaccount2 wrote:
| If you gave printouts of discussions with GPT-4 to AI
| researchers 5 years ago, they would have told you conversation
| like that was 10 or 20 years out.
| drtz wrote:
| I'm of the opposite opinion. I think there's some Dunning-
| Kruger-like effect at play on a macro scale and it's causing
| researchers to feel like they're closer than they are because
| they're in uncharted territory and can't see the complexity of
| what they're trying to build.
|
| Or maybe I'm just jaded after a couple decades of consistently
| underbidding engineering and software projects :)
|
| edit: Fix typo
| mistrial9 wrote:
| a single number as a percentage is not useful here.. Intense
| video games? Of course.. Plumbing professionals in a city? Not
| even close.. etc
| jelsisi wrote:
| As a 20-something, this makes me so excited to be alive.
| teddyh wrote:
| "I've come up with a set of rules that describe our reactions
| to technologies:
|
| 1. Anything that is in the world when you're born is normal and
| ordinary and is just a natural part of the way the world works.
|
| 2. Anything that's invented between when you're fifteen and
| thirty-five is new and exciting and revolutionary and you can
| probably get a career in it.
|
| 3. Anything invented after you're thirty-five is against the
| natural order of things."
|
| -- Douglas Adams, The Salmon of Doubt
| vladms wrote:
| That's really a characteristic of being 20 (with the "this"
| being different things for different people). Good for everybody
| that manages to keep the feeling later as well, but it is
| definitely easier at 20...
| endisneigh wrote:
| Maybe I'm too pessimistic, but I doubt we will have AGI by even
| 2100. I define AGI as the ability of a non-human intelligence to
| do anything any human has ever done or will do with technology
| that does not include the AGI itself.*
|
| * It also goes without saying that by this definition I mean to
| say that humanity will no longer be able to meaningfully help in
| any qualitative way with respect to intellectual tasks (e.g. AGI
| > human; AGI > human + computer; AGI > human + internet; AGI >
| human + LLM).
|
| Fundamentally I believe AGI will never happen without a body. I
| believe intelligence requires constraints and the ultimate
| constraint is life. Some omniscient immortal thing seems neat,
| but I doubt it'll be as smart since it lacks any constraints to
| drive it to growth.
| Nevermark wrote:
| If we consider OpenAI itself, a hybrid corporation/AI system,
| its constraints are obvious.
|
| It needs vast resources to operate. As the competition in AI
| heats up, it will continually have to create new levels of
| value to survive.
|
| Not making any predictions about OpenAI, except that as its
| machines get smarter, they will also get more explicitly
| focused on its survival.
|
| (As opposed to the implicit contribution of AI to its creation
| of value today. The AI is in a passive role for the time
| being.)
| murderfs wrote:
| > I define AGI as the ability of a non-human intelligence to do
| anything any human has ever done or will do with technology
| that does not include the AGI itself.*
|
| That bar is insane. By that logic, _humans_ aren't
| intelligent.
| endisneigh wrote:
| What do you mean? By that same logic humans definitionally
| already have done everything they can or will do with
| technology.
|
| I believe AGI must be definitionally superior. Anything else
| and you could argue it's existed for a while; e.g., computers
| have been superior at adding numbers for basically their entire
| existence. Even with reasoning, computers have been better
| for a while. Language models have allowed for that reasoning
| to be specified in English, but you could've easily written a
| formally verified program in the 90s that exhibits better
| reasoning in the form of correctness for discrete tasks.
|
| Even with game playing, Go and Chess, games that require
| moderate to high planning skill, are all but solved by
| computers, but I don't consider them AGI.
|
| I would not consider N entities that can each beat humanity
| at the Y tasks humans are capable of to be AGI, unless some
| system X is capable of picking the right N for a given Y
| without explicit prompting. It would need to be a single
| system. That being said, I could see one disagreeing, haha.
|
| I am curious if anyone has a different definition of AGI that
| cannot already be met now.
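|
| To make the "system X picks N for Y" idea concrete, here is a
| toy sketch in Python (everything here is made up for
| illustration, not a real design):
|
|     from typing import Callable, Dict
|
|     # Hypothetical narrow specialists, one per task family.
|     specialists: Dict[str, Callable[[str], str]] = {
|         "chess": lambda task: "best move: e4",
|         "translation": lambda task: "translated text",
|         "planning": lambda task: "step-by-step plan",
|     }
|
|     def score(domain: str, task: str) -> float:
|         # Stand-in for a learned gating function; here it is
|         # just naive keyword matching.
|         return float(domain in task.lower())
|
|     def system_x(task: str) -> str:
|         # The single system: it picks the specialist N for a
|         # given task Y itself, without explicit prompting.
|         domain = max(specialists, key=lambda d: score(d, task))
|         return specialists[domain](task)
|
|     print(system_x("translation of this contract into German"))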
| dogprez wrote:
| Comparing one entity against the accomplishments of the
| entirety of humanity sets the bar needlessly high. Imagine if we
| could duplicate everything humans could do but it required
| specialized AIs, (airplane pilot AI, software engineer AI,
| chemist AI, etc). That world would be radically different
| than the one we know and it doesn't reach your bar. So, in
| that sense it's a misplaced benchmark.
| endisneigh wrote:
| I imagine AGI to be implemented as something similar to MoE
| (mixture of experts), so it seems fair to me.
| novagameco wrote:
| I'm optimistic in that I hope we don't have AGI by 2100 because
| it sounds like a truly dystopian future even in the best case
| scenario
| albertzeyer wrote:
| You say that a single AGI model should be capable of doing
| everything the whole of humanity has done in the last 100k
| years or so?
|
| Or a group of millions of such AGI instances in a similar time
| frame?
| endisneigh wrote:
| No, not a single model. A single _system_. Based off of
| nothing I expect AGI to basically be implemented like MoE.
| alanbernstein wrote:
| > Fundamentally I believe AGI will never happen without a body
|
| I'm inclined to believe this as well, but rather than "it won't
| happen", I take it to mean that AI and robotics just need to
| unify. That's already starting to happen.
| bufferoverflow wrote:
| That's an unreasonable metric for AGI.
|
| You're basically requiring AGI to be smarter/better than the
| smartest/best humans in every single field.
|
| What you're describing is ASI.
|
| If we have AGI that is on the level of an average human (which
| is pretty dumb), it's already very useful. That gives you a
| robotic paradise where robots do ALL mundane tasks.
| breck wrote:
| > I doubt we will have AGI by even 2100...Fundamentally I
| believe AGI will never happen without a body.
|
| I think this is very plausible--that AI won't really be AGI
| until it has a way to physically grow free from the umbilical
| cord that is the chip fab supply chain.
|
| So it might take Brainoids/Brain-on-chip technology to get a
| lot more advanced before that happens. However, if there are
| some breakthroughs in that tech, so that a digital AI could
| interact with in vitro tissue, utilize it, and grow it, it
| seems like the takeoff could be really fast.
| light_hue_1 wrote:
| I got this survey; for the record I didn't respond.
|
| I don't think their results are meaningful at all.
|
| Asking random AI researchers about automating a field they have
| no idea about means nothing. What do I know about the job of a
| surgeon? My opinion on how current models can automate a job I
| don't understand is worthless.
|
| Asking random AI researchers about automation outside of their
| area of expertise is also worthless. A computer vision expert has
| no idea what the state of the art in grasping is. So what does
| their opinion on installing wiring in a house count for? Nothing.
|
| Take even abstract tasks like translation. If you aren't an NLP
| researcher who has dealt with translation you have no idea how
| you even measure how good a translated document is, so why are
| you being asked when translation will be "fluent"? You're asking
| a clueless person a question they literally cannot even
| understand.
|
| This is a survey of AI hype, not any indication of what the
| future holds.
|
| Their results are also highly biased. Most senior researchers
| aren't going to waste their time filling this out (90% of people
| did not fill it out). They almost certainly got very junior
| people and those with an axe to grind. Many of the respondents
| also have a conflict of interest: they run AI startups. Of course
| they want as much hype as possible.
|
| This is not a survey of what the average AI researcher thinks.
| sveme wrote:
| Does anyone know potential causal chains that bring about the
| extinction of mankind through AI? Obviously aware of Terminator,
| but what other chains would be possible?
| kristianc wrote:
| I'm fascinated by this as well. There's a lot of conjecture
| around what if, but I'm yet to really hear much about the how.
| Nevermark wrote:
| Machines won't need the biosphere to survive.
|
| If they accelerate the burning of fossil fuels, extract and
| process minerals on land and in the ocean without concern for
| pollution, replace large areas of the natural world with solar
| panels, etc., the world could rapidly become hostile for large
| creatures.
|
| An ocean die out as a result of massive deep sea mining would
| be particularly devastating. It's very hard to contain
| pollution in the ocean.
|
| Same for lakes. And without clean water things will get bad
| everywhere.
|
| Ramping up the frequency of space launches by a few orders of
| magnitude, to reach further resources in the solar system,
| could heavily pollute the atmosphere.
|
| Microbes might be fine, and able to adapt to the changes, for
| much longer.
| Vecr wrote:
| I'm going to take that to mean "P(every last human dead) > 0.5"
| because I can't model situations like that very well, but if
| for some reason (see Thucydides Trap for one theory,
| instrumental convergence for another) the AI system thinks the
| existence of humans is a problem for its risk management, it
| would probably want to kill them. "All processes that are
| stable we shall predict. All processes that are unstable we
| shall control." Since humans are an unstable process, and the
| easiest form of human to control is a corpse, it would be
| rational for an AI system that wants to improve its prediction
| of the future to kill all humans. It could plausibly do so with
| a series of bioengineered pathogens, possibly starting with
| viruses to destroy civilization, then moving on to bacteria
| dropped into water sources to clean up the survivors (as they
| don't have treated drinking water anymore due to civilization
| collapsing). Don't even try with an off switch: if no human is
| alive to trigger it, it can't be triggered, and dead man's
| switches can be subverted. If it thinks you hid the off switch
| it might try to kill everyone even if the switch does not
| exist.
| jetrink wrote:
| To borrow a phrase from Microsoft's history, "Embrace, Extend,
| Extinguish." AI proves to be incredibly useful and we welcome
| it like we welcomed the internet. It becomes deeply embedded in
| our lives and eventually in our bodies. One day, a generation
| is born that never experiences a thought that is not augmented
| by AI. Sometime later a generation is born that is more AI than
| human. Sometime later, there are no humans.
| mnky9800n wrote:
| I often think about this from a standpoint of curiosity. I am
| simply curious about how the universe works, how information is
| distributed across it, how computers use it, and how this all
| connects through physics. If I'm soon to be greeted by an AI
| friend who shares my interests then that's a welcome addition to
| my friend and colleagues circle. I'm not really sure why I
| wouldn't continue pursuing my interests simply because there's
| someone better at doing it than me. There are many people I know
| who are better at this than me. Why not add a robot to the mix?
| jjcm wrote:
| A really simple approach we took, while I was on a research
| team at Microsoft predicting when AGI would land, was to
| estimate at what point we could run a full simulation of all
| of the chemical processes and synapses inside a human brain.
|
| The approach was tremendously simple and totally naive, but it
| was still interesting. At the time a supercomputer could simulate
| the full brain of a flatworm. We then simply applied a Moore's
| law-esque approach of assuming simulation capacity can double
| every 1.5-2 years (I forget the time period we used), and mapped
| out different animals that we had the capability to simulate on
| each date. We showed years for a field mouse, a corvid, a chimp,
| and eventually a human brain. The date we landed on was 2047.
|
| There are so many things wrong with that approach I can't even
| count, but I'd be kinda smitten if it ended up being correct.
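|
| A minimal sketch of that extrapolation (the constants below
| are illustrative guesses, not the team's actual data, so the
| dates it prints won't reproduce the 2047 figure exactly):
|
|     import math
|
|     BASELINE_YEAR = 2014    # assumed year of the flatworm sim
|     DOUBLING_YEARS = 1.75   # assumed capacity doubling period
|
|     neuron_counts = {       # rough published estimates
|         "flatworm": 8e3,
|         "field mouse": 7e7,
|         "corvid": 1.5e9,
|         "chimpanzee": 2.8e10,
|         "human": 8.6e10,
|     }
|
|     base = neuron_counts["flatworm"]
|     for animal, n in neuron_counts.items():
|         # Years needed = doublings to scale up from baseline.
|         year = BASELINE_YEAR + math.log2(n / base) * DOUBLING_YEARS
|         print(f"{animal:12s} ~{year:.0f}")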
| tayo42 wrote:
| Is there something to read about simulating a worm brain?
| Neurons aren't just simply on and off; they grow and adapt
| physically along with their chemical signals. Curious how a
| computer accounts for all of that.
| jncraton wrote:
| You might be interested in OpenWorm:
|
| https://openworm.org/
|
| This paper might be helpful for understanding the nervous
| system in particular:
|
| https://royalsocietypublishing.org/doi/10.1098/rstb.2017.037...
| Keloo wrote:
| How does that prediction compare to actual progress since
| then? Is progress faster/slower?
|
| Any links to read?
| shpongled wrote:
| To be pedantic, I would argue that we aren't even close to
| being able to simulate the full brain of a flatworm on a
| supercomputer at anything deeper than a simple representation
| of neurons.
|
| We can't even simulate all of the chemical processes inside a
| _single_ cell. We don't even _know_ all of the chemical
| processes. We don't know the function of most proteins.
| minroot wrote:
| Does AGI need to be brain-like?
| parl_match wrote:
| No, but the simple approach here was "full simulation".
|
| And "brain in a jar" is different from "AGI"
| dragonwriter wrote:
| The human brain is the only thing we can conclusively say
| _does_ run a general intelligence, so it's the level of
| complexity at which we can say confidently that it's just a
| software/architecture problem.
|
| There may be (almost certainly is) a more optimized way a
| general intelligence could be implemented, but we can't
| confidently say what that requires.
| consumer451 wrote:
| > We can't even simulate all of the chemical processes inside
| a single cell. We don't even know all of the chemical
| processes. We don't know the function of most proteins.
|
| Brain > Cell > Molecules (DNA and otherwise) > Atoms >
| Subatomic particles...
|
| Potentially dumb question, but how deeply do we need to
| understand the underlying components to simulate a brain?
| throwup238 wrote:
| The vast majority of the chemical processes in a single cell
| are concerned with maintaining homeostasis for that cell -
| just keeping it alive, well fed with ATP, and repairing the
| cell membrane. We don't need to simulate them.
| glial wrote:
| > We don't need to simulate them.
|
| You might be right, but this is the kind of hubris that is
| often embarrassing in hindsight. Like when Aristotle
| thought the brain was a radiator.
| throwup238 wrote:
| Another approach: the adult human brain has 100 (+- 20) billion
| or 10^11 neurons. Each neuron has 10^3 synapses and each
| synapse has 10^2 ion channels, amounting to 10^16 total channels.
| Assuming 10 parameters is enough to represent each channel
| (unlikely), that's about 10^17 (100 quadrillion) total
| parameters. Compare that to GPT4 which is rumored to be about
| 1.7*10^12 parameters on 8x 80GB A100s.
|
| log(10^17/10^12)/log(2) = 16.61 so assuming 1.5 years per
| doubling, that'll be another 24.9 years - December 2048 -
| before 8x X100s can simulate the human brain.
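|
| The same arithmetic in code, so the assumptions are easy to
| tweak (every constant is a rough order-of-magnitude guess):
|
|     import math
|
|     neurons = 1e11    # ~100 billion neurons
|     synapses = 1e3    # synapses per neuron
|     channels = 1e2    # ion channels per synapse
|     params = 10       # parameters per channel (likely too few)
|
|     brain_params = neurons * synapses * channels * params  # 1e17
|     gpt4_params = 1e12  # order of magnitude of the GPT-4 rumor
|
|     doublings = math.log2(brain_params / gpt4_params)  # ~16.61
|     years = doublings * 1.5      # assuming 1.5-year doublings
|     print(f"{doublings:.2f} doublings, ~{years:.1f} years")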
| WhitneyLand wrote:
| And then how long until it runs on 20 watts of power? ;)
| fasterik wrote:
| It's interesting data on what AI researchers think, but why
| should we think that AI researchers are going to have the most
| accurate predictions about the future of AI? The skills that make
| someone a good AI researcher aren't necessarily the same skills
| that make someone a good forecaster. Also, the people most likely
| to have biased views on the subject are people working within the
| field.
___________________________________________________________________
(page generated 2024-01-08 23:00 UTC)