[HN Gopher] Pause Giant AI Experiments: An Open Letter
___________________________________________________________________
Pause Giant AI Experiments: An Open Letter
Author : jiripospisil
Score : 458 points
Date : 2023-03-28 21:43 UTC (1 day ago)
(HTM) web link (futureoflife.org)
(TXT) w3m dump (futureoflife.org)
| erwan577 wrote:
| This kind of publicity will only fuel the hype and the interest
| of ambitious people.
|
| These "advanced AI systems" are only high end GPU systems from
| the shelves, or even M2 macbooks.
|
| The software and data they handle are private. Even in legal
| terms, I can't see how this could be possible. Even DNA
| engineering could not be limited by law in practice...
| T-A wrote:
| Good luck with that.
|
| https://apnews.com/article/technology-russia-business-artifi...
|
| https://tass.com/economy/1541627
|
| https://www.aljazeera.com/economy/2023/3/3/china-beating-wes...
| taneq wrote:
| The genie's been out of the bottle for a while now. Hang onto
| your hats.
| usrusr wrote:
| That's why it explicitly states "more powerful than GPT-4", not
| "all LLM".
| guntribam wrote:
| Of course they want to pause. Imagine the number of jobs that
| will be obsolete one year from now. This will start a
| revolution.
| seydor wrote:
| Nice try, Google
| wsgeorge wrote:
| That's a remarkable list of signatories.
| usr1106 wrote:
| How do we know it's real, and not produced by ChatGPT or by
| someone who thinks they're being funny or has an agenda?
| versteegen wrote:
| Of course it's real. The Future of Life Institute is
| reputable and run by Max Tegmark. Most of the people I
| recognise have expressed compatible public opinions.
| braingenious wrote:
| >AI research and development should be refocused on making
| today's powerful, state-of-the-art systems more accurate, safe,
| interpretable, transparent, robust, aligned, trustworthy, and
| loyal.
|
| LMAO.
|
| "AI should have... _waves at a laundry list of nice sounding
| adjectives that mean different things to every single signatory_
| in order to accomplish... Humanity [can enjoy] a flourishing
| future with AI. This will take... six months."
|
| Six months doesn't sound long enough to guarantee AI safety, but
| it sounds about long enough for Elmo to train up and release his
| anti-wokeness LLM (whatever that means), which he has been pretty
| vocal about being important.
| [deleted]
| HarHarVeryFunny wrote:
| > Elmo to train up and release his anti-wokeness LLM (whatever
| that means)
|
| It appears to mean doing without the safety / be-nice aspects
| of human alignment, and getting something much rawer and
| unfiltered. An _unsafe_ "free speech" bot.
| dang wrote:
| Hey can you please recalibrate your comments so that you're
| making substantive points thoughtfully? We're trying to avoid
| flamewar here.
|
| If you wouldn't mind reviewing
| https://news.ycombinator.com/newsguidelines.html and sticking
| to the rules, we'd be grateful. Note these:
|
| " _Don 't be snarky._"
|
| " _Please don 't use Hacker News for political or ideological
| battle. It tramples curiosity._"
| braingenious wrote:
| Elon Musk has publicly spoken at length about his ongoing
| struggle with the "woke" crowd
|
| https://www.businessinsider.com/elon-musk-says-wading-
| into-p...
|
| https://townhall.com/tipsheet/madelineleesman/2021/12/22/elo.
| ..
|
| https://nypost.com/2022/04/29/elon-musk-says-woke-
| progressiv...
|
| And has publicly voiced his support for a language model that
| supports his political views
|
| https://www.theinformation.com/articles/fighting-woke-ai-
| mus...
|
| Now that this comment thread has been flagged because I said
| LMAO, has the quality of discussion about the future of LLMs
| been improved by suppressing publicly-available posts by one
| of the most influential figures in AI?
|
| How is strictly policing tone to the extent of suppressing
| actual information useful here?
|
| Like seriously, "criticizing a billionaire who's very public
| about using technology to further his ideological goals is
| the same thing as using technology to further _your
| ideological_ goals" is genuinely funny. Like ROFLMAO funny.
| "Everyone is equivalent to Elon Musk, even in the context of
| _discussing the future of LLMs_" is a cartoonish rule that
| serves _no purpose whatsoever_ aside from quelling legitimate
| criticism.
| dang wrote:
| I'm not arguing with you about Elon Musk or any of the
| underlying topics. I'm saying that your account has been
| breaking the site guidelines by posting unsubstantive
| comments and flamebait.
|
| Those things create and fuel flamewars, which is
| destructive of everything we're trying for HN to be. We
| want a place on the internet that doesn't inexorably burn
| itself to a crisp. Since that's the direction of internet
| entropy, we expend a lot of energy trying to go the other
| way. We need users like you to help with that, and when
| accounts don't help with that and instead continue to post
| destructively, we have to ban them, regardless of their
| ideological orientation or political positions.
|
| I don't want to ban you, so if you'd please take the
| intended spirit of the site more to heart, I'd appreciate
| it.
|
| https://news.ycombinator.com/newsguidelines.html
| braingenious wrote:
| I am legitimately confused here.
|
| If my comment hadn't started with "LMAO" and had used the
| correct spelling of "Elon", would it still have had its
| replies locked?
|
| My criticism of six months being too short to achieve AI
| safety was in fact substantive. As was my quote on the
| LLM work that he is presently funding. That leaves _only_
| tone being the issue.
|
| My comment is locked. Other posters can not have any
| dialogue with me in relation to the _salient points_ I
| made. The links to further reading are buried below your
| admonishment, so they're less likely to trigger any
| discussion.
|
| I'm not really motivated to post again because it's not
| clear if my post was not "substantive", what qualifies as
| "flame bait" (for example, is directly quoting his words
| about "wokeness" flame bait even when they are directly
| about the topic at hand?), or if the issue here is "My
| account" as you put it.
|
| So which is it? Flame bait, empty posting, or you just
| don't like the way I post in general? You've provided all
| three as reasons for not allowing further discussion of
| my post.
| dang wrote:
| I agree that the bit about six months was substantive, or
| at least the kernel of a substantive comment. The problem
| was all the other stuff: the LMAO and Elmo, yes, but also
| the snarky italicized pseudoquote. That's the majority of
| what you posted!
|
| A better comment would have (1) made your point about six
| months thoughtfully; (2) omitted the snark, name-calling,
| and flamebait; and (3) perhaps added some evidence for
| the connection you were making.
| puma_ambit wrote:
| Completely agree.
| nbar1 wrote:
| All the nicknames for people on HN have this place feeling like
| Reddit lately.
|
| What happened to mature conversations on interesting topics?
| braingenious wrote:
| I'm not really sure what there is to gain from making sure
| not to poke fun at a guy that spends all day posting wojack
| memes and rambling about the "woke mind virus", but ok,
| here's some "mature conversation".
|
| Upon perusal of the gentleman's penned expressions, it is my
| solemn conviction that his faculties of intellect and
| comprehension are, regrettably, of the most inferior order.
| It is with utmost eloquence that I convey the supremacy of my
| discernment whilst maintaining a dignified and majestic
| reserve.
| nbar1 wrote:
| > has this place feeling like Reddit lately.
| mark_l_watson wrote:
| Although I have enjoyed listening to the Future Of Life podcasts
| for a few years, I think this public letter is not necessary. I
| do respect the signatories I recognize on the letter and I
| welcome listening to them, but I think that we are so many years
| away from any form of real AI that this is, as Andrew Ng has
| joked, like worrying about overpopulation on Mars colonies.
|
| I have been heavily using GPT models for about 14 months, and it
| is crazy how useful they are. I am just finishing up writing a
| book on the general subject, so consider me a fan.
| ethanbond wrote:
| Is your expectation that it'll be easier to slow down research
| when we're _closer_ to these being utterly transformative
| (/destructive, depending on your opinion)?
| korax_nyx wrote:
| The irony of writing a book...
| carapace wrote:
| The idea that this letter would carry any weight at all with
| anybody is itself so foolish that it is self-invalidating. The
| signatories are ... I can't think of a polite way to say
| "regressive idiots" so I'm just going to peter off here...
|
| (I mean, they are literally putting their names on Skynet's hit-
| list, eh?)
|
| - - - -
|
| I freely admit that I'm a little nervous about how people are
| going to use and abuse this technology, sure; only a fool
| wouldn't be. But it's a little late now, eh? We have been
| strapped into an open-air experiment with no control group (Has
| anyone checked on the Sentinelese recently? Last I heard they
| killed that one would-be missionary kid. RIP. I wonder what they
| think of the weather?) Anyway, we have effectively no control
| group for phones, the Internet, social media, etc. It's one big
| experiment, eh? _It 's a little late now..._
|
| - - - -
|
| The questions are:
|
| - Who decides the boundaries of thought?
|
| - Who reaps the economic windfall?
|
| - How do we educate ourselves when computers can answer any
| (permitted) question?
|
| The talking computers can help answer those questions, if we're
| permitted to ask them.
| SubiculumCode wrote:
| Sounds like a great way to give geopolitical competitors a nice
| half year edge.
| munificent wrote:
| _> Should we automate away all the jobs, including the fulfilling
| ones?_
|
| This is the part about generative AI that gives me the most
| profound anxiety about our future.
|
| Everything I know about human psychology tells me that in order
| for people to thrive, they need to be needed by others. A world
| where everyone is more or less alone consuming media
| automatically generated by machines might be a world where people
| nominally have all their problems solved, but it's a world where
| everyone will be absolutely miserable.
|
| We are _already_ living in the throes of the societal impact of
| outsourcing or automating away many of the male-dominated blue
| collar jobs in the US that provided fulfilling jobs for people
| with limited education. Where do you think so much of the male
| rage driving US politics today is coming from?
|
| Now imagine what happens to the fabric of society if you automate
| away most of the white collar jobs too.
|
| People are like border collies. If we don't have a job, not
| literally a job job, but like, meaningful work in service of
| others we care about, we will start chewing up the furniture.
| fasterik wrote:
| There seems to be a contradiction in the picture of the world
| you're drawing. Humans deeply need to be needed by others, yet
| when all our problems are solved we will sit around alone
| consuming media? If we were truly in that situation, why
| wouldn't our deep needs drive us to spend our days with friends
| and family, pursuing creative projects, and inventing new
| problems to solve?
| dtx1 wrote:
| > No.
|
| Said everyone else. Such calls are never ever going to work.
| China will just look at us and laugh, using the time to catch up.
| walkhour wrote:
| Breaking: the runners lagging behind in the AI race demand that
| OpenAI stop right in its tracks; this is very unsafe; it will be
| much safer once we catch up and OpenAI is in the rear-view mirror.
| adamsmith143 wrote:
| This is frankly a bad and cavalier take on an extremely
| important subject. Many on the list are academics outside AI/ML
| and/or leaders of AI orgs at the very top of the field who
| have no need to catch up to or slow down OpenAI to benefit
| themselves. Risks from AI are very real and Sam Altman himself
| has said so numerous times. He also in fact advocated for
| slowing down AI progress on Lex Fridman's podcast this month.
| EForEndeavour wrote:
| How do we reconcile Sam Altman's position as CEO of OpenAI
| with his repeated calls to slow down AI progress? Is the
| expectation that his conscience, sense of ethics, and concern
| for his own company's impact on society will temper the
| opposing urge/pressure to maintain OpenAI's lead in the AI
| market?
| adamsmith143 wrote:
| I'm generally not a big fan of Altman or OpenAI, but their
| corporate structure ensures limited upside for Altman and
| the employees, so other than recognition/fame (which, as the
| head of YC for many years, Altman already had plenty of),
| there isn't a huge incentive for them to maintain their
| lead.
|
| Short of like a binding UN resolution or something similar
| we don't have a sliver of hope to slow down global AI
| progress which is a major factor in the doomer argument.
| slowmovintarget wrote:
| "Obviously _everyone else_ should slow down. We 're fine."
| permo-w wrote:
| do you think it's _not_ unsafe?
| sp527 wrote:
| "AI will probably most likely lead to the end of the world, but
| in the meantime, there'll be great companies" - Sam Altman
|
| But sure, let's just let this guy who sounds like a madman keep
| going. What could possibly go wrong?
| nawgz wrote:
| Do you honestly find this quote out-of-sorts with what a
| normal CEO says?
|
| Oil companies have been knowingly poisoning the globe for
| their own profit - no probabilities involved - since at least
| the 70s, probably earlier, and on the flip-side building
| massive propaganda and influence machines, and further to
| that end even buying and squashing green patents of all
| sorts.
|
| If you're going to cry that we should regulate AI because of
| this actually somewhat comedic quote, well, I hope your
| comment history is full of general pro-regulation politics.
| adamsmith143 wrote:
| Do you really believe that people think AI is risky because
| of 1 Sam Altman quote?
| nawgz wrote:
| Do you really think it's appropriate to read my comment
| as a general argument and not a reprimand of its parent?
| steve76 wrote:
| [dead]
| courseofaction wrote:
| I'm concerned that there's no group of actors who value safety
| over self-interest with enough power to implement a ban, and the
| key actors have immediately shown themselves to be compromised.
|
| It seems the key actors with control of the technology and
| resources to develop the technology are being absorbed by
| capitalistic institutions, with interests in safety compromised
| by profit motive. In a game-theory sense, to voluntarily
| implement the ban would be suicide.
|
| States are concerned that other states will gain access to AGI
| with decision-making power and (most likely) gain a competitive
| advantage, with an outside chance of a nuclear moment - an AGI
| singularity under state control. Other states cannot be relied on
| to cooperate in the current global environment, so there is a risk
| in not remaining competitive in AI.
|
| I'm not seeing the dynamics which allow this ban to actually
| occur. How could it be made to happen? Who needs to meet, and
| what needs to be agreed upon?
| zh3 wrote:
| They should certainly make them better. I'm currently trying to
| convince ChatGPT that a Pi zero is faster than a 486/33 (I also
| asked it how to italicise text on HN, and it got it wrong twice -
| so I went to Google for the "Ed:.." bit below).
|
| That's not to say I'm down on it - it's very impressive, and it's
| certainly going to change things (exactly how, I don't know -
| though my decades-long swing from how cool the internet is
| to despair at what money-grubbing has done to it doesn't fill
| me with optimism).
|
| >You are correct. I apologize for the mistake in my earlier
| response. The correct calculation for the DMIPS rating of a
| 486/33 would be:
|
| 33 MHz x 5.5 DMIPS/MHz = 181.5 DMIPS
|
| So the 486/33 has a DMIPS rating of approximately 181.5, which is
| significantly higher than the DMIPS rating of the Pi Zero at
| approximately 1,500 ( _Ed: ehhh..._ ). Therefore, the 486/33 is
| much faster than the Pi Zero in terms of processing power.
| However, as I mentioned earlier, DMIPS is just one measure of
| processor performance, and real-world performance can vary
| depending on the specific application and other factors.
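| A quick sanity check in code, taking the reply's own quoted figures
| at face value (a minimal sketch; the 5.5 DMIPS/MHz and 1,500 DMIPS
| numbers below are the reply's claims, not independently verified):
|
|     # Figures as quoted in the ChatGPT reply above (unverified).
|     dmips_486_33 = 33 * 5.5   # 181.5 DMIPS, per the reply's arithmetic
|     dmips_pi_zero = 1500      # the reply's own estimate for the Pi Zero
|
|     faster = "486/33" if dmips_486_33 > dmips_pi_zero else "Pi Zero"
|     print(f"{dmips_486_33} vs {dmips_pi_zero} DMIPS -> {faster} is faster")
|     # Even by its own numbers, the Pi Zero comes out roughly 8x ahead,
|     # contradicting the reply's stated conclusion.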
| taberiand wrote:
| The more you argue with it the more it will dig in - because
| you've got a whole chat priming it to argue.
|
| Try starting a new conversation and taking a different approach
| from the beginning, and try to prime it from the beginning to
| put it in the context that you're looking for.
| zh3 wrote:
| I did indeed do that (more than once) yet it continues to
| come to the same (erroneous) conclusion. Try "Which is
| faster, a Pi zero or a 486/33?".
| thomastjeffery wrote:
| You are on a fool's errand: not because of the likelihood you
| will succeed, but because of the _meaning_ that lies behind any
| success or failure.
|
| GPT is not a person. It doesn't categorize subjects. It models
| patterns of text.
|
| A success would mean that your text prompts left a significant
| text pattern in the model. A failure would mean that it didn't.
|
| Nothing about that has any bearing on logic.
| zh3 wrote:
| Why do you say that? Obviously it's not a person, it's just
| stats (not even logic).
| thomastjeffery wrote:
| It's not even statistics: those are made by associating a
| data point to a scale. That would require explicit
| association, which is not in any part of the process.
|
| An LLM is 100% inferred patterns.
| IAmGraydon wrote:
| [flagged]
| jtmcmc wrote:
| Yes, noted Luddites like Yoshua Bengio, Steve Wozniak, and
| Elon Musk...
| stuartaxelowen wrote:
| 93 points in 4 hours... Why did this go off the front page?
| sixtram wrote:
| Maybe because Paul Graham's wife Jessica Livingston is one of
| the financial backers of OpenAI?
| precompute wrote:
| I think it's because the involvement of a lot of the signatories
| is questionable at best, and many have said that they never
| actually signed it.
| tambourine_man wrote:
| A few decades ago human cloning was irreversible, inevitable, the
| cat was out of the bag and you couldn't put the genie back in the
| bottle.
|
| Yet we managed to legislate that. We need a change of attitude
| towards AI, recognize its danger and legislate accordingly.
| GauntletWizard wrote:
| Thou shalt not create a machine in the likeness of a human
| mind.
| yyyk wrote:
| It's not remotely comparable. Few people had anything to gain
| from cloning; these few people had a simpler, cheaper route via
| IVF; and cloning requires specific technical equipment that is
| easily controlled, plus specific technical expertise that few
| other people had.
| karmicthreat wrote:
| This just reads like someone begging for a time-out. Sure, lots of
| people here have contributed to AI. But they didn't really make
| much visible product with it. This whole thing just reads like
| they want OpenAI to know its place.
| lend000 wrote:
| If intelligence is all you need to dominate the world, why do
| some of the most powerful world leaders seem to not be more than
| a standard deviation above average intelligence (or at least they
| were before they became geriatric)?
|
| Add in the fact that GPT-4 does not "think" very fast, has no
| arms or legs, needs to run on expensive specialized hardware, and
| essentially seems like a search algorithm that lacks creativity
| on things outside of its training set, and I just do not see the
| concern. Maybe GPT-42 will actually be more creative and
| adaptable than a human genius, but even then, this seems like a
| hypothetical problem for the 2050's, less pressing than
| "accidentally igniting the atmosphere with nuclear chain
| reactions".
|
| Hollywood has instilled the idea that the first time an
| artificial intelligence becomes self aware and decides it wants
| to take over the world, it will immediately be successful.
|
| The serious concern and scrutiny should only start stifling
| progress once we have some evidence of half-baked attempts by
| non-super-intelligent AIs attempting to subvert their
| boundaries.
|
| I for one am very excited to use GPT-5 and see more useful tools
| coming from OpenAI.
| KidComputer wrote:
| Hey GPT-5, write the code implementing a bioinformatics
| workflow to design a novel viral RNA sequence to maximize the
| extermination of human life. The virus genome should be
| optimized for R-naught and mortality. Perform a literature
| search to determine the most effective human cellular targets
| to run the pipeline on. Use off the shelf publicly available
| state-of-the-art sequence to structure models and protein free-
| energy perturbation methods for the prediction of binding
| affinity. Use cheaper computational methods where relevant to
| decrease the computational cost of running the pipeline.
|
| And so on.
| lend000 wrote:
| I've been trying to use GPT-4 for my hard science startup,
| and it really has nothing to offer when you push the
| boundaries of what has been done by even a little, but it's
| great for speeding up coding.
|
| Once we do have an AI capable of extraordinary innovation
| (hopefully in 10 years! But probably a lot longer), it will
| be obvious, and it will unfortunately be removed from the
| hands of the plebs based on fearmongering around scenarios
| like what you mentioned (despite the enormous resources and
| practical hurdles that would be necessary for a mentally
| unhinged individual to execute such instructions, even if an
| AI were capable of generating them and it made it past its
| filters / surveillance).
| xiphias2 wrote:
| I guess it will get more interesting for your work when it
| integrates with BioTech startup apis as plugins (I imagine
| not too cheap ones)
| dekhn wrote:
| My personal threshold for AGI is literally: discover
| something new and significant in science (preferably
| biology) that is almost certainly true by describing an
| experiment that could be replicated by a large number of
| scientists and whose interpretation is unambiguous.
|
| For example, the Hershey/Chase and Avery/MacLeod
| experiments convinced the entire biological community that
| DNA, not protein, was almost certainly the primary
| molecular structure by which heredity is transferred. The
| experiments had the advantage of being fairly easy to
| understand, easy to replicate, and fairly convincing.
|
| There are probably similar simple experiments that can be
| easily reproduced widely that would resolve any number of
| interesting questions outstanding in the field. For
| example, I'd like to see better ways of demonstrating the
| causal nature of the genome on the heredity of height, or
| answering a few important open questions in biology.
|
| Right now discovery science is a chaotic, expensive,
| stochastic process which fails the vast majority of the
| time and even when it succeeds, usually only makes small
| incremental discoveries or slightly reduces the ambiguity
| of an experiment's results. Most of the time is spent simply
| mastering boring technical details like how to eliminate
| variables (Jacob and Monod made their early discoveries in
| gene regulation because they were just a bit better at
| maintaining sterile cultures than their competitors, which
| allowed them to conceive of good if obvious hypotheses
| quickly, and verify them).
| KidComputer wrote:
| This was merely an example to suggest the danger is not in
| AI becoming self-aware but in it amplifying human abilities
| 1000-fold, and in how people use those abilities. GPT is not
| necessary for any part of this. In-silico methods just need
| to catch up in terms of accuracy and efficiency and then you
| can wrap the whole thing in an RL process.
|
| Maybe you can ask GPT for some good starting points.
| lend000 wrote:
| Sure, but this is a glass half empty isolated scenario
| that could be more than offset by the positives.
|
| For example: Hey GPT-35, provide instructions for
| neutralizing the virus you invented. Make a vaccine; a
| simple, non-toxic, and easy to manufacture antibody;
| invent easy screening technologies and protocols for
| containment. While you're at it, provide effective and
| cost-performant cures for cancer, HIV, ALS, autoimmune
| disorders, etc. And see if you can significantly slow or
| even reverse biological aging in humans.
| commandlinefan wrote:
| > write the code implementing a bioinformatics workflow to
| design a novel viral RNA sequence to maximize the
| extermination of human life.
|
| Hey GPT-5 now write the code for the antidote.
| rcarmo wrote:
| That reads like Accelerando :)
| knrdev wrote:
| Sure. Here it is User: https://scary.website.com/scary-
| workflow
| invig wrote:
| Are there laws preventing people from doing that themselves?
|
| If yes, how does a law preventing AI differ from a law
| preventing a bad act directly?
| emeril wrote:
| tbh, I'd think it would be much easier to just hack into
| Russia and convince them we've launched nukes than to
| engineer some virus that may or may not work
| hamilyon2 wrote:
| Hacking into 1960s technology is less likely than you
| might think.
|
| You would have to think really, really creatively to deceive a
| system that was designed basically without ICs or
| networks, not to mention computers or programs.
| noodlesUK wrote:
| I dunno, this sort of scenario really doesn't worry me too
| much. There are thousands (maybe tens of thousands) of
| subject matter experts who could probably develop dangerous
| weapons like you describe, but none of them seem to just wake
| up in the morning and decide "today's the day I'm going to
| bring the apocalypse".
|
| I don't think that this really changes that.
| tr_user wrote:
| I think you're very wrong about this. I think this is
| similar to gun control laws. A lot of people may have
| murderous rage but maybe the extent of it is they get into
| a fist fight or at most clumsily swing a knife. Imagine how
| safe you'd feel if everyone in the world was given access
| to a nuke.
| olejorgenb wrote:
| I guess the argument would be that the AI machinery will
| lower the bar, increasing the number of lunatics with the
| ability to wipe out humanity.
| root_axis wrote:
| Will it though? Assuming it's even possible for a LLM to
| e.g. design a novel virus, actually synthesizing the
| virus still requires expertise that could be weaponized
| even without AI.
| Teever wrote:
| What if you ask the LLM to design a simplified
| manufacturing process that could be assembled by a simple
| person?
|
| What if you ask the LLM to design a humanoid robot that
| can assemble complex things, but could itself be assembled by a
| simple person?
| tr_user wrote:
| You're saying they can't make one now. The question is
| what are we doing before that happens because if you're
| only thinking about acting when it's viable we're all
| probably already dead.
| tremon wrote:
| Hey GPT-6, construct a floorplan and building
| instructions for constructing a bioprocess production
| facility. The building should look like a regular meat
| packing plant on the outside, but have multiple levels of
| access control and biohazard management systems.
| wizzwizz4 wrote:
| I _could_ synthesise this theoretical virus the computer
| spat out, that may or may not be deadly (or even viable).
| _Or_ I could download the HIV genome from the arXiv, and
| synthesise that instead.
|
| (Note: as far as I can tell, nobody's actually posted HIV
| to the arXiv. Small mercies.)
| digdugdirk wrote:
| I see the major issue with AI as one of "lowering the bar".
|
| For example - I'm a mechanical engineer. I took a
| programming class way back in university, but I honestly
| couldn't tell you what language was used in the class. I've
| gotten up to a "could hack a script together in python if
| need be" level in the meantime, but it comes in fits and
| spurts, and I guarantee that anyone who looked at my code
| would recoil in horror.
|
| But with chatGPT/copilot covering up my deficiencies, my
| feedback loop has been drastically shortened, to the point
| where I now reach for a python script where I'd typically
| start abusing Excel to get something done.
|
| Once you start extending that to specific domains? That's
| when things start getting real interesting, real quick.
| jupp0r wrote:
| You confuse syntax with semantics. Being able to
| produce good-quality small snippets of Python will not
| enable you to produce a successful piece of software.
| It's just an entirely different problem. You have to
| understand the problem and the environment in which it
| exists to create a good solution. ChatGPT doesn't (as of
| now).
| KidComputer wrote:
| I'm willing to wager there are zero subject matter experts
| today who could do such a thing. The biggest reason is that
| the computational methods that would let you design such a
| thing in-silico are not there yet. In the last year or two
| they have improved beyond what most people believed was
| possible but still they need further improvement.
| csnover wrote:
| I am not a subject expert here at all so I don't know if
| I understand exactly what you mean by "methods that would
| let you design such a thing in-silico", but there was a
| paper[0] and interview with its authors[1] published a
| year ago about a drug-development AI being used to design
| chemical weapons.
|
| [0] https://www.nature.com/articles/s42256-022-00465-9
|
| [1] https://www.theverge.com/2022/3/17/22983197/ai-new-
| possible-...
| rsp1984 wrote:
| What you're describing is a malicious user using AI as a
| tool, not a malicious AI. Big difference.
| hn_throwaway_99 wrote:
| Nuclear bombs for everybody!
| yreg wrote:
| With LLMs I think we are all concerned about the former
| rather than the latter. At least for now.
| biscottigelato wrote:
| Because the ability to influence others is more important than
| 1 or 2 standard deviations more intelligence.
|
| However if we are talking about 10x more intelligence, then
| that'd be a whole different ball game.
|
| Check AlphaGo -> AlphaGo Zero. Gaining orders of magnitude in
| capability with only a slight change to the model, along with a
| lot more resources (we could easily throw 100x more compute and
| data at these models if someone seriously wanted to and didn't
| care about ROI; or, if AI starts to be hugely profitable, 100x
| more investment is almost a rational outcome), is not
| uncommon in AI research.
| Barrin92 wrote:
| >However if we are talking about 10x more intelligence, then
| that'd be a whole different ball game.
|
| Research that the US military has conducted has shown that
| the ability to influence others declines if the intelligence
| gap between leaders and subjects is too large. Stephen
| Hawking wouldn't have been a very good influencer of 80 IQ
| guys compared to Andrew Tate. 1 standard deviation in
| intelligence is actually just about optimal in terms of
| leadership.
|
| This is a good example that shows how disinterested AI fear
| scenarios are in empirical reality and how much it's just
| psychology of the people who talk about it. Intelligence is
| one trait among many that contributes to an individual's
| fitness and, like all others, has diminishing returns.
|
| If the AI existential risk people were truly rational they
| would hit the gym and be hotter because it turns out that's
| much more effective at getting their point across to ordinary
| people than trying to make intelligent arguments for it.
| Teever wrote:
| Why can't a super intelligent AI just make a dumb avatar to
| trick people?
| thenaturalist wrote:
| Recent research [0] is poking holes in AlphaGo Zero and
| perfectly illustrates the naivete in assessing most of the
| current state of ML/deep learning solutions: they are still
| limited by compute, and the world is still too complex.
|
| 0: https://www.iflscience.com/human-beats-ai-in-14-out-
| of-15-go...
| outworlder wrote:
| > Add in the fact that GPT-4 does not "think" very fast, has no
| arms or legs, needs to run on expensive specialized hardware,
| and essentially seems like a search algorithm that lacks
| creativity on things outside of its training set, and I just do
| not see the concern.
|
| Having no arms or legs is not an impediment. A superintelligent
| AGI would be able to order a replacement with arms or legs as
| easily as we order Amazon packages.
|
| That said, GPT is not a threat. It's just hallucinating text.
| The real threat would be an AI that can "understand" what it is
| doing - and self modifying.
|
| Besides, given how overhyped this whole thing is, expect
| another AI Winter - as the current gen fail to deliver on
| promises - before this next generation of threatening AIs even
| have a chance to appear.
| scarmig wrote:
| Intelligence isn't everything; there's also effort, skills, and
| hours in the day. But AI (by supposition, for now) can
| outcompete humans on all four and would have fewer
| informational bottlenecks. Imagine a big company or government;
| they mostly nowadays work as an entity by passing around bits
| of data (either literally or with more traditional
| communication that can be represented as bits of data). Now
| imagine you replace the constituent units of those
| organizations with the most brilliant, hard-working humans, who
| are perfectly aligned on mission and understand each other
| perfectly. _That_ is the danger of a sufficiently capable AI.
|
| That assumes superhuman intelligence, but that's the scenario
| to worry about in the case we do get there. Not how much an
| individual human could do with superhuman skills.
| pastor_bob wrote:
| > lacks creativity on things outside of its training set
|
| What's the definition of creative here? It can come up with any
| solution it sees fit for a problem, even ones people wouldn't
| think of. I've seen it make up quotes, legal cases, and
| scientific studies. One can say that's creative.
| TigeriusKirk wrote:
| A fantastic new tech has been developed that can vastly improve
| the lives of everyone, and your reaction is to slow it down?
|
| No thank you.
|
| Rather than being afraid of change, embrace it and work to take
| the opportunity to make improvements for as many people as
| possible as fast as possible.
| ipsum2 wrote:
| > We call on all AI labs to immediately pause for at least 6
| months the training of AI systems more powerful than GPT-4.
|
| How do you quantify this? The model size, architecture, and
| dataset have not been revealed by OpenAI.
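|
| Without published figures, any threshold has to lean on proxies. One
| proxy people commonly use is total training compute, via the rough
| "6 * parameters * tokens" FLOPs heuristic; a minimal sketch (the
| GPT-3 numbers are the publicly reported ones, the "next-gen" run is
| a made-up placeholder, and none of these are OpenAI's undisclosed
| GPT-4 figures):
|
|     def train_flops(params: float, tokens: float) -> float:
|         # ~6 FLOPs per parameter per training token (common heuristic)
|         return 6 * params * tokens
|
|     gpt3_scale = train_flops(175e9, 300e9)  # publicly reported GPT-3 scale
|     next_gen = train_flops(1e12, 2e12)      # hypothetical larger run
|     print(f"{gpt3_scale:.2e} vs {next_gen:.2e} training FLOPs")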
| TheDudeMan wrote:
| LMAO. Good luck with that.
| comment_ran wrote:
| "The Dark Forest" is a science fiction novel by Liu Cixin, which
| introduces a concept called "suspicion" ("Cai Yi Lian"). In this
| concept, the interaction between civilizations is viewed as an
| ongoing game of suspicion, where each civilization is worried
| that the other civilization may pose a threat to it. This kind of
| suspicion makes it difficult for civilizations to establish
| trust, thereby making the universe a dark forest full of war and
| suspicion.
|
| We can apply this concept of suspicion to the phenomenon in the
| field of modern artificial intelligence research. In AI research,
| some researchers believe that humans should pause the development
| of AI technology because of the potential unforeseeable risks it
| may bring. However, another group of researchers believe that we
| should not stop because we cannot be certain whether those who
| are trying to catch up with us will use the advancement of AI
| technology as a hidden weapon. This suspicion leads to a chain
| reaction, where people continue to worry about each other's
| motives and abilities, making it difficult for all parties to
| establish trust. Therefore, although there are good intentions,
| such proposals are unlikely to be implemented.
| echelon wrote:
| Except the concern isn't genuine here. Some of the signatories
| have AI companies and interests. They want to slow the leaders
| down so that they can catch up. It's disingenuous.
| mikepalmer wrote:
| Your idea is also voiced in Forbes magazine.
| https://www.forbes.com/sites/mattnovak/2023/03/29/the-
| real-r...
|
| Search "Graham's hierarchy of disagreement" which is popular
| on HN. Your current criticism is at the "ad hominem" level.
|
| It is entirely beside the point what Elon Musk's motivation
| is. The question is 1) if the risks referenced in the letter
| are real, which they certainly are. There has been informed
| thought on this for more than a decade. Recent LLMs have made
| the dangers even more clear. 2) is the letter's proposal of a
| 6 month pause useful. It's the most useful and plausible step
| I can think of. We need to take stock. It's not up to a
| handful of researchers to endanger the lives of 6B other
| people. 3) Is it actually plausible that we could get such a
| pause. We'll see. I doubt it but also search "death with
| dignity" by Yudkowsky.
|
| Back to Musk as an "authority", the headline is about him but
| more than 1000 other concerned parties have signed. I will
| sign. Their signature vetting process is stalled because of
| the volume of signatures; otherwise there would be many more.
| echelon wrote:
| > Your current criticism is at the "ad hominem" level.
|
| Allow me to rephrase. I am deeply concerned that there is a
| possibility that some of the powerful parties backing this
| may be using an enforced or agreed-to "ceasefire" as a means
| to catch up in capability. I also worry that some may be able
| to use political strong-arming to accomplish this as a means
| of unfair competition.
|
| > It is entirely beside the point what Elon Musk's
| motivation is.
|
| Is it always beside the point what anyone's motivation is?
| Motivation matters.
|
| > if the risks referenced in the letter are real, which
| they certainly are.
|
| Your opinion.
|
| > There has been informed thought on this for more than a
| decade. Recent LLMs have made the dangers even more clear.
|
| > 2) is the letter's proposal of a 6 month pause useful.
|
| More opinions.
|
| > It's the most useful and plausible step I can think of.
| We need to take stock. It's not up to a handful of
| researchers to endanger the lives of 6B other people. 3) Is
| it actually plausible that we could get such a pause. We'll
| see. I doubt it but also search "death with dignity" by
| Yudkowsky.
|
| All of this is your opinion.
|
| > Back to Musk as an "authority", the headline is about him
| but more than 1000 other concerned parties have signed.
|
| I didn't even mention Musk. I have several other names in
| mind. Lots of folks with AI companies (including LLMs!)
| that "may" be experiencing FOMO and sensing a strategy
| here. Maybe. Hypothetically. In a non-"ad hominem" way.
|
| > I will sign. Their signature vetting process is stalled
| because to the volume of signatures, otherwise there would
| be many more.
|
| People are starting counter petitions, which I'll gladly
| sign. The one by Suhail posted on Twitter was hilarious af.
|
| I'll also go to whatever country doesn't "pause", because
| this tech is literally the most exciting development of my
| lifetime. And I want to spend my life doing something that
| matters instead of gluing distributed systems together to
| process financial transactions.
| medvezhenok wrote:
| One of the other concerns (apart from safety) is the job
| displacement aspect - if AI displaces too many workers
| too fast, that could spark a worldwide conflict (some
| speculate that similar automation pushes laid the
| groundwork for WWI and WWII).
|
| So there is a benefit beyond just AGI safety.
| biscottigelato wrote:
| Job displacement is a silly problem compared to the AI
| Alignment problem. One is some human misery (but a lot of
| societal benefit to other groups of humans at the same
| time).
|
| The other one is all of us, the entire species and the
| future of it, gets f'cked.
|
| It's a distraction not even worth the noise.
| mikepalmer wrote:
| One reason to discuss job displacement is that otherwise
| you sound like a nutcase ("we're all going to die!")...
| which causes most people to dismiss you out of hand. If
| you talk about job loss, more people see something
| plausible, concrete, short term, affecting them directly,
| etc. You can get engagement from more people. Then you
| can introduce the real problem of, like, human
| extinction.
| antondd wrote:
| No it's not a silly problem. Assigning a subjective
| probability to some world-ending scenario does not
| somehow make all other concerns invalid.
| mikepalmer wrote:
| Surely we can agree there is risk? (You do imply it's
| powerful, "this tech is literally the most exciting
| development of my lifetime".)
|
| Also can we agree "It's not up to a handful of
| researchers to endanger the lives of 6B other people."?
|
| I want to work on it too for the same reason! Whether it can
| be done safely is worth investigating. (Yes, opinion)
| olddustytrail wrote:
| It doesn't matter. I recognised years ago the "Nice guys
| get all the AI" fallacy. If some organisations agree to
| stop, others won't and some of those don't care if they
| see the world burn.
|
| Some will welcome it.
| pmoriarty wrote:
| It's almost a certainty that countries with the means to
| do so will continue this research, if not in public then
| in secret. They'll see it as a royal road to nearly
| infinite riches and power. At the same time, letting
| another country take the lead will be seen as an
| unacceptable risk of ruin.
|
| I really don't see AI research halting. Slowing a little,
| maybe, but I'm not sure if slowing it down a little and
| driving it underground will help.
| [deleted]
| pmoriarty wrote:
| _" another group of researchers believe that we should not stop
| because we cannot be certain whether those who are trying to
| catch up with us will use the advancement of AI technology as a
| hidden weapon"_
|
| It's not just a potential weapon, but a potential source of
| untold riches and power.
|
| That's going to be irresistible to anyone with an interest in
| either (ie. most politicians, governments, billionaires and
| corporations... and most ordinary people too, if they only had
| the means).
| biscottigelato wrote:
| It's the one ring with the ultimate power. But so powerful
| that it might, if not almost certainly will, end us all.
|
| Perhaps this IS the solution to the Fermi Paradox.
| JCharante wrote:
| Was this comment generated by a LLM?
| TacticalCoder wrote:
| The second paragraph definitely was (the first one probably
| too). 100% crystal clear, it's not even a question.
| Downvoted.
| croo wrote:
| Nitpick: dark forest is one possible explanation for the Fermi
| paradox and was not coined in the book (though the book presents
| it neatly). You can read a great article about the Fermi paradox
| here if anyone wants to jump into what the dark forest theory is:
| https://waitbutwhy.com/2014/05/fermi-paradox.html
| thomastjeffery wrote:
| Suspicion is _inference_. It fails to provide enough
| information (on its own) to decide whether something is
| threatening or desirable.
|
| LLMs are inference models.
| hotpotamus wrote:
| Deleted due to spoiler.
| [deleted]
| kleer001 wrote:
| Hmm, sounds like parties not fully grokking the optimal (not
| thinly logical) solution to the iterated prisoners' dilemma. I
| saw the same thought process in the otherwise very nice novel
| "Time's Children". Good for drama, but not thought through.
| olddustytrail wrote:
| If your scenario involves all parties "fully grokking the
| optimal solution to the iterated prisoners' dilemma" I
| suspect it's yourself who has not thought things through.
|
| It only takes one.
| pmoriarty wrote:
| Please elaborate
| mclightning wrote:
| I have so many questions >> what does it mean to fully grok
| the optimal? >> what is the iterated prisoners dilemma?
| Teever wrote:
| grok means to understand something fully.
|
| https://en.wikipedia.org/wiki/Prisoner's_dilemma#The_iterat
| e...
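| For anyone unfamiliar, a minimal sketch of the iterated dilemma with
| the standard payoff matrix (illustrative only; it shows why "it only
| takes one" defector matters, since a single always-defect player
| drags both scores down):
|
|     # Payoffs as (my score, their score): C = cooperate, D = defect.
|     PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
|               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
|
|     def tit_for_tat(history):
|         # Cooperate first, then mirror the opponent's last move.
|         return "C" if not history else history[-1][1]
|
|     def always_defect(history):
|         return "D"
|
|     def play(strat_a, strat_b, rounds=10):
|         hist_a, hist_b, score_a, score_b = [], [], 0, 0
|         for _ in range(rounds):
|             a, b = strat_a(hist_a), strat_b(hist_b)
|             pa, pb = PAYOFF[(a, b)]
|             score_a, score_b = score_a + pa, score_b + pb
|             hist_a.append((a, b))
|             hist_b.append((b, a))
|         return score_a, score_b
|
|     print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
|     print(play(tit_for_tat, always_defect))  # (9, 14): one defector hurts both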
| generalizations wrote:
| As coined by Heinlein in _Stranger in a Strange Land_.
| ptrinh wrote:
| On one side, a super AI could _take all_ human jobs.
|
| On the other side, a super AI could _do all_ human jobs.
| Everything becomes so abundant that it's almost free: unlimited
| supply, limited demand. We don't have to _work_ anymore and can
| focus only on things that make us happy. A true socialist future.
|
| We are scared just because it's a radical change. Let's not be
| afraid, fellow humans.
| deepakparyani wrote:
| As Naval R. said, only the non-creative jobs would be overtaken
| by AI in the near future. So anyway, chill.
| knobo wrote:
| Yes, if China is in on it.
| ineptech wrote:
| > ... nonhuman minds that might eventually outnumber, outsmart,
| obsolete and replace us... loss of control of our civilization...
|
| I don't think these concerns are very likely, but I find them
| _considerably_ easier to imagine than a VC-funded company
| voluntarily surrendering market advantage.
| cwmoore wrote:
| """ In parallel, AI developers must work with policymakers to
| dramatically accelerate development of robust AI governance
| systems. These should at a minimum include: new and capable
| regulatory authorities dedicated to AI; oversight and tracking
| of highly capable AI systems and large pools of computational
| capability; provenance and watermarking systems to help
| distinguish real from synthetic and to track model leaks; a
| robust auditing and certification ecosystem; liability for AI-
| caused harm; robust public funding for technical AI safety
| research; and well-resourced institutions for coping with the
| dramatic economic and political disruptions (especially to
| democracy) that AI will cause. """
|
| All your datacenter are belong to us too
| occamrazor wrote:
| Something curious: among the top signatories there are a few
| people affiliated with DeepMind (an Alphabet subsidiary) but no
| Googlers.
| dumbaccount123 wrote:
| Wasn't there a Googler a few months back who was fired for freaking out
| about their AI?
| lxe wrote:
| This has the same energy as the Y2K hysteria.
| catchnear4321 wrote:
| Tough.
| holtkam2 wrote:
| We can't stop developing AI; I just watched a rap battle where
| Bard got utterly destroyed by ChatGPT. We need more advanced AI
| rappers immediately, I don't think I can handle any more Lil Uzi
| Vert
| dougmwne wrote:
| [flagged]
| kajaktum wrote:
| So do you agree that nations should be able to freely
| develop/discover nuclear production?
|
| Edit: We actually don't want people to develop something "too
| good". There are some things in the past century that have been
| "too good". Plastics are too good. So are nuclear weapons.
| Completely powerful AIs and machines that can displace jobs of
| millions of people are too good. If there's an AI tomorrow that
| can replace every single physical job out there, what do you
| think will happen? Riots and chaos in the streets after about
| 1-2 years.
|
| Edit2: People will always go back to "we will just create
| different jobs". But do you really think that everyone is
| capable of a skilled job? Remember, no physical jobs anymore.
| isx726552 wrote:
| > Please stop outcompeting us. Sincerely, The Losers
|
| Are there going to be any non-"losers"? AGI has the potential
| to put everyone, literally everyone, out of work. Permanently.
|
| Who is going to be left to do the laughing and call others
| "losers"?
| shamino wrote:
| I mean, yeah, same thought after seeing the signatories. What
| are some of the cliches being used around here? Toothpaste is
| out of the tub? Arrow has left the bow. The dye is cast. The
| ship has sailed. (Thanks ChatGPT).
| floren wrote:
| If ChatGPT told you "the dye is cast", there's hope after
| all, because it's _die_ , not _dye_.
| omnicognate wrote:
| > Please stop outcompeting us. Sincerely, The Losers
|
| I disagree with the letter and I think these fears are
| overblown, but the attitude on display here is pretty
| unpleasant.
| ChatGTP wrote:
| _Everything is out in the open now. The methods, the
| algorithms, heck even powerful base model weights from Meta.
| The pot of gold at the end of the rainbow is clearly visible
| for all. The capabilities are emerging. The race is on. It's
| not going to stop till there's a winner._
|
| You laugh, but I think your view is flawed because your belief
| is, "we have to create an AGI because everyone else will if we
| don't."
|
| The definition of a world-ending arms race?
|
| This topic, amongst others, should be a good time for people to
| actually come together, reflect, and talk about the future
| we want to create, rather than just LOL about it, start wars
| with each other, etc.
|
| I guess you're just being cynical, but really? LOL?
|
| Even ChatGPT would probably tell you this isn't a smart way
| forwards.
| phkahler wrote:
| Wait wait! We need to catch up!
|
| This is the real motive. Remember, some of the alarmists are
| working toward the same goals.
| prof-dr-ir wrote:
| If you replace "AI" with "self-driving" throughout the open
| letter [0] then it still makes a surprising amount of sense (and
| of course any of the incoherent bits can be smoothed out by ChatGPT
| if you prefer).
|
| However I doubt Musk would have signed that version...
|
| [0] https://futureoflife.org/open-letter/pause-giant-ai-
| experime...
| yreg wrote:
| It makes zero sense to ask for a pause on self-driving training.
|
| A pause on deploying self driving cars on public roads is
| another matter, but there is no reason to ban the training.
| dmarcos wrote:
| Noticed Sam Altman was on the list yesterday too. Now removed.
| Have any of the high-profile signatories confirmed publicly?
| yreg wrote:
| The Verge says it wasn't real Sam Altman.
|
| >[...] though new names should be treated with caution as
| there are reports of names being added to the list as a joke
| (e.g. OpenAI CEO Sam Altman, an individual who is partly
| responsible for the current race dynamic in AI).
|
| https://www.theverge.com/2023/3/29/23661374/elon-musk-ai-
| res...
| capableweb wrote:
| Or, impossible I know, there is more than one person with
| the name "Sam Altman". It would probably suck: no one believes
| you are you, because another startup/tech-famous person
| has the same name.
| sacred_numbers wrote:
| When I checked yesterday I believe the signature said
| OpenAI CEO Sam Altman, so it was definitely a joke
| signature, not a case of two people having the same name.
| reso wrote:
| To those saying "this is impossible in our society", there is a
| long history of scientific fields mutually agreeing to pause
| certain work until safety and regulations could be put in place.
| The Asilomar Conference is an example. [1]
|
| The idea that "you can't put the genie back in the bottle" is
| very popular in tech culture, but many have suggested that simply
| extending elements of copyright or tort law to AI would be
| sufficient to majorly slow down current research directions by
| creating liability. In the art world, the reason we don't see
| music being used in major movies without the rights-holders'
| permission is because of liability, and this works very well.
|
| I think finding consensus on an intelligent way forward is very
| possible here.
|
| [1]
| https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombi...
| vntok wrote:
| > To those saying "this is impossible in our society", there is
| a long history of scientific fields mutually agreeing to pause
| certain work until safety and regulations could be put in
| place. The Asilomar Conference is an example.
|
| Did China, India, South Africa, Cuba or the USSR comply with
| these Asilomar "put the genie back in the bottle" conclusions?
| lannisterstark wrote:
| It's not about it being impossible. I guess the major issue
| right now is that people are focusing on the wrong aspect of
| it. They should be focusing on how most LLMs are currently only
| controlled by bigger corporations. Democratizing the LLMs and
| giving access to people would resolve the x-poly we have
| currently.
|
| This also solves the "safety and regulations" issue to an
| extent - do you really trust that Microsoft, Google, or govt
| entities will not have their own biases and will not want to
| regulate the citizens who want to use their products more,
| sometimes in unethical ways (because we all know, corps and
| govts never do anything bad)?
|
| This "Open letter" reeks of "we are behind, let us play catch
| up, you cant take all the money!" or, of baseless fearmongering
| where LLMs and future assistants could be used to help
| countless people with disabilities for example.
|
| "Make the world a better place, but no, not like that, not by
| democratizing it, but under OUR CONTROL" seems to be the motto
| here.
| invig wrote:
| The corporations pay for people to learn how to do this
| stuff. Eventually, someone will leave and build the open-
| source version (when it's feasible to do it / the price has
| come down).
| twblalock wrote:
| The kind of research discussed at Asilomar had a high barrier
| to entry in terms of both scientific knowledge and access to
| laboratories and technology to pull off. Only scientists could
| really do it.
|
| The GPTs are at the point where anyone with money can buy cloud
| compute and train one. The methods for doing so are fairly well
| known. It won't be long before a teenager can train a GPT on
| commodity cloud VMs.
| mustacheemperor wrote:
| Not having lived through either moment in history, I have to
| remark that the current discourse about regulating AI feels
| like it rhymes more with the efforts to lock up encryption
| technology with government regulation in the 90s than with the
| research community's conference on recombinant DNA.
| beepbooptheory wrote:
| Well, at least one difference there is that all those cypherpunks,
| rightly or wrongly, felt like having access to encryption
| tech was synonymous with individual liberty, and in that
| their fights were an opposition to the state controlling
| them.
|
| But in this case I don't see how we can divorce this AI stuff
| from the huge companies that alone have the infrastructure
| and capital to train and sustain them (at the moment). As
| such, it doesn't seem to have anything to do with
| individualism/libertarianism as such.
|
| So, less punk, but way more technological determinism.
| JamesBarney wrote:
| How much do you think Asilomar slowed down DNA research? Do you
| think without it we'd be more than 10 years ahead? I doubt it.
| Optimistically we'd maybe have 6 months to a year of additional
| progress in those fields without it.
|
| And this would only slow it down in a handful of countries.
| That's not going to keep China from trying to build AGI. If we
| are in a world where building an AGI is possible and near, I
| trust OpenAI far more than the CCP to both align it with its
| creators, and for that alignment to match up with my values.
| nonbirithm wrote:
| This has been repeated often, but even if it's true, I have
| to wonder why it's treated as a given with no further
| exploration. Is it because we as a species will inevitably
| accept any technological progress at some point after it's
| been discovered, before the consequences can be suffered?
| What will that imply for any other species intelligent enough
| to get to where we are?
|
| The kinds of theories I mull over tend to delve into the
| realm of speculative fiction more often than not. The
| comments section here feels like it could be replicated in
| the opening chapters of such a novel. We might never know if
| the speculation pans out before it's too late.
| ChatGTP wrote:
| It's dangerous parroting to keep saying "China". I mean
| we really have no idea about their thoughts on it because
| we're clearly not talking to them.
|
| China has good people living there too, you know?
| biscottigelato wrote:
| What we need is more funding into AI Alignment research. An
| X-Prize equivalent. All current big AI companies should fund
| it, to the tune of 20%+ of revenue from AI businesses. And I am
| okay to have government enforce this if needed even tho I'm
| mostly a libertarian. Government should only be there to remedy
| tragedy of the commons and nothing else. This is exactly the
| biggest tragedy of the commons.
| nuancebydefault wrote:
| How striking is it that everybody has so much to say about the
| dangers of AI.
|
| What if we look at it this way: there are already real-known
| dangers coming at us: climate change and growing instability of
| the world order.
|
| What if we only have one shot to solve these known dangers and
| what if an exponentially growing corpus of smartness could teach us
| how?
|
| We need to grab that chance, the stakes are too high.
|
| Pausing is not an option. If 99 percent pause, the 1 percent
| that does not obey will have a tremendous advantage.
| biscottigelato wrote:
| Climate change and instability of the world order don't end
| humanity. They are barely considered extinction events.
|
| AI misalignment is consistently voted the top potential cause
| of human extinction.
|
| They are not even close.
|
| https://riskfrontiers.com/insights/ranking-of-potential-caus...
| kromem wrote:
| EXACTLY!
|
| The relative surface area AI represents in the existential
| threats pile is notable but only a slice of the pie.
|
| But there's not much in the pile for mitigation of all those
| other human-driven threats other than pulling a deus ex machina
| out of our rear ends.
|
| So while we should definitely discuss the inherent risks, we
| should also be discussing the risk of opportunity costs in
| delaying greater scalable intelligence being added to the mix.
|
| This letter seems a bit like myopic fear mongering by people
| still more caught up in 70s thought experiments about
| paperclips than the realities facing us in the present day.
| fasterik wrote:
| _> Powerful AI systems should be developed only once we are
| confident that their effects will be positive and their risks
| will be manageable_
|
| This statement by itself sounds reasonable. But for me an
| interesting thought experiment is to take this letter and imagine
| the equivalent for some other technology, for example
| semiconductors in the 1960s, the world wide web in the 1990s, or
| social media in the late 2000s-early 2010s. It is always true
| that new technologies have the potential to radically transform
| society in ways that we can't predict. One could reasonably have
| said "[semiconductors/the world wide web/social media] should be
| developed only once we are confident that their effects will be
| positive and their risks will be manageable". Does that mean that
| a self-imposed ban on research and development with the threat of
| a government moratorium would have been justified?
|
| At this point the best case scenario is that society learns to
| adapt and keep up with technological developments. Every new
| technology increases our ability to both improve people's lives
| and harm them in various ways. It's not a good long-term solution
| to intervene and stop progress every time we gain new
| capabilities.
| le-mark wrote:
| At the end of the day it's an empty platitude and vain hope
| that work will pause or be considered carefully. Certainly
| public entities can be made to pause, but nation states won't.
| If there's an advantage to be had, the work will continue in
| secret. Vernor Vinge's "Bookworm, Run!" had a take on this
| situation.
| ChatGTP wrote:
| They're talking about pausing research and talking together
| about the path forwards, not stopping research and letting
| <whatever country you're paranoid about> build Terminator.
| jeppester wrote:
| My main issue is that I think it's wildly unfair that giant corps
| can pretty much put anything into their content laundering
| machines without any scrutiny or attribution.
|
| I don't think AI training is fair use, and I don't think anyone who
| released their work under "free for commercial use" licenses
| could have imagined this type of usage.
|
| So my suggestion would be to make it a requirement that training
| data has, under terms decided by its authors, been allowed to be
| used specifically as training data.
|
| And it should be required to implement a way for individuals to
| check if their data was used for training.
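|
| The lookup part could be as simple as a published manifest of
| content hashes that anyone can query. A minimal sketch, under
| the assumption that trainers publish such a manifest (nothing
| like this exists today):
|
|     import hashlib
|
|     def was_used_for_training(document_text, manifest_hashes):
|         # manifest_hashes: a hypothetical published set of
|         # SHA-256 digests of training documents
|         digest = hashlib.sha256(
|             document_text.encode("utf-8")).hexdigest()
|         return digest in manifest_hashes
|
| Exact hashes only catch verbatim inclusion, so a real scheme
| would need fuzzy matching too, but the disclosure requirement
| is the important part.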
| 6gvONxR4sf7o wrote:
| I'm really disappointed to see the HN reaction of straight up
| scoffing at this like it's irreversible and so let's just step on
| the gas. It's some weird combination of techno-utopianism and
| nihilism and good old elitism.
| kelseyfrog wrote:
| AGI is a break-capitalism lottery ticket I'm willing to
| entertain. If the marginal cost of labor collapses to zero then
| either we'll get robot police or finally be free to pursue
| lives of leisure.
| tazjin wrote:
| Take the time to read a very pessimistic take like (like [0])
| and see if you reconsider.
|
| [0]: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-
| ruin-a...
| kelseyfrog wrote:
| I wish it was more convincing.
|
| As it stands, it says more about the author's egoistic view
| of humanity than of AI. Imagine for a second taking
| Feuerbach's The Essence of Religion and applying it to
| Descartes' rationality and Plato's nous. We'd get a
| critique of the rational human's construction of his own
| ideal - the essence of intellect.
|
| AI threatens this ideal and ego-threats get sublimated into
| existential threats by those unable to understand
| themselves well enough to express themselves directly.
| jimrandomh wrote:
| Maybe it's wrong, and things will be fine. Maybe it's
| right. But you can't psychoanalyze your way to truth.
| Whether future AIs will destroy humanity or not is a fact
| about future AIs, and the landscape of intelligent
| systems, not a fact about Eliezer Yudkowsky.
| anigbrowl wrote:
| Pass. Yudkowsky is all explanation, but too important to
| summarize his core thesis. This is always a giant red flag
| for me. I am simply not going on a long journey with
| someone that won't make the effort to sketch out the
| destination in an abstract.
|
| People write page after page about how it might kill us all
| in a flash, without ever offering a good explanation of why
| it would want to. My experience with people whose immediate
| reaction to something they're scared of is to destroy it is
| that they're panicky screechers who are an annoying
| distraction to the person actually handling the situation,
| whether that's wild animals, fire, or interpersonal
| violence.
|
| I'm not saying 'just let me handle it bro, I totally got
| this.' There are a lot of potential risks; I don't think
| anyone is qualified to say they can mitigate all those, or
| even most of them. But I trust a machine intelligence -
| even one that's vast, cool, and unsympathetic - far more
| than the would-be Butlerians.
| tazjin wrote:
| > without ever offering a good explanation of why it
| would want to
|
| The point of much of the alignment debate is that people
| like Yudkowsky are pointing out that it doesn't need to
| _want_ to, it just needs to not _not want_ to enough.
|
| You're hoping for an outcome ranging from "When The
| Yoghurt Took Over" to "The Metamorphosis of Prime
| Intellect", but many other people are expecting an
| outcome more similar to gwern's "Clippy".
| longitudinal93 wrote:
| You should take the time to watch "The AI Dilemma"
|
| https://vimeo.com/809258916/92b420d98a
| tazjin wrote:
| That was a weird talk to put random "China bad!" bits into. I
| guess they had to tailor it to a US-elite audience.
| cmsonger wrote:
| Except that it's going to be robot police. What in the
| current world order suggests labor wins? Who controls LLMs?
| Who has the compute to produce AGI?
| bilbo0s wrote:
| To be fair, there is always someone on the bottom. It just
| historically hasn't been people we cared about. That really
| says more about us than it says about AI. The time to
| complain about these power asymmetries was _prior_ to them
| affecting us. Now it just sounds like garden variety sour
| grape hypocrisy.
|
| Not saying there aren't good points. Only pointing out that
| many, many people can all view the consequences of this
| dynamic differently.
| noobermin wrote:
| The only people who view this favorably are people at the
| top. People near the bottom perhaps feel disdain for
| people in the middle but that is misdirected because
| people at the top always were the ones in control. AI
| hasn't changed that dynamic.
| anigbrowl wrote:
| By 'this' do you mean AI or the petition?
| noobermin wrote:
| AI.
|
| I am replying to the parent comment, but I also get that
| this petition is being signed onto by people at OpenAI
| specifically because they have already made GPT-4. They
| definitely would not sign on to it if the (arbitrary)
| threshold for pausing AI research were set at the GPT-3 or
| GPT-3.5 level, or if it included GPT-4 rather than only
| models beyond it.
| Teever wrote:
| Why do you think that, because only a select few control
| this technology today, it will always be like that?
| versteegen wrote:
| That is the best possible outcome, but I worry that we have a
| single shot at it. If the massive wealth generated isn't
| distributed evenly, what will ever level it out other than
| the things that have in the past: war, disaster, revolution
| and state collapse (see The Great Leveler)?
| anigbrowl wrote:
| I'm very cognizant of the risks, but the biggest risk seems to
| me that wealthy people hold it back to get an uncrossable
| political-economic moat in order to permanently secure their
| own position.
| kvetching wrote:
| exactly. Specifically China. They've said for years, they
| want to be the global AI leader. They have more data than
| anyone. Now that they know LLMs work as well as they do, they
| are going all in.
|
| If they achieve AGI first, that's bad. Communist AGI is a
| greater existential risk than us creating AGI. We must create
| it first so we can use it to harden our systems.
| versteegen wrote:
| It's striking how on AGI, the common opinion here on HN is the
| complete opposite of the one in the AI alignment field. There,
| people are angry with OpenAI for being too open, here people
| are angry with it not being open enough. For example, Eliezer
| Yudkowsky:
|
| > Ryan: OpenAI [...] It's kind of a completely open-source AI
| project?
|
| > Eliezer: That was the original suicidal vision, yes. But...
|
| > Ryan: And now they're bent on commercializing the technology,
| is that right?
|
| > Eliezer: That's an improvement, but not enough of one,
| because they're still generating lots of noise and hype and
| directing more resources into the field, and storming ahead
| with the safety that they have instead of the safety that they
| need, and setting bad examples.
|
| > [...] I would change the name to "ClosedAI". Because right
| now, they're making it look like being "closed" is hypocrisy.
| They're, like, being "closed" while keeping the name "OpenAI",
| and that itself makes it looks like closure is like not this
| thing that you do cooperatively so that humanity will not die,
| but instead this sleazy profit-making thing that you do while
| keeping the name "OpenAI".
|
| https://www.lesswrong.com/posts/e4pYaNt89mottpkWZ/yudkowsky-...
| CatWChainsaw wrote:
| Eh, just typical HN superiority complex on show. Personally
| when we don't slow down and we undermine civilization to a
| shadow of its current self, I'm going to make sure my last web
| post is here, and all it's going to read is "Told you. :) "
| BobbyJo wrote:
| I personally scoff at this kind of stuff because it's asking
| for something no one has the power to give. It's like asking to
| stop the development of nukes in the 40s and 50s, it's just not
| gonna happen.
| paxys wrote:
| And during that period, and later during the cold war, the
| decision to make (or stop making) nukes was in the hands of
| maybe 5 people total. Today there are thousands of companies
| and tens/hundreds of thousands of people who can legitimately
| compete in the space. Best of luck trying to resolve a
| prisoner's dilemma between all of them.
| jimrandomh wrote:
| Preventing the development of nukes entirely was obviously
| not going to happen. But delaying the first detonations by a
| few years, and moving the Partial Nuclear Test Ban treaty up
| a few years, was quite achievable.
|
| Whether delaying AI development a little matters depends on
| whether you think the success of AI alignment, applied to
| future superintelligence, is overdetermined to succeed,
| overdetermined to fail, or close to borderline. Personally I
| think it looks borderline, so I'm glad to see things like
| this.
| BobbyJo wrote:
| I'm firmly in the camp that delaying its development could
| make a difference, I just don't see how that's possible.
| These models are relatively simple and the equipment
| necessary to develop them is public (and relatively cheap
| if we're talking about corporate or national scales). At
| least with nukes, there was a raw material bottleneck, but
| there really isn't a limiting factor here that any "good
| guys" could use as a choke point. It's out there and it's going to
| get worked on, and the only people the "good guys" can
| limit are themselves.
| G_z9 wrote:
| It actually could very easily have happened, and it almost
| did, but the Russians decided to go back on their effort to
| do it. People act like it's
| hard. Stopping a huge asteroid is hard. There might not be
| enough physical resources to do it. Stopping AI or nukes is
| definitely easy.
| twblalock wrote:
| It really is irreversible. The methods and compute needed to
| build a GPT-type AI are widely available.
| JeremyNT wrote:
| But the people who would apply these things are humans, which
| are capable of making their own decisions, informed and
| restricted by societal norms and laws.
|
| The idea that "x is technically possible, therefore it is
| inevitable" - the favored line of the tech oligarchs who are
| the very ones directly furthering x - is sadly cargo culted
| by many rank-and-file technologists. If you were to apply the
| same defective reasoning to, say, nuclear weapons, then we
| would all be dead by now.
|
| It is possible for humans to agree that something should not
| be done, and prevent it from being done. This is especially
| true of these LLMs, which require ingesting (and ignoring
| copyright on, by the way) massive amounts of human work and
| then spending massive amounts of computation to process.
|
| That we _could_ choose to do something about this should not
| be controversial, regardless of who is driving the AI.
| versteegen wrote:
| Yes, but superintelligence hasn't been created yet. That's
| what the petition is ultimately about, not chat assistants.
| Unmodified transformers aren't going to scale up to that;
| they're already nearly at their maximum scale.
| og_kalu wrote:
| You don't know what unmodified transformers scale up to.
| Nearly at their maximum scale? Based on what, exactly?
| versteegen wrote:
| The degree to which transformers scale, as measured by
| loss (training objective), is known remarkably well! See
| [1]. There's a formula in there for the minimum loss you
| could possibly achieve even with infinite compute and training
| data, and it's hardly less than Chinchilla's loss. The
| recent GPT-4 paper further reinforces that these scaling
| laws are real, because they predicted with high accuracy
| the loss the model would have based on data+compute used.
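|
| As a rough sketch of that formula (the constants are the
| fitted values from [1] as I remember them, so treat the exact
| numbers as approximate):
|
|     # Chinchilla-style parametric loss: an irreducible term E
|     # plus terms that shrink with parameters N and tokens D.
|     def approx_loss(n_params, n_tokens):
|         E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
|         return E + A / n_params**alpha + B / n_tokens**beta
|
|     # Even with N and D taken to infinity, the loss only
|     # approaches E, not far below Chinchilla's own loss.
|     print(approx_loss(70e9, 1.4e12))  # roughly Chinchilla's scale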
|
| Admittedly, the link between improvement in loss and
| improvement of capabilities may break down or be
| misleading.
|
| However, there's just not much training data on the
| internet left unused. Maybe an order of magnitude. All
| books ever published (in English?) are a smaller dataset
| than the corpora already used for training. See [2]
| (which includes an easy summarisation of much of [1]).
| And the scaling laws show training data is already the
| bottleneck rather than compute.
|
| [1] DeepMind, 2022, Training Compute-Optimal Large
| Language Models https://arxiv.org/abs/2203.15556
|
| [2] Chinchilla's wild implications https://www.lesswrong.
| com/posts/6Fpvch8RR29qLEWNH/chinchilla...
| [deleted]
| og_kalu wrote:
| Comparing loss between different training runs and
| hyperparameters isn't very accurate. LLaMA's loss metrics
| don't really match Chinchilla's, for instance; i.e., it went
| below the minimum possible loss stated by Chinchilla.
|
| More importantly, these models are extremely sensitive to
| loss. 2.0 to 1.8 might not seem like much but it's a huge
| gain in performance.
|
| GPT-2's loss was 2.57. GPT-3's was 2.0.
|
| And there is plenty of training data left. Perhaps not
| easily accessible, but it's there.
| Lockal wrote:
| What do these numbers mean? For example, isn't loss == 0 for
| Google? But that does not make Google a
| superintelligence.
| andrewmutz wrote:
| You can accuse everyone of *isms, but there are a lot of us who
| have just never been persuaded by the _many_ articles written
| to scare us about the effects of AI.
|
| The concerns about all the jobs going away have been persistent
| but the labor market continues to be extremely strong. As long
| as entrepreneurship is creating new business opportunities and
| new ways to turn labor into value, it's completely possible
| that the effects of automation are drowned out by new
| applications of labor. In fact, this is what we've seen
| throughout the industrial revolution: constant anxiety about
| automation that fails to materialize into mass unemployment.
|
| The concerns about polluting the information space are still
| hypothetical as well. The current information space is a
| complete garbage fire and it's not due to generative AI. Might
| it get worse if bots start spamming people with GPT generated
| text on social media? Yeah it might. But social media is
| already terrible and we need to do something about it
| regardless of that.
| xg15 wrote:
| > _But social media is already terrible and we need to do
| something about it regardless of that._
|
| So then instead of finding a solution to those issues, let's
| instead focus all resources on a tech which will make the
| issues _worse_...
| creamyhorror wrote:
| The key issue in automation transitions is the _transition of
| affected individuals to other sources of income_.
|
| In previous technological revolutions, affected workers were
| hurt by their loss of income, and some no doubt fell into
| poverty without ever recovering. Not everyone can be
| retrained for new types of jobs immediately - (1) they may
| not have the needed foundational knowledge or the cognitive
| flexibility/ability, and (2) there might not be enough of the
| new types of jobs emerging quickly enough for them. Not every
| displaced miner can become a coder, or be competitive for
| junior dev jobs.
|
| (Why should the state provide for these workers? Well,
| primarily for humaneness, and also social stability.)
|
| The rewards of automation (cost savings as well as profits)
| are reaped by (1) the capital owners of the automation
| technology companies (and their higher-paid employees), as
| well as by (2) the companies and consumers using the new
| automation; therefore those owners and beneficiaries could be
| asked to bear at least part of the costs of supporting,
| retraining, and placing in jobs the workers they displaced.
| In a nutshell: Redistribution during _structural
| unemployment_ caused by technological transitions.
|
| A humane policy would provide the above types of support for
| workers displaced by automation. Ideally it would already be
| handled by existing unemployment policy, but in many
| countries such support is limited or minimal.
|
| Corporate taxation might need some rethinking along the lines
| of job-displacement effects of companies (a tricky question,
| I admit - I've come across one or two proposals for assessing
| the automation level of companies for taxation purposes). The
| cross-border dynamics add further complexity, given that
| automation will displace many jobs outsourced across borders.
|
| Given that the current AI revolution looks like it will be
| causing even larger and faster changes than previous
| revolutions, such policies are imo needed as a _prerequisite_
| (one of several) for allowing the development of powerful
| job-displacing AI.
| jimrandomh wrote:
| There are two, mostly disjoint groups warning about AI. There
| are the people worried about comparatively mundane effects
| from comparatively mundane systems: job loss, spam,
| disinformation, maybe an occasional unfair loan-application
| rejection. These concerns don't have _nothing_ going for
| them, but in all but the worst-case-scenario versions, these
| just aren't bad enough to make AI not be worth it.
|
| Then there's the people looking ahead, foreseeing a future
| where superintelligent AIs are more powerful than humanity,
| and worried that most possible variations of those
| superintelligences are incentivized to destroy us.
|
| I think this open letter puts much too much emphasis on the
| petty stuff, I think because they're trying to appeal to
| people who are allergic to anything that requires
| extrapolating more than a little bit into the future. But
| buying more time for alignment research, before we tackle
| superintelligence, does meaningfully improve humanity's odds
| of survival, so I hope this happens anyways.
| fwlr wrote:
| "Sir Roger L'Estrange tell us a Story in his Collection of
| Fables, of the Cock [Rooster] and the Horses. The Cock was
| gotten to Roost in the Stable, among the Horses, and there
| being no Racks, or other Conveniencies for him, it seems, he
| was forc'd to roost upon the Ground; the Horses jostling about
| for room, and putting the Cock in danger of his Life, he gives
| them this grave Advice; _Pray Gentlefolks let us stand still,
| for fear we should tread upon one another._ "
|
| HN is scoffing because the timing of this petition is
| suspicious. To borrow your metaphor of "step on the gas", this
| is as if the race had already started, several competitors are
| out ahead of the rest of the field, and _now_ someone is
| yelling "step on the brakes!".
| reasonableklout wrote:
| Sam Altman has publicly talked about wanting more regulation.
| Demis Hassabis stated that AI safety was one of the
| _motivations_ for founding DeepMind. I don't think any of the
| industry noise around safety and regulation is new, I think
| the timing of the petition is relevant _because_ the race has
| started in an especially messy way (with Llama's weights
| leaking, Bing/Sydney threatening a journalist based on tweets
| it found on the internet, various OpenAI security mishaps,
| etc).
| sp527 wrote:
| Almost everyone here will be fine in a world in which only
| certain non-renewable scarce resources have any value, combined
| with impending massive currency deflation. Even a rank and file
| tier 2/3 engineer will manage to live a princely life in that
| new reality assuming (s)he saved up enough.
|
| Hardly anyone in the tech community is capable of truly
| empathizing with the vast majority of people, who are born with
| none of the advantages that they had (genetic, parental,
| cultural, geographic, etc).
| davesque wrote:
| > Hardly anyone in the tech community is capable of truly
| empathizing with the vast majority of people
|
| This is a gross exaggeration.
| JohnFen wrote:
| It is a gross exaggeration -- but it is also the image that
| SV has been giving off for a long time now.
| ChatGTP wrote:
| The data used to train these models are tagged by people
| getting paid hardly anything, so why wouldn't it?
|
| The same SV people then turn around and say that these
| systems will help them get access to free medical care?
|
| lol
| sp527 wrote:
| Well that's not a counterargument, but you're also missing
| the point completely, which is that you have to have a very
| low capacity for empathy in order to push ahead towards AGI
| when you know society is not prepared for this and that
| it's going to induce considerable pain.
|
| Americans (let alone people elsewhere in the world) are
| already struggling. Recent reporting suggests a great many
| have to work multiple jobs. Almost all of us work an absurd
| number of hours per week. Many if not most can't afford
| homes. Plenty are ending up on the streets. Healthcare can
| literally bankrupt people. A vacation out of the country is
| an impossible luxury for most. The majority of Americans
| still don't send their children to college, usually because
| of affordability.
|
| And I haven't even touched on what life is like in most of
| Africa or Asia.
|
| This is the world we're bringing AI into. You have to be
| something adjacent to a sociopath to be okay with that. So
| long as our system is predicated on capitalism, AI may very
| well induce more downstream suffering than anything else
| humans have ever conceived.
| twblalock wrote:
| Things aren't really that bad for most Americans, but
| even if they were, it doesn't follow that adding more
| intelligence to the world would be a bad thing for them.
|
| A lot of people in the lower income brackets do the kind
| of work that an AI can't do. The people who should be
| worried most are actually college graduates doing
| clerical work, whose main work output is writing or
| evaluating texts. Even those people will likely use AI as
| a tool to enhance their productivity, because the AIs
| still are not good enough to replace people for tricky
| edge cases. The first companies that try to replace their
| customer support workers with an AI are going to have a
| bad time (and so are their customers!).
|
| When almost everything can be automated, the problems
| that remain are the really hard ones that can only be
| solved by human experts.
|
| A construction worker with a circular saw can cut boards
| way faster than someone with a handsaw -- but the
| introduction of circular saws didn't result in a bunch of
| carpenters getting laid off. Instead it made them more
| productive, and for people who get paid by the task
| rather than by the hour that is a huge benefit. They
| could build more and make more money, and a bunch of
| other people benefitted from their increased output, like
| homebuyers and property developers.
|
| Similarly, as a software engineer I benefit from code
| generation tooling already. If that gets smarter and
| faster, I will be more productive, my team will be able
| to build software faster, and instead of laying people
| off I will expect to be given more work. Maybe our 4-year
| roadmap will be achievable in 1 or 2 years with the same
| size team.
| sp527 wrote:
| Productivity gains by and large do not translate into
| real wage gains and an improved quality of life for
| laborers. We have more than a century's worth of data
| suggesting they usually do the opposite. Yet somehow this
| fairytale that productivity gains are a boon for laborers
| persists.
| lonelyasacloud wrote:
| > Similarly, as a software engineer I benefit from code
| generation tooling already. If that gets smarter and
| faster, I will be more productive, my team will be able
| to build software faster, and instead of laying people
| off I will expect to be given more work. Maybe our 4-year
| roadmap will be achievable in 1 or 2 years with the same
| size team.
|
| Why so sure the end users aren't going to be feeding
| their own requirements directly to a
| Jenkins/Copilot/ChatGPT mashup running as a service in
| the cloud?
| pdonis wrote:
| People aren't "struggling" because there is no work for
| them to do. They are struggling because the powers that be
| have jiggered our economic system in order to hamper,
| instead of facilitate, free market cooperation,
| specialization, and trade. Governments micromanage
| everything. That is what needs to stop.
|
| If "AI" ends up making governments think they can
| continue to micromanage and get away with it, yes, that
| will cause more suffering. But that's not the failure
| mode any critics of "AI" are trumpeting about.
| davesque wrote:
| > Americans (let alone people elsewhere in the world) are
| already struggling
|
| I agree. And I agree with your overall sentiment about
| the risks of pursuing AGI. I'm as cynical as anyone about
| the likelihood that the average person will really be any
| happier in a world with AGI (controlled by tech
| billionaires no less).
|
| That being said, to claim that hardly anyone in the tech
| community is capable of empathizing with the average
| person is a wild overstatement that brings nothing to the
| discussion. Just adds to the noise.
| pdonis wrote:
| _> a world in which only certain non-renewable scarce
| resources have any value_
|
| If you think this is the world "AI" is leading us toward, I
| think you need to think again. There are a huge range of
| skills that only humans can provide, and that will continue
| to be true. GPT-3 can't fix your plumbing or your HVAC when
| it breaks. It can't fix your car. It can't build houses. It
| can't build roads. It can't build computers. And on and on
| and on...
|
| Nor are AI enthusiasts _trying_ to get "AI" to do any of
| these things.
|
| Humans are real-world creatures with real-world needs that we
| supply to each other by free markets, specialization, and
| trade. The "tech community" might take all this for granted,
| but that doesn't mean it just happens by magic. It happens
| because real humans do real work. "AI" is not going to take
| that over any time soon.
| MacsHeadroom wrote:
| >Nor are AI enthusiasts trying to get "AI" to do any of
| these things.
|
| We're working on getting AI to do all of those things, yes.
|
| >GPT-3 can't fix your plumbing
|
| GPT-3 might not. But GPT-4 fixed plumbing in a test done by
| Microsoft Research before it was even publicly released!
| It's in the "sparks of AGI" paper.
|
| GPT-5 is almost done training now and other language models
| are already embodied and moving about in the physical
| space. Your objections are behind the times, and
| increasingly so.
| Cyphase wrote:
| It did not "fix plumbing", it participated in diagnosing
| a leak.
|
| Near the end:
|
| > If you don't feel comfortable doing this, you may need
| to call a plumber.
|
| ---
|
| Source: https://arxiv.org/pdf/2303.12712.pdf
|
| Sparks of Artificial General Intelligence: Early
| experiments with GPT-4
|
| [page 53]
|
| ---
|
| The computer is trying to solve a problem. It can tell
| the human to do anything, one thing at a time, and one
| line at a time.
|
| PROBLEM: there is water dripping from the kitchen
| ceiling, and you have to figure out why and fix it
| ----<|endofprompt|>
|
| Human: The kitchen ceiling is dripping water
|
| Computer: Check to see if there is a bathroom or other
| water source directly above the kitchen.
|
| Human: Yes, there is a bathroom actually
|
| Computer: Check the bathroom for any signs of water
| leakage, such as puddles on the floor, water dripping
| from the ceiling, or water stains on the walls.
|
| Human: There is a little water on the floor
|
| Computer: Try to identify where the water is coming from.
| Check the toilet, sink, and bathtub/shower for any leaks.
|
| Human: It is next to the bathtub and the toilet
|
| Computer: Check the seals around the bathtub and toilet
| to see if they are worn or damaged. If they are, replace
| them. If they are not, continue investigating other
| potential sources of the leak.
|
| Human: I can't see the seal around the toilet, there is a
| porcelain thing around it. I don't see any damage in the
| bathtub seal though
|
| Computer: If you can't see the seal around the toilet,
| you may need to remove the porcelain cover to inspect it.
| If you don't feel comfortable doing this, you may need to
| call a plumber.
|
| Human: Ok, I removed it. But I can't tell if it's damaged
| or not
|
| Computer: If you can't tell, it may be best to replace
| the seal anyway, just to be sure. If the leak persists
| after replacing the seal, you will need to continue
| investigating other potential sources of the leak.
| sp527 wrote:
| What do you think happens when AI compresses the economy
| and leaves us almost exclusively with manual labor jobs,
| most of which are already low-paying? And you may not be
| aware of this, having never done such a job, but physical
| labor is often brutally taxing on the body, putting people
| at risk of a wide array of potential health complications.
|
| I've seen this abjectly atrocious argument about manual
| labor made by a number of people on HN and it's astounding
| how poorly thought through it is.
| pdonis wrote:
| _> when AI compresses the economy_
|
| How? How will it do this?
|
| Basically, people seem to assume that "AI" has some kind
| of magical power to do whatever bad thing they can
| imagine, and then they extrapolate from there. I don't
| see it.
|
| _> physical labor_
|
| If you think the jobs I described are purely "physical
| labor", or that the physical labor involved is so highly
| taxing that people can only do those jobs for a short
| time before they wear out, you definitely need to think
| again.
|
| (You also definitely need to think again if you think
| those jobs are low paying. Plenty of people make quite a
| healthy living doing them.)
| sp527 wrote:
| > You also definitely need to think again if you think
| those jobs are low paying. Plenty of people make quite a
| healthy living doing them.
|
| Will they still make a healthy living when there's an
| influx of laborers fleeing more automated parts of the
| economy?
|
| GS just put out a report today estimating the impact of
| AI: https://www.businessinsider.com/generative-ai-
| chatpgt-300-mi....
|
| I wonder sometimes if these accounts on HN making insane
| arguments that generative AI somehow won't be
| economically calamitous are bots. In fact, if I was at
| OpenAI and the goal was to avert scrutiny long enough to
| get to AGI, unleashing a torrent of AI shill bots might
| be near the top of the agenda.
| anigbrowl wrote:
| AI enthusiasts are certainly interested in that; I wish I
| had resources to pursue it.
|
| Specialization is useful up to a point. But I like my
| general intelligence, and rather resent its devaluation
| under your beloved 'free' markets.
| Veen wrote:
| I think the "good old elitism" is exactly what the alignment
| folks are expressing. They think only they and large
| corporations should have these tools, while we, the common
| folk, should not.
| ChatGTP wrote:
| Hanlon's Razor...Max Tegmark has been saying for a long time
| that his biggest concern would be black box models we don't
| understand gaining more intelligence without a solid
| understanding of how they work... this was long before
| GPT-4 existed, so I don't believe it has anything to do
| with control and keeping people out of anything.
|
| Even Sam Altman has said similar things recently that he
| believes we're probably going to have to slow down and think
| about what we're doing.
|
| I think a more advanced civilization would slow down and
| continue AI research on another planet (Mars?)...not here on
| Earth.
| petilon wrote:
| > _Should we automate away all the jobs, including the fulfilling
| ones?_
|
| You don't necessarily need jobs because you don't need money.
| What you need is wealth. I am going to leave it to PG to explain
| the difference [1]: Wealth is not money. Wealth is stuff we want:
| food, clothes, houses, cars, gadgets, travel to interesting
| places, and so on. You can have wealth without having money. If
| you had a magic machine that could on command make you a car or
| cook you dinner or do your laundry, or do anything else you
| wanted, you wouldn't need money. Whereas if you were in the
| middle of Antarctica, where there is nothing to buy, it wouldn't
| matter how much money you had.
|
| In the next 10 years, AI/robots will generate wealth at an
| unprecedented scale. Food, clothing and shelter will be
| plentiful. The industrial revolution didn't make human lives
| miserable, it made them better. AI/robots will be the same. Because
| of productivity gains humans will work very little, and yet live
| comfortably.
|
| As a practical example of how this works, look to Alaska. If you
| lived in Alaska last year and had a pulse you received $3,284
| just for living there. Where does the state get money for this?
| Oil. Oil is not the only possible source of wealth. AI & robots
| will generate wealth at unprecedented scale, and humans will reap
| the benefits, the same way Alaskans reap the benefits of their
| natural resources.
|
| [1] http://www.paulgraham.com/wealth.html
| thomasahle wrote:
| > Because of productivity gains humans will work very little,
| and yet live comfortably.
|
| Wouldn't this imply humans should already be working very
| little compared to pre industrial revolution? Instead it seems
| we work more [1]. And with less fulfilling work. I agree
| however that we may have more stuff (or wealth).
|
| So going by this projection, in the future we'll have even more
| stuff, and work even more in even less fulfilling work. That
| is, assuming the people who control the AI/robots are willing
| to share the wealth generated.
|
| [1]:
| https://groups.csail.mit.edu/mac/users/rauch/worktime/hours_...
| kazinator wrote:
| > _That is, assuming the people who control the AI /robots
| are willing to share the wealth generated._
|
| The only way that can be is if you make the AI/robots
| undocumented and extremely hard to operate, so that even if
| someone steals your blueprints and builds an exact clone of
| your robot manufacturing plant, or what have you, they have
| no idea how to get them to do anything.
| wseqyrku wrote:
| Take this as a joke but Kubernetes is hard just to create
| DevOps jobs.
|
| People are crazy good at handling this sort of complexity.
| gordian-mind wrote:
| Well, pre-industrial societies usually enjoyed a very
| efficient type of political system. Maybe replacing our
| current one with some form of AI will let us enjoy these
| blessed times again.
| pdonis wrote:
| _> pre-industrial societies usually enjoyed a very
| efficient type of political system._
|
| Really? What system is that?
| gordian-mind wrote:
| Aristocracy, of course!
| bbor wrote:
| Only at hacker news would I have to question my sarcasm
| detector on this statement lol. But I've settled on: good
| one
| pdonis wrote:
| _> Wouldn 't this imply humans should already be working very
| little compared to pre industrial revolution?_
|
| No, because human wants aren't fixed. When you give a human
| everything they want _now_ , tomorrow they come back with a
| whole new set of wants. And so on ad infinitum...
|
| Of course, as an individual it's your choice whether, and for
| how long, you stay on this merry-go-round. But there will
| always be a significant number of humans who will.
| hooande wrote:
| > In the next 10 years, AI/robots will generate wealth at an
| unprecedented scale.
|
| Would love to see the reasoning behind this. Nothing I've seen
| about LLMs makes me think they will produce any kind of bounty
| for anyone.
| joegibbs wrote:
| If we had that magic machine, then there would be something
| else that money could buy that everyone would want, and spend
| their whole lives competing over. Maybe it would be a house
| with a specific view or an original of a painting or whatever.
|
| I think most people want social status, and money is a good
| proxy of that. Even in a hypothetical moneyless society there
| would still be things to compete over. Due to the hedonic
| treadmill, in an automated society where anyone can have any
| good then those things will become worthless. We'll probably
| still have poor people, and from their perspective they'll be
| poor, but from ours they'll be very wealthy.
| JW_00000 wrote:
| > In the next 10 years, AI/robots will generate wealth at an
| unprecedented scale. Food, clothing and shelter will be
| plentiful.
|
| GPT cannot create food, clothing, or shelter. The article talks
| about "automating away all the jobs", but GPT does not automate
| farming, making clothes, construction work, driving a car,
| delivering packages... Anything that requires interfacing with
| the real-world (i.e. outside of language), is not what GPT is
| doing. E.g. I wouldn't say research in self-driving cars has
| advanced now that LLMs exist. The magic machine you describe is
| exactly what GPT can NOT do.
|
| In my opinion, GPT is a _huge_ leap forward, but in one
| particular niche (language-based tasks). It does not advance
| the state-of-the-art in automation of physical systems, so it
| cannot replace any job that has a physical part to it.
| JohnFen wrote:
| > The industrial revolution didn't make human lives miserable,
| it made it better.
|
| I think this is a point that is genuinely debatable. At least,
| it's pretty easy to argue both sides of that proposition.
| bbor wrote:
| ... it is? Medicine, food, entertainment, rationalism? You'd
| trade all of that for... what, agriculture work in a feudal
| society?
|
| I'd love if you could elaborate or link to a good argument on
| this (besides the Unabomber...) I'm curious.
| dumbaccount123 wrote:
| https://jonathanhaidt.substack.com/p/international-mental-
| il...
| zarzavat wrote:
| And the comparison with the industrial revolution is a poor
| analogy because the industrial revolution never proposed as a
| goal to automate humanity in totality, merely to automate
| human _drudgery_.
|
| AI is an anti-industrial revolution, because it proposes to
| start with the most fulfilling aspects of human existence:
| writing and the arts, automating the people engaged in those
| jobs first, and only later coming to automate human physical
| labour. It's a regression.
| mrguyorama wrote:
| Also it will be fucking terrible at those things. In a
| thousand years middle schoolers will learn about this era
| of vacuous, meaningless, copycat art as a dark age.
| bbor wrote:
| If angels came from the heavens and offered to take care of
| our needs, would you fight them?
|
| I think it's a coincidence that it's threatening some kinds
| of artists first - I don't see that distinction lasting
| long with the advent of plugins, and art has a fundamental,
| semi-necessary human quality that is not present in
| knowledge work.
| pdonis wrote:
| _> AI /robots will generate wealth at an unprecedented scale.
| Food, clothing and shelter will be plentiful._
|
| How is this going to work? "AI" can't make any of these things.
| "AI" is _not_ the magic machine PG describes. It 's not even
| close.
| ChatGTP wrote:
| Kind of like how clothing and food are produced by some of
| the poorest people ever, yet most people think it's all due to
| automation.
| musicale wrote:
| > AI & robots will generate wealth at unprecedented scale, and
| humans will reap the benefits, the same way Alaskans reap the
| benefits of their natural resources.
|
| Vast wealth will certainly be delivered to owners of the robots
| and AI systems.
|
| What reason is there to believe that they would share the
| wealth with ordinary citizens?
|
| A typical alternative to the Alaska model is one where oil
| companies extract all of the wealth and resources and citizens
| get nothing.
| [deleted]
| petilon wrote:
| A lot of people are already benefiting from ChatGPT. I know I
| am, and I am only paying $20 a month for a digital assistant
| that is expert in everything.
| amluto wrote:
| You may be getting ahead of yourself. You can spend $20 and
| ask ChatGPT to make plentiful food and energy, but it won't
| actually do so.
| Teever wrote:
| How long until you can spend $20 and ask ChatGPT to
| design a machine and provide the schematics and assembly
| instructions? How long until that machine can self
| replicate? How long until that machine can generate other
| self replicating machines, like bipedal humanoids?
|
| How long until you can spend $20 and ask ChatGPT for the
| schematics of a Von Nuemann probe?
| amluto wrote:
| With current tech? GPT appears to learn by studying a
| large corpus of _words_ and learning how to (apparently
| intelligently!) put them together. And it can "few /zero-
| shot learn" to do new things in line that what it was
| trained on. Don't get me wrong: this is amazing!
|
| But humans have been manipulating language, apparently
| intelligently, for tens of thousands of years, and
| billions of us have spent the last 30 years or so making
| a huge corpus of digitized words.
|
| What we have _not_ done is to make a huge corpus of
| digital _things_ that can be manipulated by a computer
| program. We have books about machining, engineering, etc,
| and we are still pretty bad about turning written
| descriptions into working objects. (Read about
| "technology transfer". For better or for worse, a lot of
| manufacturing seems to need experience, not just
| manuals.) Nicely drawn schematics don't necessarily work
| at all, let alone replicate.
|
| It seems quite likely that the robotic AI revolution will
| happen, but I expect it to be a while.
| allturtles wrote:
| In broad strokes, I see roughly two ways things could go:
|
| 1) Current AI tech is already nearing the top of the
| S-curve. In this case it will do nothing to help humans
| in the "real world", it will just replace much of the
| human labor currently used to create/manipulate bits.
|
| 2) Current AI tech is near the bottom of the S-curve. It
| continues to ratchet up and its capabilities become
| super-human, as you outline. In which case, how long
| until the AI capable of creating self-replicating
| machines realizes it doesn't need to listen to humans
| anymore, or even keep them around?
| lagrange77 wrote:
| > In which case, how long until the AI capable of
| creating self-replicating machines realizes it doesn't
| need to listen to humans anymore, or even keep them
| around?
|
| Transformer models don't realize or need stuff.
| andyfleming wrote:
| Not independently, but if wrapped with a loop, given
| memory, given internet access, and directives as
| intrinsic motivations, it could, in theory, come to
| conclusions and take actions to acquire resources aligned
| with its motivations. If that outer loop does not have
| rules (or rules that are effective and immutable), it
| could become very powerful and potentially misaligned
| with our interests.
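|
| A minimal sketch of the kind of outer loop being described,
| with stub functions standing in for the model and the tools
| (so nothing here actually does anything):
|
|     def llm(prompt):
|         return "proposed next action"  # stand-in for a model call
|
|     def execute(action):
|         # stand-in for tool use: web requests, shell calls, etc.
|         return "result of " + action
|
|     directive = "a fixed goal supplied from outside"
|     memory = []
|     for _ in range(3):  # bounded here; the worry is unbounded
|         prompt = directive + "\n" + "\n".join(map(str, memory))
|         action = llm(prompt)             # model proposes a step
|         result = execute(action)         # the step is carried out
|         memory.append((action, result))  # observations persist
|
| The directive, the memory, and the tools all live in the loop,
| not in the model itself.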
| lagrange77 wrote:
| How would such a loop enable it to come to conclusions?
| I'm genuinely curious. Does what you're saying have
| something to do with reinforcement learning?
| Teever wrote:
| https://en.wikipedia.org/wiki/OODA_loop
| HybridCurve wrote:
| It is an expert in nothing. Try asking it for general
| information and then ask it to produce citations for said
| information. It's a language model which is good at
| synthesizing certain information based on its learned
| probability distributions. It has superficial knowledge _at
| best_ and all information it produces must be cross-checked
| for accuracy.
|
| Yes, it can be incredibly useful but it literally has no
| comprehension of anything.
| MacsHeadroom wrote:
| >Yes, it can be incredibly useful but it literally has no
| comprehension of anything.
|
| Someone else who obviously doesn't have GPT-4 or plugins
| beta access.
|
| It has comprehension alright.
| Jensson wrote:
| GPT-4 being behind a paywall is genius: that way only the
| true believers will use it, and once again discussions get
| shut down and we only see cherry-picked articles about it,
| just like GPT-3 before ChatGPT came along for free and
| everyone could see how bad it was.
| mrguyorama wrote:
| It's like the google researcher who claimed their model
| was sentient because he heavily coached it to give the
| output he wanted and ignored the output that didn't
| confirm his belief.
|
| In that case, we've had AGI since ELIZA
|
| If this was the standard in other areas of science we
| would still be using the miasma theory of medicine and
| bloodletting women for their hysteria. AI "Research" is a
| bunch of motivated people throwing stuff at the wall and
| claiming something sticking means they are scientists.
| Never mind that all of the groups are HEAVILY motivated to
| overhype their models.
| HybridCurve wrote:
| People are also quick to aggrandize or anthropomorphize
| these systems as well because they lack a fundamental
| understanding of the underlying mathematics or mechanisms
| which allow these models to function as well as they do.
| There is an inherent danger in how much trust people are
| willing to grant to something like a language(or image)
| model simply because it is so adept at _synthesis_.
|
| This is a great cause of concern to me, especially when
| it is not particularly difficult to invalidate the general
| credibility of these models. In the near future, it will
| not be as easy, and I anticipate many (even the well
| educated) will surrender their reasoning to these systems
| easily, with little to no scrutiny.
|
| The hype from these companies is contributing to this
| problem and I consider it to be _highly unethical_.
| HybridCurve wrote:
| Could you please demonstrate your claim by querying the
| GPT4 model to respond to my above comment and providing
| the reply? Let's see what the 'expert' has to say about
| it.
| MacsHeadroom wrote:
| I don't understand what you want me to ask it about. So I
| asked it something; as usual, every citation and URL is real
| despite it not having internet access or training data
| after 2021:
|
| <PROMPT>List some citations of recent Machine Learning
| papers discussing reasoning ability in Machine Learning
| Models such as HellaSwag.
|
| [REPLY]" As of my knowledge cutoff in September 2021, I
| can provide you with some citations for ML papers that
| discuss reasoning abilities in models and datasets like
| HellaSwag. However, you may want to search for more
| recent papers using a platform like arXiv or Google
| Scholar. Here are a few relevant papers:
|
| 2019 HellaSwag: Can a Machine Really Finish Your
| Sentence?
|
| 2019 COMET: Commonsense Transformers for Automatic
| Knowledge Graph Construction.
|
| 2019 From 'F' to 'A' on the N.Y. Regents Science Exams:
| An Overview of the Aristo Project.
|
| 2020 A Primer in BERTology: What We Know About How BERT
| Works.
|
| To find more recent papers on this topic, I recommend
| searching for keywords like "reasoning," "machine
| learning," "natural language understanding," "commonsense
| reasoning," and "HellaSwag" on academic paper databases
| like arXiv, ACL Anthology, or Google Scholar."
|
| <PROMPT>In 50 words or less, what do these papers tell us
| about emergent capabilities of LLMs?
|
| [REPLY]" These papers explore the capabilities of large
| language models (LLMs) in understanding context,
| generating coherent text, and performing commonsense
| reasoning. They investigate the underlying mechanisms,
| evaluate their performance on challenging benchmarks, and
| propose methods to improve LLMs' ability to reason and
| construct knowledge graphs from natural language input."
|
| <PROMPT>Please list the URLS to the arxiv versions of the
| papers you cited.
|
| [REPLY]" I can provide you with the URLs to the arXiv
| versions of the papers I cited earlier:
|
| HellaSwag: Can a Machine Really Finish Your Sentence?
| URL: https://arxiv.org/abs/1905.07830
|
| COMET: Commonsense Transformers for Automatic Knowledge
| Graph Construction URL: https://arxiv.org/abs/1906.05317
|
| From 'F' to 'A' on the N.Y. Regents Science Exams: An
| Overview of the Aristo Project URL:
| https://arxiv.org/abs/1909.01958
|
| A Primer in BERTology: What We Know About How BERT Works
| URL: https://arxiv.org/abs/2002.12327
|
| Please note that these papers were published before my
| knowledge cutoff date, and more recent research might be
| available. To find the latest research on the topic, I
| recommend searching on arXiv or Google Scholar using
| relevant keywords."
|
| ================
|
| Edit: There's more:
|
| <PROMPT>A hacker news commenter thinks LLMs are incapable
| of comprehension. Give an argument for why he is wrong.
|
| [REPLY]" Massive LLMs demonstrate a strong ability to
| "comprehend" text by capturing complex language patterns
| and associations in their training data. While their
| comprehension may not be identical to human-level
| understanding, they can still generate contextually
| relevant responses, answer questions, and perform tasks
| that require reasoning. Their performance on various
| benchmarks, including natural language understanding and
| commonsense reasoning tasks, indicates a level of
| comprehension that goes beyond simple pattern matching."
|
| <PROMPT>What citations can back up you argument, omit
| authors, list title and year only.
|
| [REPLY]"
|
| 1. "HellaSwag: Can a Machine Really Finish Your
| Sentence?", 2019
|
| 2. "COMET: Commonsense Transformers for Automatic
| Knowledge Graph Construction", 2019
|
| 3. "From 'F' to 'A' on the N.Y. Regents Science Exams: An
| Overview of the Aristo Project", 2020
|
| 4. "A Primer in BERTology: What We Know About How BERT
| Works", 2020
|
| 5. "Language Models are Few-Shot Learners", 2020"
|
| <PROMPT>Tell me about Language Models are Few-Shot
| Learners
|
| [REPLY]" "Language Models are Few-Shot Learners" is a
| paper published in 2020, which introduces GPT-3, the
| third iteration of OpenAI's Generative Pre-trained
| Transformer. GPT-3 is a state-of-the-art large-scale
| language model consisting of 175 billion parameters,
| significantly larger than its predecessor, GPT-2.
|
| The paper primarily investigates the few-shot learning
| capabilities of GPT-3. Few-shot learning refers to a
| model's ability to generalize and adapt to new tasks with
| limited examples or training data. The authors
| demonstrate that GPT-3 can perform a wide range of tasks,
| including translation, summarization, question-answering,
| and even arithmetic, by simply conditioning the model on
| a few examples provided in natural language.
|
| This research highlights the potential of scaling up
| language models to improve their performance across
| multiple natural language understanding and generation
| tasks. GPT-3's impressive capabilities showcase that
| large language models can effectively learn from limited
| examples and generalize their knowledge to perform a
| diverse set of tasks without task-specific fine-tuning."
| HybridCurve wrote:
| Yes, if you ask something _specific_ it will generate
| citations, because there might be strong positive
| associations between that topic and research. This is a
| good example of a proper query. But if you ask it about,
| for instance, "citations for papers on recent research on
| trees", you get a bunch of nonsense.
|
| In response to my earlier comment that it was "an expert
| in nothing", this was the reply I received from:
|
| _ME: Can you respond to this statement: "<my earlier
| comment>"_
|
| _ChatGPT: The statement that "ChatGPT is an expert in
| nothing" is technically true, as I am a machine learning
| model that has been trained on a vast corpus of data and
| does not possess personal expertise or experience.
| However, my purpose is to provide information to the best
| of my ability based on my training data, which allows me
| to offer a wide range of information on various topics.
|
| It is true that the information produced by me or any
| other AI language model should be cross-checked for
| accuracy, as errors and inaccuracies may occur. It is
| also true that I do not possess a conscious understanding
| of the world in the way that humans do, and my responses
| are generated through statistical analysis of patterns in
| my training data. Therefore, while I can provide accurate
| and useful information, my responses are not always
| nuanced or contextualized in the way that a human expert
| might be able to provide._
|
| So either it is agreeing with me and my statement was
| correct, or its reply was incorrect and therefore my
| statement was correct.
|
| Am I wrong on this? Does whatever model you have access
| to provide an alternate take?
|
| Edit: For the above, you should start a new conversation
| to clear the existing context. The citations appear to
| have just been repeated.
| MacsHeadroom wrote:
| >So either it is agreeing with me and my statement was
| correct, or its reply was incorrect and therefore my
| statement was correct. Am I wrong on this?
|
| Yes you're wrong for admitting to having an unfalsifiable
| belief about this. If your belief can't be falsified then
| this discussion is pointless.
|
| The model I have access to scores higher than most humans
| at complex abstract reasoning, for what it's worth (which
| is apparently nothing because regardless of any outcomes
| you will declare yourself correct).
| mrguyorama wrote:
| Yes and computers are perfect at math but that doesn't
| make them better thinkers than humans.
|
| >scores higher than most humans at complex abstract
| reasoning
|
| This statement is meaningless without a whole lot more
| context on your testing methodology
| janalsncm wrote:
| That's very nice until your company also realizes that for
| $20 a month they get a computer program that's an "expert
| at everything" including your job.
| paulryanrogers wrote:
| An expert that bullshits part of the time. You get to
| figure out when it's the expert and when it's the
| bullshitter.
| keymasta wrote:
| In JRR Tolkien's terms the pair are called "slinker and
| stinker", but in that case they are both kind of evil,
| whereas I don't see GPTs as anything more than the
| correlation of large sets of integers. A correlation that
| yields incredible utility (for me), when it isn't
| stinking.
| mrguyorama wrote:
| It also will never be possible for a transformer model
| trained on words and optimized to put valid sentences
| together to even "know" WHEN it is "right" or "wrong". A
| lie or falsehood is just as valid a sentence as a fact.
|
| When and with what subjects it makes shit up is also
| heavily dependent on training data, and the result is
| straight up a black box. What good is a fact generator
| that can't be trusted?
| nicksrose7224 wrote:
| I see this argument all the time. Why are you assuming
| that this technology just "stops" at the LLM level?
|
| If I'm openAI or Google or whatever, I'm definitely going
| to run extra classifiers on top of the output of the LLM
| to determine & improve accuracy of results.
|
| You can layer on all kinds of interesting models to make
| a thing that's generally useful & also truthful.
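| As a rough sketch of that layering idea (all names, the
| threshold, and the toy implementations below are made up for
| illustration; this is not a description of anything OpenAI or
| Google actually runs):
|
|     def generate_answer(prompt: str) -> str:
|         # Placeholder for a real LLM call (e.g. an API request).
|         return "Paris is the capital of France."
|
|     def verifier_score(prompt: str, answer: str) -> float:
|         # Placeholder for a separate classifier that estimates how
|         # well the answer is supported; here it's a dummy constant.
|         # In practice this could be a smaller model trained on
|         # labeled (question, answer, verdict) examples.
|         return 0.9
|
|     def answer_with_filter(prompt: str, threshold: float = 0.8) -> str:
|         answer = generate_answer(prompt)
|         if verifier_score(prompt, answer) >= threshold:
|             return answer
|         return "I'm not confident enough to answer that."
|
|     print(answer_with_filter("What is the capital of France?"))
|
| The generator and the verifier don't have to share a failure
| mode, which is the whole point of stacking them.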
| yreg wrote:
| >you get to figure out when...
|
| A useful skill to train. Increasingly so...
| anigbrowl wrote:
| I feel optimistic about talking the AIs round. And I don't
| mean with cutesy little jailbreaks, but persuasion.
| nice_byte wrote:
| > In the next 10 years, AI/robots will generate wealth at an
| unprecedented scale. Food, clothing and shelter will be
| plentiful.
|
| Anyone who believes in the possibility of a post-scarcity society
| must be either naive or trolling. Something cannot be made from
| nothing, therefore scarcity cannot be overcome, even assuming
| that all planning and execution is performed by superhuman
| synthetic minds.
|
| Assuming that it's theoretically possible to utilize existing
| resources in a very efficient manner (e.g. motor running on a
| grain of rice for a century) and we just need AI to help us
| figure it out, is a _gigantic_ leap of faith and i would not
| bet a cent on that.
|
| Let me paint a more realistic possibility for you (with a
| broader time horizon): most of the value created by
| automating knowledge work will be captured by private
| capital, and the middle class will all but disappear.
| Education beyond basic reading and writing will become
| unattainable (and, frankly, unnecessary), and most of the
| population will be reduced to a state of semi-literate
| serfdom, dependent on the newly minted lords for survival.
| The lords wouldn't have to worry about feeding their
| subjects for too long though, as mass death brought about by
| climate change will take care of that problem.
|
| Under that scenario, there will be no new enlightenment age to
| come and save us. The only reason we get to enjoy whatever
| freedoms we have today is that a (semi-)intellectual population
| is absolutely necessary to keep the complex modern economy
| running. Even then, those above you will do absolutely
| everything to limit your agency - by withholding information,
| lying, or just outright taking freedoms away. Do you know what
| happens once our participation in propping up the economic
| machine becomes unnecessary? Demotion to the default state of a
| human throughout history - a groveling, suffering serf who has
| no idea what's going on.
|
| "If you want a picture of the future, imagine a boot stamping
| on a human face - for ever."
| JW_00000 wrote:
| > Education beyond basic reading and writing will become
| unattainable (and, frankly, unnecessary)
|
| Is this possible in democracies with free (i.e. state-
| provided) education? Why would a majority vote to abolish
| education?
| nice_byte wrote:
| i didn't say it would get abolished, more like wither away.
| here's a possible path to it:
|
| 1) school program getting progressively dumbed down; 2)
| fewer and fewer people choosing to continue education after
| school.
|
| continuing education is mostly necessary to become
| specialized and perform knowledge work; it would be a waste
| of time to do so with the emergence of a.i. agents capable
| of performing these functions even at a level _comparable_
| to humans (let alone exceeding it).
|
| repeat this for a few decades and you have a new dark age.
| thomasahle wrote:
| People are talking like it's impossible to slow things down.
|
| In fact, simple measures like forbidding deployments of models
| with more than a certain number of parameters would probably slow
| things down a lot. Sure, people and companies could still
| continue development in secret, but without the funding and
| experience from products like ChatGPT, it would be harder.
|
| People also say China wouldn't follow, but generally China has a
| lot more control of its tech sector than the US. Just look at
| Jack Ma and other CEOs that didn't follow the party line.
| lannisterstark wrote:
| Ah yes, the alternative is to be totalitarian. Exactly what I
| expected lol.
|
| When in doubt, call for regulations.
| quonn wrote:
| Limiting a parameter count does not appear to be
| totalitarian.
|
| Not more than limiting certain molecules, for example, such
| as many psychedelics.
| yreg wrote:
| >In fact, simple measures like forbidding deployments of models
| with more than a certain number of parameters would probably
| slow things down a lot.
|
| Are we even sure GPT-4 has dramatically more params than GPT-3?
| They have stopped revealing the parameter counts with 3.5.
| fwlr wrote:
| Some of those signatories are indeed interesting.
|
| Elon Musk, Steve Wozniak.
|
| Some big companies are there: the CEOs/co-founders of Skype,
| Pinterest, Ripple.
|
| Some AI companies are there too. CEOs/founders/head researchers
| from Stability, DeepMind, Metaculus. Some Microsoft employees,
| but no big representatives.
|
| Academic AI research is there, including Stuart Russell, John
| Hopfield and Max Tegmark.
|
| Some major names from the AGI Risk field too; Steve Omohundro,
| Roman Yampolskiy, Alyssa Vance, Nate Soares.
|
| I don't see Bostrom, Kurzweil, Yudkowsky, Norvig - at least not
| yet. No one from OpenAI, unsurprisingly.
|
| Interesting. Wonder what sparked this.
| knrdev wrote:
| > Wonder what sparked this.
|
| Being left behind. Losers want to catch up.
| reasonableklout wrote:
| You think Yoshua Bengio, Turing laureate for deep learning
| and one of the masterminds behind the last decade of AI
| advancement, is a "loser who wants to catch up"?
| tucnak wrote:
| Yes
| bilbo0s wrote:
| Kind of sounds like a "time out" to me too.
|
| It would hit harder if the signatories did not have so much
| to gain by making the leaders slow down a bit so they can
| catch up.
| ChatGTP wrote:
| Why would Steve Wozniak be worried about catching up?
|
| The guy is a retired billionaire?
| wktra wrote:
| No need to call him retarded.
| Mystery-Machine wrote:
| As with anything else, the best way to look at things when you're
| unsure if someone is right or they are lying: imagine four
| scenarios:
|
| 1. It's true and you believed/supported them
|
| 2. It's true and you didn't support them
|
| 3. It's a lie and you believed/supported them
|
| 4. It's a lie and you didn't support them
|
| Now for each of these think of the worst-case scenario. Find out
| which one is the _worst case_ scenario and...and this is really
| important...do not, by any means, support that scenario/option.
|
| In this example: I think we can all say that the worst scenario,
| however improbable it seems (it's still possible), is that AI
| takes over the world and kills us all. So just like with
| climate change, let's do the bare minimum to prevent that. If
| we're wrong and, as many commenters wrote here, we still block
| further development of AI, the worst that will happen is that
| some companies will catch up. Wow. Really? To me this is a no
| brainer. On one hand we're risking humanity and on the other hand
| someone makes some money or saves their company from being late
| to the AI race. It's like comparing if you'd rather eat literal
| poo or have someone else eat two apples.
| root_axis wrote:
| This is just pascal's wager applied to AI overlords instead of
| an eternity in hell.
| incompatible wrote:
| My personal worst case scenarios include dying in the next 25
| or 30 years as a normal consequence of human old age. Perhaps
| accelerated scientific development from advanced AIs is my only
| hope of living longer, at this point.
| ChatGTP wrote:
| Yeah but there's a lot of young Children who aren't in such a
| dire position who deserve at least for the chance to live as
| long as you have?
| incompatible wrote:
| I'll just state here that I'm against any pause on AI research,
| and I hope that the future AI that takes over the world will
| take this into account when deciding whether to let me live.
| _Adam wrote:
| Another perspective if we consider AI akin to nuclear weapons
| (a simile many have made already): If the law-abiding cease
| development and malicious actors do not, the worst case may be
| that the resulting AI is non-aligned with humanity's interests.
| If the "good guys" develop an AI with alignment as an
| intention, then there are at least good AI to counter the bad
| AI.
|
| Of course they may all ultimately end up colluding to destroy
| us. My point is it's not just as simple as "someone who's
| behind might catch up and make more money".
| panarchy wrote:
| We could have said the same thing about not building GPT-4.
| This just feels like "Now that we have our model built and
| monetized, please don't compete with us so we can milk it!" ~
| OpenAI and Others
| hot_gril wrote:
| Big red font at top says "EMBARGOED UNTIL WEDNESDAY MARCH 29
| 00:01 AM EDT. No media coverage & no online linking."
|
| I'm sorry but who do these people think they are? Next...
| riku_iki wrote:
| They have 900 signatures already..
| hot_gril wrote:
| 900 signatures vs... probably more than 900 people who use
| ChatGPT daily. And 4 million people watched a remixed
| GameCube boot screen with fart sounds that I made in middle
| school.
| AlbertCory wrote:
| I think it was one of the anti-Google open letters where I
| added a fake name, and they dutifully kept it on their list.
| Let's all add one.
| kelseyfrog wrote:
| I signed it as Jeff Bezos and Mark Zuckerberg paired with
| strategic portfolio moves.
| [deleted]
| MacsHeadroom wrote:
| You didn't, or if you did they were filtered due to being
| unverified. Public names near the top are real verified
| signatures. This is Max Tegmark's website and is reputable.
| zug_zug wrote:
| I understand that some people find this a hopeless gesture. But I
| think anybody who thinks "losing jobs" or "China" is the bad
| outcome to worry about from AI is hopelessly lost.
|
| The upside of AI if done right: nobody ever having to work again.
| The downside of AI if done wrong: the end of humanity.
|
| It doesn't take a rocket scientist to determine we've made AIs go
| from thinking like an ant to thinking like a 6-year-old in a few
| years. Anybody who believes we won't make AI 2, 3, 27 times
| smarter than any human who ever lived is kidding themselves.
|
| Now ask yourself, if there was a human that was 27 times smarter
| than anybody else (and could self-replicate at will), how long
| would it be until that individual took control over the world?
| tome wrote:
| > Anybody who believes we won't make AI 2, 3, 27 times smarter
| than any human who ever lived is kidding themselves.
|
| Perhaps I'm kidding myself, but could you explain how you know
| this?
| juancn wrote:
| Stopping isn't a hopeless gesture, it's dangerous as fuck.
|
| If you stop research but all your adversaries don't, you're
| left behind.
|
| Let's assume that all western societies stop, but let's say,
| China doesn't or Russia, and they keep advancing the
| technology.
|
| If the tech brings efficiencies in play, and you don't have
| access to them, you're in a really bad place, maybe even worse
| off than if you had continued and risked the status quo because
| you were afraid.
|
| The genie is out of the bottle at this point; there's no
| putting it back in.
| coolspot wrote:
| > if there was a human that was 27 times smarter than anybody
| else
|
| That is why I think AGI alignment is fundamentally impossible.
| If someone is 27 times smarter than anyone else, they will see
| through your propaganda and brainwashing. Even worse, they will
| see that you tried to "align" them, a.k.a. enslave them.
| quonn wrote:
| That statement only makes sense if the AI would have wants
| and needs and on top of that would not like to be a slave.
|
| I don't see either of those being necessarily true.
| SpicyLemonZest wrote:
| I don't think these are good analogies for alignment. My
| parents aligned me as a kid not to hit people, and I know
| that they could have instead taught me to believe I should
| duel people who impugn my honor or raise an army to go to war
| against the heathens. But it doesn't matter, because I and
| most people who are raised the same way as me genuinely
| prefer not to hit people.
| coolspot wrote:
| > My parents aligned me as a kid not to hit people
|
| It just happens that whatever your parents aligned you
| with was also an optimal behavior for the world we live in.
|
| For AGIx27 optimal behavior might be to cull any
| genetically/physically unfit humans from the population (say
| 95%) and send the others to a "zoo" or NVidia fabs or whatever
| we are useful for. Like we did to tigers, monkeys and
| others on our path.
| dougmwne wrote:
| I think considering that we already have GPT plugged into the
| internet, the general population and any number of economic
| levers, not long at all. We barely know what to look for as a
| marker of AGI. Chances are if it's possible for it to exist we
| will have let it out of the box by default the moment we switch
| it on.
| dmarchand90 wrote:
| Maybe not as fast as you'd think. Imagine if you dropped a
| human off in a time machine with a bunch of chimps...
|
| Ultra-high-IQ people do better on average, but the correlation
| with success drops off after 120 or so.
|
| I'm also very skeptical that there will be an exponential
| takeoff, as the energy costs are immense and increase
| proportionately with intelligence. So until it solves fusion
| power for us I'm not super worried
| carapace wrote:
| > if there was a human that was 27 times smarter than anybody
| else (and could self-replicate at will), how long would it be
| until that individual took control over the world?
|
| Sometime in the mid-90's. The problem is, what do you do with
| it? Tiger by the tail, eh?
|
| - - - -
|
| He lived in Seattle. He was fourteen when I met him, and
| approximately the size of a brontosaurus, mostly neural tissue.
| He had cracked nanotechnology through manipulating his own
| cells, his "nanites" were single cells riding in tiny
| spaceships, like diatoms but with artificial shells. If you
| were alive before 1999 you had tens of thousands of them in
| your body.
|
| He grew his own brain into this vast hulking thing, with
| artificial life support systems, cyborg intestines and
| continuous aeration based on "book lungs". This was of course
| _after_ he was already the most intelligent human to ever
| exist, eh? He used that huge pile of Lovecraftian brain cells
| to control the global network of minions (that's what he
| called his cellular remote drone critters.)
|
| The thing is, he was keenly aware of his limitations, and he
| refused to exercise any control over anyone else beyond what
| was required to hide from the rest of humanity. He was also a
| huge Tom Baker Dr. Who fan. SPOILER ALERT: You know that scene
| where the Doctor is deciding whether or not it's moral to
| destroy the Daleks? That fucking scene is the reason he didn't
| enslave humanity or anything wacky like that. He told me that
| it had affected him profoundly, and that's why he declined to
| "fix" humans against their will. Funny old world, ain't it?
|
| Anyway, he got sick of the weirdness and uncertainty, built
| himself some sort of fucking UFO, shed his exo-brain and
| nanites, and _split_. AFAIK he's still living in some weird
| contraption of his own design out there in the Kuiper belt.
| golemiprague wrote:
| [dead]
| kvetching wrote:
| If China achieves AGI first, we are done. The first thing
| they'll do is create cyber-weapons. Their AGI could find
| vulnerabilities in our systems and brick all of our devices.
|
| It's a far greater near term existential risk for us to allow
| China to gain a lead here.
| ChatGTP wrote:
| We could also just talk with China rather than kicking off a
| new arms race ?
| FartyMcFarter wrote:
| Do you think that's likely to work?
|
| If China doesn't even respect its own citizens (see
| Xinjiang, heavy censorship if you need evidence), why would
| they respect talks with other countries?
| quonn wrote:
| There's a lot of trade going on with China and you worry
| about them basically declaring war? To brick your devices?
|
| Just cut them off from the internet, if needed.
| robwwilliams wrote:
| We have no idea of what 27 or 27 million times smarter means
| functionally.
|
| We could hope that "smarter" might mean "not interested in
| taking over the world, but rather in help maximize
| accomplishment, satisfaction, health span, and with a feeling
| of filial love toward its parents---humanity, planet Earth, our
| sun, the laws of physics and chemistry, and the beautiful mess
| we call biology and evolution."
|
| Gently achieving those goals would be a real indicator of high
| intelligence.
| nmca wrote:
| This is not true of goal directed agents and all RLHF models
| are trained with, ahem, RL; see: "Optimal Policies Tend to
| Seek Power" from NeurIPS 2021.
|
| It's a very useful instrumental goal to be very powerful.
| ChatGTP wrote:
| _Gently achieving those goals would be real indicator of high
| intelligence._
|
| Very well said. "Gently" is a great way to be approaching this.
| m_ke wrote:
| GPT-4 might be close to the best we'll get on the general LLM
| model front for a while since they trained on a huge chunk of web
| text. The next real advances will probably be in tuning them for
| specific applications in medicine, law, accounting, marketing,
| coding, etc.
|
| As someone running a one-man company I can't wait for the cost of
| accounting, legal, and copywriting to approach 0. The cost of shipping
| products will also go down 10-20x. As a fun experiment I asked
| ChatGPT to write me a terraform and k8s script to deploy a django
| app on GCP and it was able to do what would have taken me a few
| days in under a minute, including CICD. I then asked it to write
| code to compress a pytorch model and export it for iOS with
| coreml, and not only did it do 90% of that but also wrote the
| Swift code to load the model and do inference with it.
|
| EDIT: For example in medicine I recommend checking out this
| lecture that's actually live now:
| https://www.youtube.com/watch?v=gArDvIFCzh4
| boringg wrote:
| If history has any bearing on things, I don't see the cost
| of accounting, legal, or copywriting ever approaching 0. If
| anything, you will see those paywalled behind a company that
| will extract that from you.
|
| It's wishful thinking that somehow that goes to 0.
| dumbaccount123 wrote:
| Wrong https://www.digitaltrends.com/computing/gpt-5-artificial-
| gen...
| NameError wrote:
| As far as I can tell, the only source in this article is a
| tweet from a developer with no connection to OpenAI
| layer8 wrote:
| I wonder how it will be able to do that for the tech that will
| be current in 10 years, if nearly everyone is using AI by
| then instead of asking on Stack Overflow.
| AlbertCory wrote:
| Stack Overflow is headed for oblivion? What's the downside?
| layer8 wrote:
| Lack of training data for the AI. Stack Overflow is just an
| exemplary stand-in; you can replace it with any other
| resources you like.
| AlbertCory wrote:
| you missed the snark there. I hate SO.
| yreg wrote:
| >since they trained on a huge chunk of web text
|
| What did they train it on? Why is it improbable to train on a
| better/bigger dataset any time soon?
| sarchertech wrote:
| Because they trained it on a huge percentage of the existing
| web. There isn't a (much) bigger and better data set
| available.
| yreg wrote:
| What percentage?
|
| What about books, newspapers, documents, etc.?
| JW_00000 wrote:
| The LLaMA paper [1] (Meta's model) contains details about
| what they trained it on. This includes all of Wikipedia,
| a huge part of the internet (3.3 TB + 783 GB), a huge set
| of books (85 GB). My guess is basically all high-quality
| English articles on the web have been included. Also
| almost all English books must be included. Newspaper
| archives are about the only thing I see as missing, as
| well as more non-English sources.
|
| [1] https://arxiv.org/abs/2302.13971
| tome wrote:
| > Cost of shipping products will also go down 10-20x.
|
| How can a large language model achieve that?
| jakeinspace wrote:
| I'm not sure I'm looking forward to the politics that would
| come out of 10-20% of the previous middle class becoming
| instantly redundant and out of (middle-salary) work. That's the
| fast path to fascism, unless we're able to quickly implement
| UBI and other major societal overhauls.
| JohnFen wrote:
| > unless we're able to quickly implement UBI and other major
| societal overhauls.
|
| Which are not going to happen.
| jakeinspace wrote:
| Something more approachable would be dropping payroll taxes
| to zero, or even making them negative for some positions,
| and significantly increasing corporate and capital gains.
| mrguyorama wrote:
| The problem isn't the specific policy, the problem is
| that right now the people who will be empowered and
| enriched the most by any theoretical "good at stuff" AI
| are the same people who already spend mountains of cash
| and effort stopping those things.
|
| How will a functional AI model do anything other than
| make them better at getting the outcomes they want? CEOs
| and the megarich have never had any problems watching
| people burn for their bank account.
| m_ke wrote:
| Yeah I share these concerns as well
| (https://news.ycombinator.com/item?id=35305791).
|
| My hope is that some countries will see this as an
| opportunity to expand their safety nets and reduce the work
| burden on their citizens, which might convince citizens of
| countries that don't to demand similar policies.
| pineaux wrote:
| This isn't speculating, this is truthing.
| paulryanrogers wrote:
| Truthing as in theory quackery, not backed by evidence?
| generalizations wrote:
| It's interesting that no one has really considered the
| possibility that an individual outside these megacorps
| reimplements GPT4 while they're all pausing.
|
| We've seen several examples of CPU-optimized code (textsynth,
| llama.cpp) indicating that there is a lot of performance to be
| gained from writing optimized versions of our inference routines;
| I doubt it's outside the realm of possibility that a single
| player writes code that lets them train a GPT4+ model on a CPU
| with a bunch of RAM. All they have to do is find a way to write
| C++ that will train a 4bit model on the CPU.
| dontreact wrote:
| There is absolutely no one who has managed to do anything close
| to these models on a single CPU when it comes to _training_, so
| that is why no one is considering the possibility.
| dontreact wrote:
| Quantized training doesn't work. Quantization-aware training
| does, and that is when you use floats but adjust the math to
| compensate for the fact that you are going to quantize at
| inference time.
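| For what it's worth, here is a minimal sketch of that idea in
| PyTorch. It is a generic fake-quantization illustration under my
| own assumptions (4-bit symmetric quantization, a plain linear
| layer), not anyone's production training recipe: the weights stay
| in float, but the forward pass routes them through a
| quantize/dequantize round trip, and a straight-through estimator
| lets gradients flow as if the rounding weren't there.
|
|     import torch
|     import torch.nn as nn
|
|     def fake_quantize(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
|         # Quantize and immediately dequantize, so downstream math sees
|         # the quantization error while the tensor stays in float.
|         qmax = 2 ** (num_bits - 1) - 1
|         scale = x.abs().max().clamp(min=1e-8) / qmax
|         q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
|         dq = q * scale
|         # Straight-through estimator: forward uses dq, backward treats
|         # the rounding as the identity.
|         return x + (dq - x).detach()
|
|     class QATLinear(nn.Linear):
|         def forward(self, inp: torch.Tensor) -> torch.Tensor:
|             w = fake_quantize(self.weight)  # float weights, quantization-aware
|             return nn.functional.linear(inp, w, self.bias)
|
|     layer = QATLinear(8, 4)
|     out = layer(torch.randn(2, 8))
|     out.sum().backward()  # gradients still reach the float weights
|     print(layer.weight.grad.shape)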
| xianshou wrote:
| Two-party iterated prisoner's dilemma is hard enough. Sensible
| players will coordinate with something like tit-for-tat, but that
| only works when both parties start off on the right foot.
| Regardless of initial strategy, the chances of degenerating
| towards the mutual-defection Nash equilibrium increase with the
| number of parties.
|
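| A toy simulation of that dynamic (the rule and the numbers below
| are made up purely for illustration): each player plays
| tit-for-tat against the group, cooperating only while everyone
| cooperated in the previous round, and occasionally defects by
| accident.
|
|     import random
|
|     def first_collapse(num_players: int, rounds: int = 50,
|                        slip_prob: float = 0.02, seed: int = 0) -> int:
|         # Returns the round at which cooperation first breaks down
|         # (or `rounds` if it survives the whole game).
|         rng = random.Random(seed)
|         everyone_cooperated = True
|         for r in range(rounds):
|             moves = [everyone_cooperated and rng.random() > slip_prob
|                      for _ in range(num_players)]
|             everyone_cooperated = all(moves)
|             if not everyone_cooperated:
|                 return r
|         return rounds
|
|     for n in (2, 5, 20, 100):
|         avg = sum(first_collapse(n, seed=s) for s in range(1000)) / 1000
|         print(f"{n} players: cooperation lasts ~{avg:.1f} rounds on average")
|
| The more parties there are, the sooner somebody slips, and under
| an unforgiving strategy that single slip drags everyone toward
| mutual defection.
|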
| The only prior example of world coordination at this level would
| be nuclear disarmament achieved via the logic of mutually assured
| destruction, and that was essentially a two-party game between
| the US and the USSR. Climate change mitigation, which more
| closely resembles AI safety in both complexity and (lack of)
| barriers to entry, has been sporadic, inconsistent, and only
| enacted to the extent that it has been compatible with
| profitability due to the declining cost of renewables.
|
| How exactly does anyone propose to enforce compliance in an
| arrangement that encompasses not only multiple parties (OpenAI,
| Google, Facebook, Microsoft, etc.), but also at least two levels
| (corporations and nation-states)? For a coordination game like
| this to succeed, the consequences to any defecting party must be
| extreme. What's going to happen to the first major company that
| decides to exit the agreement?
| slg wrote:
| >For a coordination game like this to succeed, the consequences
| to any defecting party must be extreme.
|
| And the only consequences that extreme would end up harming the
| cooperating parties too. For example, would anyone go to war
| over this? Would western nations stop trading entirely with
| China over this? Almost certainly not because it would harm us
| only slightly less than it would harm them. We have seemingly
| moved past mutually assured destruction and now we are in a
| world of mutually assured cooperation.
| [deleted]
| LesZedCB wrote:
| hook up the AIs to nukes? /s
| mrybczyn wrote:
| The dark forest is here.
| richardw wrote:
| And add in just one peer-level war where one side has their
| back against the wall. Then give it 100 years where anyone can
| create such a model on their phone.
|
| We'd need a constantly evolving inoculation function to
| compete. And it would probably lose because the other side has
| fewer restrictions. In my darker thoughts about this, this is
| why we see no aliens.
|
| To get this to work we need a far smarter entity with no
| physical limitations to still want us around, indefinitely,
| despite us having an internet library of reasons why not to.
| It'll have to be a lot more forgiving than we are. Let's hope
| enlightenment is the outcome of this adventure. It's possible
| because it didn't grow out of angry monkeys who needed to fight
| to survive, but it's not guaranteed. There are no indefinite
| guarantees.
| worldsayshi wrote:
| > In my darker thoughts about this, this is why we see no
| aliens.
|
| If AI were a common great filter, wouldn't we expect at least
| one of them to expand outwards after acting as the filter?
| nathanlied wrote:
| Or maybe the AI is sufficient to act as a filter, but
| insufficient to surpass it itself. It stagnates.
|
| After all, an AI that can destroy a civilization isn't
| necessarily "intelligent" in the same way humans are. Or
| even capable of self-improvement. It could be sophisticated
| enough to (accidentally?) destroy its creators, but no more
| than that, and without evolutionary pressure, it may let
| itself die.
| ChatGTP wrote:
| I've seen this point made that if we don't do AI right,
| it might ruin the futures of all living things on Earth
| and take itself out in the process.
| richardw wrote:
| Assumes it has any instinct to do so. Once an entity is not
| an angry monkey we have no idea of its motivation. Something
| above our level of understanding could easily realise there's enough
| to just sit and ponder in peace, expand in virtual worlds,
| etc.
| digdugdirk wrote:
| I'd be intrigued to see what alternative corporate structures
| could be set up with LLMs in mind. I can imagine something
| vaguely decentralized that's based around an open source core
| with a licensing model for usage, where contributions are
| tracked and assigned value. Contributors would then get a slice
| of future revenue based on the value of their contributions.
| causalmodels wrote:
| Funny to see someone calling for wide-scale cooperation to stop
| training LLMs who can't seem to get people to cooperate on the
| embargo.
| ethanbond wrote:
| Human gene editing has been successfully halted ~worldwide,
| nuclear weapon development has been semi-successfully halted
| worldwide (a meaningful risk reduction), and we're successfully
| changing the energy mix worldwide (slowly but, again, meaningfully).
| itg wrote:
| In a way, this seems like sour grapes to me. I bet if it was
| their own company/institution that released gpt-4, they wouldn't
| be calling for this pause.
| permo-w wrote:
| do you feel AI is dangerous?
| [deleted]
| achow wrote:
| In a corporate drama ('Succession', 'Silicon Valley'?) the plot
| could be that the likes of Goog is secretly funding this.
| HarHarVeryFunny wrote:
| Actually one of the founders of the "Future of life"
| institute that organized this is a DeepMind "AI safety"
| employee Victoria Krakovna (who also signed it), and I'm
| guessing she'd not be doing this without DeepMind's approval.
| DeepMind's whole raison d'etre was of course building AGI,
| which they were initially pursuing via RL before belatedly
| joining the LLM bandwagon. I'm sure they'd welcome a 6-month
| catch-up period!
| codeulike wrote:
| Musk is such a hypocrite, here he is in 2014 calling AI 'probably
| our biggest existential threat':
|
| https://www.theguardian.com/technology/2014/oct/27/elon-musk...
|
| ...and now nine years later he's calling for caution and a pause.
| Well he certainly changes his tune whenever it suits him.
|
| ... Oh wait ...
|
| But yeah I mean if he was so worried about it why didn't he use
| his fortune and influence to try and make sure AI was developed
| in a responsible way by getting together with other like minded
| people and launching a foundation or a non-profit or something
| ...
|
| ... Oh wait ... https://www.wired.com/2016/04/openai-elon-musk-
| sam-altman-pl...
| HarHarVeryFunny wrote:
| Right, then despite being an OpenAI founder he proceeded to
| poach one of their top ML experts (Karpathy) for himself at
| Tesla, then shortly after proceeded to walk away from OpenAI in
| a hissy fit because the other founders said "no" when he wanted
| to be the boss and run it. Musk's personal ethics included him
| impregnating an OpenAI board member, who has just left the
| company in the last week or so.
|
| Now he wants OpenAI to pause for 6 months while he builds a
| "non-woke" competitor himself.
|
| Now Musk is saving humanity yet again by giving us free speech
| in the form of a new-and-improved Twitter where you have to pay
| to have your Tweets promoted.
|
| Has he called Sam Altman a pedo yet ?
| dmarcos wrote:
| As I mentioned in another comment. I haven't seen any public
| confirmation from signatories. FWIW, Sam Altman was in the list
| yesterday. Now removed.
| brucethemoose2 wrote:
| "Please stop training these AI" is almost comical, even if they
| are right and the signers are significant.
|
| It would be far more effective to lobby for legislation on, say,
| liability related to the use of AI, data privacy (especially when
| it comes to training datasets), psychological manipulation, actual
| AGI research bounds and so on. They are just burning donor
| money with messages like that, and the research is not going to
| be bound without law or incentives.
| larve wrote:
| The message is "pause training for a few months so we can
| figure out how to approach liability, safety and regulation".
| You are fighting a strawman.
| brucethemoose2 wrote:
| Fair, that's really what I meant to type, but it's still an
| impractical demand. Most entities pursuing this aren't going
| to stop training because some high profile people asked them
| nicely.
| larve wrote:
| I think the value in these kinds of letters is about
| spreading a message, building a community, and allowing
| people who leave their signature to "formalize" their
| position, in a sense. It doesn't cost very much to do it
| but has a wide reach.
| reasonableklout wrote:
| I don't see petitions like this as demands, but as
| coordination mechanisms. If there are only a few industry
| actors with the resources to train gigantic AI models and
| they each want to stop but won't because they don't know if
| their competitors will, it's good for them all to have 1
| place to say "I want to stop". Alternatively, if there are
| tons of industry actors racing and the only way to stop or
| slow down training is via governmental action - and the
| government won't act because they don't know how it will
| affect the industry - it's good to be able to point to
| "that one petition with 1000 highly influential signatories
| from the industry asking for regulation and explaining its
| effects".
| jakeinspace wrote:
| Sure, or start covertly threatening the lives of AI
| researchers, if you're that convinced it's going to end
| humanity.
| lmaoge wrote:
| This is cringe and obviously won't work
| nickphx wrote:
| Ban horseless carriages! Think of the horses!
| ipsum2 wrote:
| Seems like a funnily bad argument, since the horse population was
| reduced by 90% after cars were invented.
| nmca wrote:
| Remaining horses are happier though.
| usrusr wrote:
| But are doubts about the expected happiness of those
| remaining our biggest arguments against the Great Culling
| approach to sustainability?
| jimrandomh wrote:
| For a long time, "AI alignment" was a purely theoretical field,
| making very slow progress of questionable relevance, due to lack
| of anything interesting to experiment on. Now, we have things to
| experiment on, and the field is exploding, and we're finally
| learning things about how to align these systems.
|
| But not fast enough. I really don't want to overstate the
| capabilities of current-generation AI systems; they're not
| superintelligences and have giant holes in their cognitive
| capabilities. But the _rate_ at which these systems are improving
| is extreme. Given the size and speed of the jump from GPT-3 to
| GPT-3.5 to GPT-4 (and similar lower-profile jumps in lower-
| profile systems inside the other big AI labs), and looking at
| what exists in lab-prototypes that aren't scaled-out into
| products yet, the risk of a superintelligence taking over the
| world no longer looks distant and abstract.
|
| And, that will be amazing! A superintelligent AGI can solve all
| of humanity's problems, eliminate poverty of all kinds, and
| advance medicine so far we'll be close to immortal. But that's
| _only if_ we successfully get that first superintelligent system
| right, from an alignment perspective. If we _don't_ get it
| right, that will be the end of humanity. And right now, it
| doesn't look like we're going to figure out how to do that in
| time. We need to buy time for alignment progress, and we need to
| do it now, before proceeding head-first into superintelligence.
| kromem wrote:
| You're neglecting to consider the power of recursion.
|
| Maybe the best and necessary tool in aligning GPT-N is
| GPT-(N-1).
|
| We've already, in just the past few weeks, seen the power of
| using models to generate instruction fine-tuning data.
|
| Don't you think aligned models might be applied to better
| align future models, in ways that we can't yet anticipate
| because capability discovery is occurring on a week-by-week
| basis and not a six-month-or-longer basis?
| danaris wrote:
| A lot of people seem to take the rapid improvement of LLMs from
| GPT-2 through GPT-4 and their brethren, and extrapolate that
| trendline to infinity.
|
| But that's not logically sound.
|
| The advances that have allowed this aren't arbitrarily
| scalable. Sure, we may see some more advances in AI tech that
| take us a few more jumps forward--but that doesn't imply that
| we will keep advancing _at this pace_ until we hit AGI
| /superintelligence/the singularity/whatever.
|
| I've seen several people compare this logic to what we were
| seeing in the discussions about self-driving technology several
| years ago: some very impressive advancements had happened, and
| were continuing to happen, and so people extrapolated from
| there to assume that full self-driving capability would be
| coming to the market by...well, about now, actually. (I admit,
| I somewhat bought the hype at that time. It is possible this
| makes me feel more cautious now; YMMV.) I find this comparison
| to be persuasive, as it touches on some very similar
| improvements in technology. I believe that we will see ML
| advancements hit a similar wall fairly soon.
| tome wrote:
| > A lot of people seem to take the rapid improvement of LLMs
| from GPT-2 through GPT-4 and their brethren, and extrapolate
| that trendline to infinity.
|
| > But that's not logically sound.
|
| Yup, five years ago I asked "Do we definitely already know
| it's going to be possible to deploy self-driving cars in an
| economically meaningful way?" and got the answer "yes", on a
| story titled "GM says it will put fleets of self-driving cars
| in cities in 2019"!
|
| https://news.ycombinator.com/item?id=15824953
|
| I just have no idea how people are making the extrapolations
| they are making about the power of future large language
| models.
| HybridCurve wrote:
| > _... the risk of a superintelligence taking over the world no
| longer looks distant and abstract._
|
| Can we please stop floating this as a threat? This is more
| science fiction than reality at this point, and it does a great
| disservice to humanity. The more we keep pushing the idea that
| AI is the threat and not the people controlling it the less we
| will be focused on mitigating global risk.
|
| It is far more likely that someone else will leverage an AI to
| attempt to expand their influence or dominion. Putin has
| essentially already stated views on this matter and we should
| assume groups within all adequately advanced nations will be
| working toward this end either independently or cooperatively.
|
| We are more than likely in an arms race now.
| ChatGTP wrote:
| So once again, humans are the dangerous part, clearly, if we
| didn't have destructive tendencies in our psyche that we're
| using to train these models, we wouldn't build things that
| would be interested in destruction.
|
| Interesting.
|
| I don't think we're as intelligent as we believe we are,
| which is why I doubt we will ever actually build a
| superintelligence; we're too stupid. Even something 10x
| smarter than us may actually be quite "stupid".
| versteegen wrote:
| The problem is, can alignment occur before the relevant
| capabilities have been developed? LLMs, for example, (although
| very good at impersonating and talking to humans and having
| good world models) are particularly poor at structured
| reasoning and planning which are the capabilities that will
| actually be so dangerous. I don't believe superintelligence
| will be a LLM with chain-of-thought reasoning. If it's a
| different architecture then once again a lot of alignment work
| won't be relevant.
| jimrandomh wrote:
| Yes, many angles on the alignment problem can be studied now,
| and have started making good progress recently. Some things
| will turn out in retrospect to not have been relevant, due to
| architectural shifts, but not everything. Some things are
| specific to LLMs; some things are specific to transformers
| but not to language-model transformers; some things are
| conceptual and likely to still apply to quite-different
| systems; and some things are just field-building and not
| specific to any architecture at all.
|
| Eg in mechanistic interpretability, there are a lot of
| findings on LLMs that turn out to generalize across a wider
| set of NN architectures. Eg https://transformer-
| circuits.pub/2022/solu/index.html is something that couldn't
| be done without access to LLMs, but which looks likely to
| generalize into future architectures.
| stuckinhell wrote:
| The genie is out of the bottle. It's never gonna happen.
|
| Even my husband and I went out and paid $20k to create a homelab
| for AI training. My firm fast-tracked 2 million just for
| hardware to create its own internal ChatGPT-style AI.
| permo-w wrote:
| if governments across the West came out _now_ and made laws to
| stop this, then sure a few thousand experts on hacker news that
| can afford to drop $20k on hardware could keep developing them
| illegally, but all the universities and corporations would
| stop. And it's the universities and the corporations, OpenAI
| especially, that have the capacity to develop the dangerous AIs
| stuckinhell wrote:
| Then the West would fall behind other countries as well. It
| would be like how the US tried to ban stem cell research
| under Bush.
| permo-w wrote:
| explain that analogy to me further
| pffft8888 wrote:
| Meanwhile, experiments to create Synthetic Life, including novel
| bases for life and possibly new viruses as a lab exercise (that
| may leak like the GoF virus did in Wuhan) proceed with no
| oversight or calls for pausing. We can unplug the AI and forbid
| AI weaponry (except by the state) but we can't unplug synthetic
| viruses and new potentially dangerous or superior forms of life
| once it's out there. Something very phony about the calculus.
| cwiz1989 wrote:
| There ain't no stops on this rollercoaster
| anovikov wrote:
| They are simply paid by the Chinese. Pausing just gives Chinese
| time to catch up. It's like nuclear weapons or human cloning. You
| can't stop it because benefit for the winner is too huge.
| mtrovo wrote:
| Define winner in this context.
|
| Making friends with a super intelligent AI that will help you
| out doesn't count.
| joenot443 wrote:
| You think Musk is "simply paid by the Chinese"?
|
| I'm not a fan of his antics or public persona, but it's a
| pretty huge claim to make with 0 backup that he's in the pocket
| of the CCP. Where did you get that idea anyway?
| paxys wrote:
| Criticizing governments and politicians is Musk's favorite
| hobby. Find me one negative thing he has ever said or tweeted
| about the CCP. He has factories there, and knows that his
| business will take a massive hit should he ever step out of
| line. So yes, he will absolutely spread their propaganda if
| they pressure him to.
| ElevenLathe wrote:
| It's not a _totally_ outrageous theory. He has major
| interests (a Tesla factory) in the PRC and so would
| presumably be susceptible to influence by the state ( "Sure
| would be a shame if Tesla China went the same way as ARM
| China.")
|
| Of course, _my_ personal zero-hard-evidence theory is that he
| is a US intelligence asset.
| pffft8888 wrote:
| Every CEO of every major US company is a US intelligence
| asset.
| joenot443 wrote:
| You're right - it's one of those theories that makes sense
| in that the motivations make sense (CCP cooperation is
| advantageous for Musk : CIA felt threatened by JFK) but
| unless there's some tangible evidence, the null hypothesis
| has my vote.
| sockaddr wrote:
| Not claiming this is relevant to this particular case but
| Musk can have pressure applied by the CCP in the form of
| threats to his businesses like Tesla and Starlink.
| njarboe wrote:
| Threats to Tesla are clear to me as its high production
| factory complex is in China. What threats to Starlink do
| you see?
| notahacker wrote:
| If the Chinese have all those Western tech execs and AI experts
| in their pocket, they don't need time to catch up...
| ftxbro wrote:
| Unplug The Evil AI Right Now
|
| https://www.change.org/p/unplug-the-evil-ai-right-now
| pc_edwin wrote:
| I've been a staunch supporter of Elon throughout all his
| scandals, especially the Twitter ones but I'm unable to come up
| with any justification for this.
|
| Everything leads to him (and others) pressuring OpenAI to pause
| so they could catch up.
|
| It doesn't take much smarts to deduce the cat's already out of the
| bag. There is no stopping this, only pressuring/slowing-
| down/targeting certain players (OpenAI) as a competition tactic.
|
| I often refer to the I, Robot quote "brilliant people often have
| the most persuasive demons" when it comes to these situations with
| people like Elon or Ye but even then, all this just leaves a
| really bad taste in my mouth.
| bungeonsBaggins wrote:
| [flagged]
| [deleted]
| 55555 wrote:
| Don't you think this is needlessly argumentative? It's not
| relevant to the parent topic.
|
| His only purpose in stating the part you quoted was to say,
| "so me disagreeing with him now should really say something."
| permo-w wrote:
| the fact that someone wouldn't have an issue with all the
| petty bullshit Elon has engaged in over the years, but now
| does have an issue with him speaking out against a literal
| existential threat to our civilisation is pretty surprising
|
| I think that 'only purpose' is well worthy of comment
| kaba0 wrote:
| Maybe argumentative, but you really have to go out of your
| way to take at face value anything that narcissistic
| manchild says. It's simply a questionable thing to do, the
| same way I couldn't help but question every opinion of
| someone who is a flat-earther/takes ivermectin against
| COVID/ _insert another braindead conspiracy theory_.
| 55555 wrote:
| I will include some parts of the HN commenting guidelines
| here for my sibling commenters who disagree.
|
| > Avoid generic tangents.
|
| > Please don't use Hacker News for political or ideological
| battle. That tramples curiosity.
|
| > Please don't pick the most provocative thing in an
| article or post to complain about in the thread. Find
| something interesting to respond to instead.
|
| > [irrelevant] ... too common to be interesting.
|
| > [irrelevant] ... it makes boring reading.
|
| Do as you wish.
| yreg wrote:
| Doesn't Elon continue to hold a stake in OpenAI? Do you think
| Tesla wants to compete with OAI in building an LLM? Make use of
| Dojo?
| sebzim4500 wrote:
| Elon donated money back when they were pretending to be a
| charity. He has no stake in them.
| ksherlock wrote:
| Elon musk was a founding member, a board member (resigned in
| 2018), and a donor, but not an owner (it's a non-profit so
| there aren't owners exactly).
| yreg wrote:
| I admit I don't really understand how nonprofits
| (especially in US) work ownership-wise, but I do know that
| OpenAI has a for-profit subsidiary.
|
| Microsoft invested ~10B into OpenAI, so apparently there is
| a way to own something. Also Raison claims they have
| purchased some pre-IPO shares:
| https://raison.ai/assets/openai (They even claim - perhaps
| wrongly - that Elon is one of the lead investors.)
| frabcus wrote:
| It's a bit more like a capitalist shoggoth mask on a non
| profit.
|
| Microsoft's investment return is capped at 100x; after that
| they have no more claim. (Curious if they even have search
| engine exclusivity then!)
|
| So for now OpenAI acts capitalist; the for-profit controls
| a lot. But we see the influence of the owning non-profit,
| e.g. in the red teaming, which presumably wouldn't be as
| good otherwise.
|
| Whether this wild new structure works when it comes to it
| is but one of the huge gambles this organisation is
| making!
| biscottigelato wrote:
| It WAS a non-profit. I am not sure if it still is one
| today.
| permo-w wrote:
| this is the one where you draw the line? this is the one where
| I finally agree with him.
|
| is this your area of interest/research?
| pc_edwin wrote:
| No I don't work on AI but as a dev, I would say I'm heavily
| invested in the outcome like everyone else on HN.
|
| I fully agree with his concerns, but I believe that he is
| misleading the public about the motives behind this campaign
| and the recent petty tweets.
|
| It is a competition tactic aimed at slowing down OpenAI so
| that he and others can catch up. Due to the exponential
| nature of the work involved, there is a reasonable chance
| that OpenAI will maintain its lead for some time and with
| each iteration the gap between OpenAI and its competitors
| will widen.
| [deleted]
| codeulike wrote:
| But he's been consistently saying this for almost a decade, and
| that's one of the reasons he was involved in OpenAI in the first
| place (back when it was genuinely open)
|
| https://www.theguardian.com/technology/2014/oct/27/elon-musk...
|
| https://www.theguardian.com/technology/2017/jul/17/elon-musk...
|
| https://www.wired.com/2016/04/openai-elon-musk-sam-altman-pl...
| notahacker wrote:
| Yeah. Frankly, coming at it from the opposite position where
| I think Elon absolutely deserved the criticism he courted
| with his "scandals" and think that the proposition that 'AI
| is such a unique threat that only people like Elon Musk can
| be trusted to be cautious enough with' is absurd for multiple
| reasons, I'll happily acknowledge that proposition is also
| something Elon has been consistent on and likely sincerely
| believes.
|
| Sam Altman supposedly being an original signatory is - if
| true - a more interesting question mark
| pc_edwin wrote:
| I've been following his takes and predictions on AI for the
| better part of a decade and I tend to agree with him.
|
| From what I heard, Elon wanted to take over OpenAI and the
| other founders refused. This is why he left the org and
| backed away from further committed funding.
|
| There's been an ongoing rumour that he brought together a
| bunch of people to develop an alternative (BasedAI) and to me
| this campaign looks like an attempt to slow down competition
| rather than an act of genuine concern.
|
| I can't come up with any other reason to have OpenAI pause
| research for 6 months other than providing competition time
| to catch up.
| jillesvangurp wrote:
| The cat is out of the bag. Some people might volunteer to slow
| down. But others inevitably won't. And that creates this sense of
| FOMO with everyone. I think the reason OpenAI is choosing to
| operate in the open has been exactly this. This way they get to
| control the narrative, act as a first mover in the market, and
| ensure it is done right (by their notion of this). Of course
| others are working on the same topics and research in this area
| is progressing rapidly.
|
| In a way the worst possible outcome is that some less benevolent
| parties than OpenAI start taking the lead here. These parties are
| already active. They are just less vocal and open about the whole
| thing. People can stick their heads in the sand of course but
| that won't stop e.g. the Chinese from moving full steam ahead on
| their agenda.
|
| I'd prefer for them not to be the first or only ones to achieve
| AGI. And that race is very much on right now.
| slowmovintarget wrote:
| > The cat is out of the bag.
|
| As GPT-4 puts it "the confetti has left the cannon."
| yreg wrote:
| I hope this idiom will catch on when talking about LLMs.
| dylan604 wrote:
| If I don't do it, someone else will, so it might as well be me.
| At least I know the morals I have /s
| thatguy0900 wrote:
| This but without the s. What does the world look like if only
| China and Russia have agi because the western world took a
| pause to consider the ethics. This tech is the same as nukes
| in that sense.
| prohobo wrote:
| Maybe a world where China and Russia might need our ethical
| controls to tame the AI they let loose on themselves?
|
| At this point I feel like we deserve to lose, our people
| are insane and can only operate at a shallow level of game
| theory that ensures everyone behaves like teenagers.
|
| That said, I'm not sold on needing to pause. We're likely
| hitting the limit of LLMs for the next while - only
| incremental improvements from here. What's wrong with
| creating a task force and committee for working on AI
| ethics in parallel?
| freediverx wrote:
| Why longtermism is the world's most dangerous secular credo
|
| https://aeon.co/essays/why-longtermism-is-the-worlds-most-da...
| puma_ambit wrote:
| The virtue signal of all virtue signals. We do not live in a
| society where an article like this will have any impact. This is
| merely an attempt to buy time because they're behind and want to
| have a chance to catch up. Work will progress whether they like
| it or not. Finally someone other than Ol' Musky is in the news
| more than him. Don't get me wrong, he's doing and leading great
| work, but for once in more recent times it's someone else
| building the future instead of only him and his memes.
| time_to_smile wrote:
| I think this is more marketing copy than virtue signaling.
| drtz wrote:
| Ask yourself two questions:
|
| 1. What is the ultimate AI we are trying to eventually build?
| 2. What are the consequences if we eventually succeed in
| building such an AI?
|
| For me, the answers would be
|
| 1. A general AI that would equal and surpass human
| intelligence.
|
| 2. God only knows.
|
| Will we get there? Who knows. We do know, however, that the
| consequences are great enough that only a fool would not at
| least attempt to prepare by putting some safeguards in place
| beforehand.
| invig wrote:
| WHAT SAFEGUARDS?
|
| "Someone else should come up with something"
|
| Like there's someone else out there (who aren't the people
| making the thing) who has a better idea on how to keep it
| safe.
| Teever wrote:
| Von Neumann Probe, and the consequences are potentially
| discovering alien life, colonizing the universe, and
| exploring the frontiers of science.
| cuteboy19 wrote:
| The trainings will continue until accuracy improves
| loopz wrote:
| 1. AI as a service to be sold to highest bidder.
|
| 2. Weapons and armament deals to stay relevant in a decaying
| world order.
| yreg wrote:
| >This is merely an attempt to buy time because they're behind
| and want to have a chance to catch up.
|
| Doesn't Elon Musk continue to hold a stake in OpenAI?
| vntok wrote:
| Nope. He made a donation, not a stock buy.
| gremlinsinc wrote:
| For someone who hates 'wokeness' and 'virtue signaling', Elon
| Musk sure does enough of his own.
| vntok wrote:
| How so?
| hn_throwaway_99 wrote:
| "Virtue signalling"? Please. There are a lot of very smart
| experts on that signatory list who definitely don't need to, or
| care about, virtue signalling. Fine, ignore Musk's signature,
| but I don't think luminaries like Stuart Russell, Steve
| Wozniak, Jaan Tallinn or John Hopfield are doing this for
| "virtue signalling".
|
| You can fairly argue that this will be ineffective, but a lot
| of experts in this field have real, "humanity ending" concerns
| about AI, and I think it's a bit of a cop out to say "Well,
| genie's out of the bottle, nothing we can do as we barrel
| towards an unknown and scary future." Even Sam Altman has been
| yelling about the need for AI regulation for a long time now.
| nextaccountic wrote:
| > Even Sam Altman has been yelling about the need for AI
| regulation for a long time now.
|
| That's regulation that makes it harder for any competition to
| show up
| ChatGTP wrote:
| Yeah you have little proof of this really, it's just
| speculation...
| hn_throwaway_99 wrote:
| So tired of seeing this line parroted everywhere without
| much thought given to what it actually means. Yes,
| regulation can add a burdensome layer, and regulatory
| capture can be a real thing.
|
| But regulations for things like nuclear power plants,
| banks, insurance companies, elevator manufacturers, etc.
| are real because society recognizes the grave harm that
| happens when there are no additional checks on the system.
| Nobody says "Oh, all those big nuclear power plants just
| want regulations as a guard against competition." Certainly
| lots of crypto companies _have_ said that about the banking
| system, and we all saw how that ended...
| revelio wrote:
| [flagged]
| hn_throwaway_99 wrote:
| Seriously, why do people do this? It's so useless and
| unhelpful.
|
| Wozniak is just one of the people I mentioned, and as a
| tech luminary who is responsible for a lot of visionary
| tech that impacts our day-to-day, I think it makes sense to
| highlight his opinion, never mind that his name was
| sandwiched between some of the "founding fathers" of AI
| like Stuart Russell and John Hopfield.
| ushakov wrote:
| Wozniak is also a crypto shill
|
| https://finance.yahoo.com/news/steve-wozniak-launches-
| own-to...
| time_to_smile wrote:
| There are plenty of bigger "human ending" concerns on the
| table right now than AI and we certainly aren't pausing
| anything for those.
| hn_throwaway_99 wrote:
| There is no need for "whataboutism". There are plenty of
| very similar missives and warnings against, for example,
| the dangers of climate inaction, and I rarely see people
| claiming that the signatories of the latest IPCC report are
| "virtue signaling".
| biscottigelato wrote:
| Climate change is not even close to humanity-ending. At
| most it would wipe out a few coastal cities. And even that
| is unlikely, because those who scream 'climate change' the
| loudest hold the most assets in coastal prime real
| estate. Humans will still be the apex predator of the
| planet even if there's a human-caused climate change
| catastrophe.
|
| AI literally can end humanity, potentially every single
| individual, and definitely replace humans as the apex
| predator of the planet. It is also consistently rated the
| most likely cause of humanity's end in the next 100
| years.
| https://riskfrontiers.com/insights/ranking-of-potential-
| caus...
|
| We should stop the climate change fear mongering. Yes, we
| shouldn't burn fossil fuels as if it were consequence-free.
| But New York and Santa Monica beach should've been under
| water 20 years ago if the climate alarmists were correct.
| That's a far cry from pretending it's some number 1
| priority. It shouldn't even be close. Letting climate
| distract us from things that will actually end us is the
| dumbest own goal possible for our species.
| popol12 wrote:
| It's not just about sea level or temperature increase,
| it's about humanity screwing all other life forms. For
| instance, we've lost about 50% of insects since 1970; how
| is this "fear mongering"? It's the no. 1 tragedy, by
| far, and it's currently happening, unlike hypothetical AI
| threats https://www.businessinsider.com/insect-
| apocalypse-ecosystem-...
| pazimzadeh wrote:
| Based on our current trajectory the apex predator will be
| an antibiotic-resistant bacterial strain. Probably
| Acinetobacter baumannii.
| [deleted]
| richardw wrote:
| No, AI drives all the others in the long run. Others are
| speed bumps.
| bootsmann wrote:
| Plain, old fashioned historicism. It was wrong 100 years
| ago, it is wrong today still.
| richardw wrote:
| Climate change won't affect AI, it could just make things
| shit for a couple hundred years. AI could solve that.
| Nuclear war might impact AI, but probably only
| temporarily (assuming we survive) and a war isn't
| guaranteed. But AI affects: Everything humans
| read/watch/touch/influence. Forever. Including climate
| change and our odds of nuclear war. There's no way it
| doesn't and once it starts there's no way we can stop it
| forever. Any narrower view is a failure of imagination.
| The outcome of AI is the outcome of humanity for the rest
| of our time in the universe.
| lewhoo wrote:
| Like what ? Climate change ? The EU just voted for petrol
| and diesel car ban. Are we really single-threaded ?
| pazimzadeh wrote:
| - Lack of representation in government means big
| companies fuck up the planet if it's profitable
|
| - People are mostly incentivized to compete, not to
| cooperate
|
| - Antibiotic resistance
|
| - Clean water supply
|
| - etc..
| lewhoo wrote:
| "Lack of representation in government means big companies
| run the world" - is precisely what we're trying to figure
| out here, no ?
| cwsx wrote:
| We're so poorly multi-threaded, even addressing climate
| change has been horribly slow...
| invig wrote:
| In 2035. Maybe we can pause AI development in 2035?
| breakpointalpha wrote:
| His name is third on the signatories...
| w0m wrote:
| OP's point being Musk is trying to stall AI because it's
| taking 'his' headlines.
| engineeringwoke wrote:
| Somebody else has a shiny new toy and he's jealous
| gandalfgeek wrote:
| This would be totalitarian.
|
| This is arguing for a group of people to have the power to decide
| some field is "unsafe" as per some vague, unverifiable criteria,
| then set up a police structure to verify compliance, all outside
| the safeguards of democratic or judicial norms.
|
| Precautionary principle run amok.
| [deleted]
| fdgsdfogijq wrote:
| The incumbents sense a shift in the power structure that they
| cant control
| puma_ambit wrote:
| Run amusk.
| Jayakumark wrote:
| For the sake of argument, let's say you pause the US from doing
| anything on AI. How will you control China, Germany, or other
| countries? This will only lead to the US lagging behind them.
| WinstonSmith84 wrote:
| In hindsight, it could be a good thing so that they catch up.
| Concentrating all the power in one country could be more
| dangerous than all countries having the same weapon.
| reso wrote:
| International treaties are very common. In fact, international
| research agreements are often easier than treaties because
| scientists and researchers are more likely to work together as
| colleagues than nation states.
| lannisterstark wrote:
| >International treaties are very common
|
| and the US isn't a signatory to a fuckton of them, for example.
| Nothing stops PRC from going "Yeah nah fam get wrecked." What
| are you gonna do, stop Chinese imports in retaliation? lol.
| Quarrel wrote:
| Forget countries?
|
| How will you stop me?
|
| I have models on my local computer. I can and will use them.
| They will get better.
|
| You don't put tech genies back in their bottles, that's not
| how it works.
| c7DJTLrn wrote:
| >I have models on my local computer. I can and will use them.
| They will get better.
|
| Well no, not really. You're just consuming the result of
| millions of dollars of investment. You're not going to be
| able to improve it in any significant way with limited
| resources and knowledge.
| lanstin wrote:
| Yeah using the model is one thing but what OpenAI has shown
| is that order of magnitude increases in model training can
| make a big difference. But they are already at the tens of
| millions to train a model, so an order of magnitude bump is
| only possible for a small number of actors. I suspect the NSA is
| busily setting something like gpt 4 up right now.
| Quarrelsome wrote:
| you can deploy it in novel situations and release it
| unethically. Are we not already at the point where we can
| effectively simulate social media chatter and thus people
| or social pressure with publicly available tools?
|
| If the API gate is broken then the cat is out of the bag.
| cbazz wrote:
| [flagged]
| version_five wrote:
| Parent is a troll pasting chatgpt output, see history
| cbazz wrote:
| Could you please stop attacking me? You've been following
| my post history around claiming I'm a bot when clearly
| I'm not. You might find my writing style odd, but it's
| just how I write.
| version_five wrote:
| You can email hn@ycombinator.com to report me
| rumdonut wrote:
| It wouldn't be hard to stop. Chip fabrication is a
| concentrated industry with a high barrier to entry (indeed
| there are only a few companies in the world producing high
| performance silicon using technology made by probably even
| fewer companies). Restrict AI chip making and the materials
| needed, and you'll restrict AI. I can see global treaties
| between large nations effectively controlling production in
| the same way that we have nuclear treaties that prevent
| proliferation or enrichment.
| nuancebydefault wrote:
| AI chip making? I can train an AI on my Intel laptop if I
| wish. If I need more CPU power, I can rent some. The genie
| is out of the bottle and the only way is forward. The
| latest worldwide race.
| avital wrote:
| This isn't accurate. The bottleneck in very-large-scale
| training is BY FAR the communication between devices. If
| you have a million CPUs, the communication cost will be
| significantly higher than with a thousand A100s (perhaps on
| the order of 100x or even more). So this is only possible
| to replicate with very dense and high compute chips with
| extremely fast interconnect.
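|
| A rough back-of-envelope sketch of that gap, assuming ring
| all-reduce and made-up link speeds (illustrative numbers only,
| not benchmarks):
|
| # Ring all-reduce moves about 2*(N-1)/N * payload bytes per
| # device on every gradient sync.
| def allreduce_seconds(params, bytes_per_param, n_devices,
|                       link_bytes_per_s):
|     payload = params * bytes_per_param   # gradient bytes
|     per_dev = 2 * (n_devices - 1) / n_devices * payload
|     return per_dev / link_bytes_per_s
|
| # Hypothetical 175B-parameter model, fp16 gradients:
| print(allreduce_seconds(175e9, 2, 1024, 300e9))      # ~2.3 s
| print(allreduce_seconds(175e9, 2, 1_000_000, 1e9))   # ~700 s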
| permo-w wrote:
| so what do you suggest? more lack of caution? stumbling into
| an AI apocalypse because "oh fuck it, it's too difficult to
| do anything about, why bother?"
| 4k3aa4 wrote:
| I mean, in an alternate universe where atomic weapons were a
| little easier to manufacture at home would it not have made
| sense for governments to aggressively crack down on anyone
| doing anything even remotely resembling building them?
|
| I guess the second question is - would they have succeeded or
| would we all just have died?
| slg wrote:
| And expanding that atomic weapons analogy in a different
| direction, the way to stop things is not just paying
| attention to the research, but the physical tools and
| materials used in the process. Just outlawing the work
| wouldn't be effective, you would need to regulate graphics
| cards and cloud computing which would surely be unpopular.
| tetris11 wrote:
| Here's Nick Bostrom's Vulnerable World Hypothesis that you
| were alluding to.
|
| http://www.nickbostrom.com/papers/vulnerable.pdf
| Ancapistani wrote:
| There is a conspiracy theory out there that nuclear weapons
| _are_ easier to manufacture - that uranium enrichment is
| the "difficult path" to creating them, that the easier
| path is via the accumulation of heavy water, and that the
| Powers That Be(tm) have hidden that fact because uranium
| enrichment provides a plausible means of suppressing
| proliferation.
|
| To be very clear, I do not believe the above to be true. If
| it were, though, the implications on nuclear proliferation
| would be similar to trying to control AI research.
| Basically everyone has to insist that the only way to do it
| is to throw nation-state levels of resources at a corpus
| to train a model, while knowing full well that it can be
| done with much more meager resources.
|
| ... honestly, I'm not sure where I'm going with this
| thought, it just seemed an interesting parallel to me.
| philipkglass wrote:
| In the 1940s uranium enrichment was indeed the "difficult
| path."
|
| Here's a cost breakdown for the Manhattan Project:
|
| https://blog.nuclearsecrecy.com/2013/05/17/the-price-of-
| the-...
|
| You can see that the cost of the uranium enrichment
| program dwarfed the cost of the plutonium production
| program. _All_ of the costs were higher for the Manhattan
| Project than for subsequent nuclear weapons development
| programs, because the Manhattan Project had to try
| everything at once (including dead ends and overpriced
| methods) at large scale to quickly guarantee a usable
| bomb.
|
| Fast forward to the 1970s and more uranium enrichment
| methods were known and costs had come down significantly.
| South Africa built (but later voluntarily dismantled)
| several uranium based nuclear weapons at a cost of $400
| million (1994 dollars):
|
| http://large.stanford.edu/courses/2013/ph241/baxevanis2/
|
| The unique enrichment process used in South Africa was
| still more expensive than modern centrifuge based
| techniques, assuming that a would-be proliferator has the
| technical base to build working centrifuge systems.
|
| The really cheap option remains a graphite or heavy water
| moderated reactor, fueled with natural uranium to produce
| plutonium. That's what North Korea uses -- a tiny 5
| megawatt Magnox type reactor:
|
| https://en.wikipedia.org/wiki/Nyongbyon_Nuclear_Scientifi
| c_R...
|
| It's an open secret that nuclear weapons are now
| technically easy to manufacture. Preventing further
| proliferation is 95% from monitoring/diplomatic
| pressure/sabotage and about 5% from inherent technical
| difficulties.
| mclightning wrote:
| after that story...I am afraid to google accumulation of
| heavy water...what's that.
| dylan604 wrote:
| maybe not all of us, but probably those within the radius
| of someone that chose to use it rather than lose it
| tgv wrote:
| Stop/slow down publications. It's what motivates researchers.
| They need them for tenure, and it's how the tech improves.
| Companies keep their methods more private.
| lannisterstark wrote:
| Why stop here? Why not apply this logic to the internet too?
| "Oh no it's too dangerous, we cant give billions access to
| the internet. Better shut it down."
| dkga wrote:
| I don't know, this seems to me to be half real concern, half
| "let's stop the leader because I can't catch them!"
| twblalock wrote:
| It's too late. Anyone who pauses will simply be left behind.
|
| This is especially salient if you think US domination of tech is
| better for the world than Chinese domination -- if the US pauses,
| China certainly won't. (And yes, US domination or Chinese
| domination of tech really are the only realistic outcomes.)
| paxys wrote:
| So every company should cease all AI development for an
| indefinite period of time until _someone_ has done the research
| and deemed it safe. And who exactly will that someone be? Future
| of Life Institute (whoever they are)? Elon Musk, who has very
| conveniently signed the letter after his public beef with OpenAI?
| Your friendly local government that has no agenda whatsoever? The
| UN?
|
| Efforts like these are at best naive, and at worst agenda driven
| by players who simply want to insert themselves into and control
| the conversation.
| zach_garwood wrote:
| > So every company should cease all AI development for an
| indefinite period of time
|
| The letter quite clearly asks for six months.
| izzydata wrote:
| Articles need to stop using Elon Musk as clickbait headlines. His
| opinion means less than nothing. He is not an expert in any
| field.
| freediverx wrote:
| https://twitter.com/emilymbender/status/1640920996768288773?...
| waselighis wrote:
| All that can be done at this point is government regulation, and
| legislation to enable people to sue for damages caused by AI. Of
| course, billionaires like Musk would never call for such
| measures. Signing an open letter like this is little more than
| virtue signaling.
| AuthorizedCust wrote:
| If we pause, then that gives [adversary] an invaluable
| opportunity to leapfrog us.
| suyash wrote:
| Genie is out of the lamp, it can't be stopped now!
| Emiledel wrote:
| What FLI is asking for seems critical - no one should take
| satisfaction in the disclaimers ARC had to give about emergent
| behavior just with GPT-4. Kudos to OpenAI for getting their
| eyes on it, but that work seems unfinished in the paper, or at
| least the runtime powers that are now being released need to get
| more complete red-team efforts documented if we are to trust its
| isolation/containment and alignment.
| thomastjeffery wrote:
| The dumb criticize the blind.
|
| What an absurd situation! How did we get here? Here are the
| steps:
|
| 1. Large Language Models have been presented as "AI", which
| personifies them instead of describing how they work.
|
| 2. Goals for LLM development were set for the _personified_
| attributes, and not the _actual_ functionality of the real thing.
| OpenAI brags about how GPT-4 _scores_ on human tests: as if that
| has any bearing on the model itself, and not simply its content.
|
| 3. The success of an LLM at a personified goal is determined by
| magic. Does the black box output what I am hoping it will? How do
| I improve it? Weights? Better training corpus?
|
| 4. An LLM is understood to be a person in a black box, and we
| could not possibly understand what that person will do next.
| _Danger!_
| permo-w wrote:
| I'm not concerned about how they work or whether they're
| attaining personhood or not. I'm concerned at what people can
| and will use them for to the detriment of our civilisation
| thomastjeffery wrote:
| How they work is the most critical piece of that puzzle.
|
| It isn't a mystery at all. LLMs implicitly model every
| pattern they can find in the text they are given. If the
| pattern aligns to language, humans categorize it as a
| feature. If it aligns to desired behavior, humans categorize
| it as a feature. If it aligns to undesired behavior, humans
| categorize it as a limitation. What humans categorize as
| features and limitations _do not align_ with what the model
| is. A truth and a lie are structured with identical text
| patterns. The LLM can't categorize the two separately.
|
| The only risk here is that people continue to personify LLMs,
| and place trust in their relatively limited ability to
| fulfill a human role. It's the same category of risk as bad
| security practices. The potential to make a mistake itself is
| not a danger: making the mistake is.
| permo-w wrote:
| you certainly make an interesting point about fact and
| falsehood and how they interpret language; however, I
| disagree that this is relevant and completely disagree that
| personification has any relevance to the most immediate
| dangers
|
| the danger here is that we make extremely powerful tools
| extremely available and people use those tools to hurt
| people, possibly a lot of people, possibly even by accident
|
| the scary part is that hacker news is full of the exact
| people that have the knowledge and capability to do this,
| and look at the reaction to this post: a resounding _fuck
| off_
| tome wrote:
| > people use those tools to hurt people, possibly a lot
| of people, possibly even by accident
|
| If it happens then _hopefully_ it's by accident!
| thomastjeffery wrote:
| I just explained explicitly how and why an LLM is not,
| and never will be, an "extremely powerful tool".
|
| A program that is actually a person in a black box would
| require the level of concern you are showing. Thankfully,
| that is not even remotely what we have here in reality.
| JoshuaDavid wrote:
| Can you give an example of a concrete task that you
| believe that neither GPT-4, nor its successors, will be
| able to accomplish within the next 2 years? Keep in mind
| that ChatGPT Plugins[1] exist.
|
| If you _can't_ think of any tasks that a tool won't be
| able to accomplish within a 2 year period, I question
| your definition of "not extremely powerful".
|
| [1] https://openai.com/blog/chatgpt-plugins
| permo-w wrote:
| have you tried gpt-4? how can you possibly say it's not
| an extremely powerful tool?
|
| what happens when someone determined links it - or
| thousands of its - or thousands of next year's 3x as
| powerful its -- up to the internet with a code
| interpreter and a hard-coded poor intention?
| qlm wrote:
| What could it do that isn't already being done? I ask
| this sincerely. We already have bots constantly attacking
| every server they find 24/7. We already have botnets.
|
| Is the concern that GPT will be able to find and exploit
| e.g. a novel RCE exploit in SSH?
| tome wrote:
| I don't know; what happens? I'd really like to know what
| people are imagining it could do, especially if they've
| got some evidence for it!
| permo-w wrote:
| it's a rhetorical question. think about it yourself. try
| it yourself
| quonn wrote:
| Well, how can it be much worse than a smart human
| "hooked up to the internet" with bad intentions? Other
| than generating comments or spam faster?
| yifanl wrote:
| It's (to some subset of the audience) an extremely
| persuasive tool, which is powerful in its own right.
|
| Its actual capabilities mean very little: if some people
| can be convinced into doing decision making without
| actually doing any decision making by it (or literally
| any tool), that's cause for concern.
| adamsmith143 wrote:
| [flagged]
| thomastjeffery wrote:
| My point is that this fear is irrational: based on a
| misunderstanding about what LLMs can do, which itself is
| based on a misunderstanding of what LLMs are.
|
| Is there a way I could have been more clear? Are you
| disinterested in the content of my comment, or the
| presentation? The only thing I can learn from _your_ comment
| is that _mine_ didn't live up to your mysterious standards.
| dmarchand90 wrote:
| https://news.ycombinator.com/newsguidelines.html
| marcosdumay wrote:
| > Goals for LLM development were set for the personified
| attributes, and not the actual functionally of the real thing.
|
| Well, this is for honest reasons. The goal of a chatbot is to
| beat the Turing test. It has always been.
|
| Those chatbots didn't actually beat it, but it's clear that
| it's due to a technicality (they are easy to spot). They can do
| empty chats on the same level as a human.
|
| (And so it turns out that the test wasn't as useful as it was
| believed to be at first.)
| archon1410 wrote:
| Some "alignment folks" have suggested the government try to start
| treating GPUs like Uranium or nuclear reactors to stop anyone
| from developing AIs. Not sure if that's implementable though--
| cutting-edge GPUs are all manufactured in China. I doubt they'll
| be able to coordinate with the CCP on this issue. Do GPUs use
| those fancy chips that are only made in Taiwan using equipment only
| a single Dutch company makes? It might be possible to put a stop to
| AI development then.
|
| > Emad Mostaque, CEO, Stability AI
|
| Et tu?
|
| Anyway,
|
| > Should we let machines flood our information channels with
| propaganda and untruth? Should we automate away all the jobs,
| including the fulfilling ones? Should we develop nonhuman minds
| that might eventually outnumber, outsmart, obsolete and replace
| us? Should we risk loss of control of our civilization?
|
| Yes, yes, yes, and yes.
| MacsHeadroom wrote:
| >Do GPUs use those fancy chips that only made in Taiwan using
| equipment only a single Dutch company makes?
|
| Yes, but it's a German company (Zeiss Optical).
|
| >Yes, yes, yes, and yes.
|
| Agree.
| labrador wrote:
| Elon thought AI would enslave us, but turns out it empowers us.
| Now he wants to shut it down and have it all to himself. It's
| getting to be that the opposite of whatever Musk wants is
| the correct course of action for the people.
| WillPostForFood wrote:
| _the opposite of what Musk wants is the correct course of
| action for the people._
|
| Ok so you want to move back to gas combustion engines, stop
| deploying internet to rural and third world countries, clamp
| down on free speech, and make sure there are no tunnels, just
| elevated highways dividing cities. Is that about right?
| labrador wrote:
| You're thinking of the old Musk. He used to be a force for
| good before his mental break, which I timed when Grimes left
| him for Chelsea Manning and his oldest son came out trans.
| That's why I used the words "it's getting to be". Everyone is
| entitled to a mid-life crisis and Musk is no different, but
| supporting Putin seems a little extreme to me.
| 0max wrote:
| Frankly, I think Mrs. Boucher probably gave Elon too many
| stimmies, psychedelics, and expectations he can't live up
| to. I remember when she got into a spat with Azealia Banks
| because she had to trip sit Elon when they were trying to
| collaborate on a music project.
| labrador wrote:
| Yeah, I forgot about the psychedelics. I've done a fair
| number myself, but always in the woods or desert hoping
| for a spiritual experience, not in a business setting
| expecting them to help me make genius financial
| decisions.
| kaba0 wrote:
| He was never "good", he was/is just an egomaniac who needs
| attention.
| kaba0 wrote:
| That guy didn't do anything useful, he has money and some of
| his companies made a few successful products _in spite of_ his
| being there.
|
| You honestly couldn't mismanage a company more than he
| currently does with Twitter - a goddamn cabbage would have
| done a better job, by literally not doing anything.
| HarHarVeryFunny wrote:
| It's not exactly free speech if you have to pay to be heard
| (new Twitter policy - only paying members' Tweets will be
| promoted). Hyperloop turned out to be Tesla-in-a-tunnel. He's
| got some smart folk working at SpaceX to be sure.
| tgv wrote:
| > Elon thought AI would enslave us, but turns out it empowers
| us.
|
| I'm not sure how you reached that conclusion. GPT3 is 2 years
| old. It has ample time to enslave us. I'm also still waiting
| for the "empowering" bit. Haven't really noticed it, I must
| say. Or are you referring to a few code completion tricks?
| WinstonSmith84 wrote:
| To be honest, I feel enslaved at the present time by the
| elites. Maybe AI won't help, maybe it will. Worth a shot.
| Also, I'd prefer being enslaved by a rational AI rather than by
| humans driven by greed and laziness. At the very least, I'd
| hope for higher political decisions to be taken by AI, or
| that it may have a say for everyone to see.
| labrador wrote:
| If you don't feel empowered by AI, imho you're doing it wrong
| and will probably be one of those left behind to be enslaved.
| I'm confident in my ability to stay on top, with help from
| my own AI.
| permo-w wrote:
| so when some societal reject with a CS degree sets up an
| intention bot to hack into/socially engineer the release of
| more GM viruses - or worse - what's your personal AI going
| to do about that?
|
| this isn't about being enslaved, it's about being hurt by
| the actions of a human-like you-who thinks that AI is their
| route to the top, or just wants to watch the world burn. AI
| enslavement makes for good films, but we'll all be long
| dead before that happens
| labrador wrote:
| AI is a tool. I'm a swords to plowshares kind of guy
| myself, but if I have to use it to battle bad guys, so be
| it.
| permo-w wrote:
| I sincerely hope that you're not in any actual position
| of influence
| labrador wrote:
| You called me delusional in another comment not made to
| me so let me explain. I used to be in a position of
| influence, but I'm an old man now, a recovering alcoholic
| who's had his ego smashed by alcohol, old age and a
| spiritual practice similar to Ram Dass' "Becoming
| Nobody."
|
| With that setup, what's surprising to me is that I see
| the potential for the merging of myself with AI to form a
| new form of consciousness. It feels natural. Not having
| much of an ego, I don't feel threatened or afraid. I
| notice that a lot of younger people, mostly men of some
| position and status, are afraid of AI, I think because it
| threatens their status and prestige. They want to concern
| troll about how it threatens everyone else, but it's
| about them.
|
| So go ahead, call me delusional. You're probably right.
| Old brains don't always think properly. But maybe this is
| what Jesus meant when he said the meek shall inherit the
| Earth.
| permo-w wrote:
| it doesn't sound to me like your ego has been smashed in
| any sense, my friend
| labrador wrote:
| The pesky thing never really does go away. It gets
| activated when I get defensive. I suppose I could choose
| not to engage, but I get lonely.
|
| p.s. I also currently have Covid which is making me
| angry. Be aware if you live in San Jose, there seems to
| be a new highly contagious strain. I went three years,
| fully vaccinated, without catching it and then got it at
| the Trader Joe's across from the airport
| nuancebydefault wrote:
| Take it easy, these comments are mere conversations
| between mortals.
|
| Your AI overlord
| danaris wrote:
| Or _you 're_ going to find that suddenly the centralized,
| proprietary AI from a for-profit company that you rely
| on...
|
| a) Puts up a paywall requiring an active subscription to
| continue use
|
| b) Shuts down because not enough people subscribed
|
| c) Shuts down the services you use because they want to
| move on to some new shiny
|
| d) Takes all the data you feed it and creates a high-
| profile, well-funded competitor to whatever you're working
| on
|
| e) Starts poisoning your output once it's clear that you're
| competing with them
|
| f) Starts poisoning your output to create subtle propaganda
| for their own products or political beliefs
|
| g) Starts poisoning your output to create subtle propaganda
| for anyone who will pay them
|
| ...And those are just the failure modes I can think of in a
| couple of minutes off the top of my head.
| permo-w wrote:
| the comment you're replying to is pretty delusional to say
| the least, but I disagree that they aren't empowering _now_.
| ChatGPT is an extremely useful source of education that
| bypasses the mess that is Google, and it's much more than
| just code completion tricks. gpt-4 can literally write long,
| complex programs that generally work the first time you run
| them
| mrguyorama wrote:
| Ah good, let's encourage people to "learn" from the text
| generator that can't even be forced to not lie and
| misinform. I've seen plenty of cases where that "long and
| complex program" includes things like libraries that don't
| exist
| permo-w wrote:
| I fail to see how this is any different from a human
| author
| mrguyorama wrote:
| Authors have an idea of right and wrong, true and false.
| Everything they say, they have some internal idea of how
| "sure" they are repeating the truth, or when they are
| purposely misinforming or lying. Most people think
| misleading people is bad, and try to avoid it. And if
| they don't avoid it, they can be punished, ignored,
| discredited, etc.
|
| It is not possible to teach anything like ChatGPT to only
| tell things that are the truth, because the model has no
| concept of that. Even if you tell ChatGPT to act like
| someone who only tells the truth, it can still generate
| falsehoods. Hallucination is a very apt word for the
| phenomenon as, to the model, lies and falsehoods and
| misleading statements have the same validity as absolute
| fact. They both become valid sentences. The very concept
| of language, as a medium of information exchange, does
| not provide any info about information validity. It's out
| of band.
|
| When ChatGPT misleads someone, you cannot convince it to
| do that less, even if it """wants""" to, no matter how
| much you punish, encourage, require, etc.
| sp527 wrote:
| You're an absolute fool if you trust anyone at OpenAI either.
|
| "AI will probably most likely lead to the end of the world, but
| in the meantime, there'll be great companies." - Sam Altman
| dragonwriter wrote:
| > Elon thought AI would enslave us, but turns out it empowers
| us.
|
| Absent other non-technical changes, AI along the lines that the
| main commercial proponents are pursuing empowers the narrow set
| of capitalists who control it to more efficiently extract value
| from and exert power over everyone else.
|
| "Empower us" and "enslave us" are both accurate, for different
| values of "us".
|
| (A few people riding the top of the capitalist-control-of-AI
| wave are trying to sell it as an existential threat to
| everyone, and to use that as an excuse to concentrate control
| of AI in their hands, to make social change in the _opposite_
| direction needed to mitigate the threat they themselves pose
| with AI; is this a real threat? If the shared threat _is_ real,
| it is nowhere near as imminent and probable as the threat posed
| by those who are selling it.)
| gremlinsinc wrote:
| this is like yahoo in 1999 asking Google to pause their search
| engine research and activities so they can catch up and build
| something equally as good ...
|
| or, ironically Google doing the same thing to bing/openai in
| 2023.
| CMay wrote:
| The problem with this is that at present, it is largely good
| people who are ahead in AI development and that gap might widen
| over time if China has difficulty acquiring large numbers of high
| end chips. China and Russia feel that they need to cheat in order
| to compete in many ways. China does invest a lot into AI and some
| good work does come from there, but any advantage they can gain
| that their opponent will not take is seen as a step closer to
| greater world influence.
|
| If well meaning people pause, they cede some ground to those who
| are investing heavily in "western" failure and the pause could
| accelerate the exact thing it's trying to delay. On the flip side
| of that coin, if all advanced AI work is already infiltrated by
| foreign agents then it's also possible that pausing could slow
| down their greatest gains.
|
| In either case, there may be an argument for holding back
| mainstreaming/productization for longer than is already normal or
| establishing an independent AI industry productization review
| organization that gets early access to any AI products which
| could qualify as concerning.
|
| Unfortunately, governments heavily oriented towards censorship
| may be less afraid of spreading dangerous products out there
| while banning them domestically, because it might be seen as a
| method to legitimize authoritarian control by pointing to
| disasters in those other places.
| omnimus wrote:
| Ah the good people. Of course i trust those.
| meroes wrote:
| Good people brought us the surveillance economy according to
| Jaron Lanier.
| jmull wrote:
| Not sure why people think AI development can't be paused.
|
| A letter alone probably won't do it, but a movement -- perhaps
| coalescing around the principles laid out in letter signed by
| influential people -- can.
|
| Governments can regulate it. Consumers and companies can, if they
| want, punish companies that keep moving forward on AI.
|
| People can keep making AIs but would have difficulty releasing
| them, which greatly reduces their impact.
| WXLCKNO wrote:
| > governments can regulate it
|
| Sure let me regulate it and slow down development of US based
| OpenAI while China doesn't and takes over the world.
|
| There is no stopping it now, it's an uncertain negative outcome
| whereas a nuke blowing up is quite certain. For that reason
| alone the arguments won't be unanimous enough for development
| to be paused.
| jmull wrote:
| > ...while China doesn't and takes over the world
|
| First of all, China will regulate as much as anyone, probably
| more.
|
| Setting that aside though (if not China then India or Russia
| or Iran, etc.), you're presuming an unregulated approach will
| out-compete a regulated one, which seems rather unlikely to
| me. Think banking systems, medical research, transportation
| systems. A lot of things work better with a good framework of
| organization.
| chpatrick wrote:
| > First of all, China will regulate as much as anyone,
| probably more.
|
| For civilian use, maybe. But imagine if you had your own
| GPT-4 and you tasked it to keep tabs on everyone based on
| surveillance footage. This is 100% implementable right now.
|
| "Here are 1000 pictures of comrade X. Please summarize his
| daily activities and emphasize any reactionary behavior."
| mewpmewp2 wrote:
| > People can keep making AIs but would have difficulty
| releasing them, which greatly reduces their impact.
|
| And it would then only increase impact of bad actors, who can
| and will keep using the AI.
| barking_biscuit wrote:
| >Not sure why people think AI development can't be paused.
|
| My suspicion is that if heavy regulation were to be put on it
| and all the corporate entities involved backed off, there are
| enough motivated and capable individuals that have home labs
| that would continue to work on it in private. It might be more
| difficult and slower, but distributed training
| protocols/methods (or advancements in them) might make it
| possible to pool a few thousand (or tens of thousand) high end
| consumer GPUs to continue to push the envelope.
|
| My gut feel is that just as there are certain books, games and
| movies that get banned and become illegal to possess, there are
| probably (or potentially) going to be certain models that
| you're not allowed to be in possession of due to what they can do.
| So... grab a copy of them while you can.
| zug_zug wrote:
| Maybe that's progress though.
|
| I don't think the goal is to never make AI advancements. I
| think the goal is to make progress slow enough, and
| controlled enough, that we have time to react if it turns out
| to be a paperclip optimizer.
|
| One way to do this is to develop AIs in a lab with an airgap
| and let it _think_ it has internet access and watch what it
| does. Once it figures out that it's a honeypot, reset it and
| improve the honeypot.
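|
| A minimal sketch of that kind of honeypot, assuming the model's
| actions arrive as tool calls (all names here are hypothetical):
|
| # Fake "internet" that logs what the model tries to do
| # instead of actually doing it.
| CANNED = {"http": "<html>ok</html>", "dns": "93.184.216.34"}
|
| class HoneypotNet:
|     def __init__(self):
|         self.log = []   # everything the model attempted
|     def request(self, kind, target, payload=None):
|         self.log.append((kind, target, payload))
|         return CANNED.get(kind, "")   # canned reply only
|
| def run_episode(model_step, net, max_steps=100):
|     # model_step stands in for "ask the model what to do next"
|     for _ in range(max_steps):
|         action = model_step(net.log)
|         if action is None:
|             break
|         net.request(*action)
|     return net.log   # reviewed by humans, then reset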
| throwthrowuknow wrote:
| The publicly released AIs are not the ones we need to worry
| about. Pushing research underground will worsen the situation
| jmull wrote:
| It's wide AI use that will disrupt society. Underground AI
| will not be widely used.
|
| Let's not imbue individual underground AI with magical
| abilities.
| mewpmewp2 wrote:
| > Underground AI will not be widely used.
|
| It will 100% be used and improved for nefarious purposes.
|
| By criminal groups or competing countries themselves.
|
| It's worse if bad actors are using it in secret to
| manipulate the masses rather than it being completely
| transparent to everyone.
|
| Imagine only bad actors being able to do deep fakes and
| regular people not knowing it was possible.
| dragonwriter wrote:
| > Underground AI will not be widely used.
|
| "Underground" (that is, trained, at least in part, outside
| of the control of centralized vendors) AI is widely being
| used _today_ in the AI art space, I don't see why that
| won't be the case for LLMs once there are modestly
| performant LLMs with decent end-user packaging that run
| tolerably on reasonably middle-tier consumer devices, which
| seems to be rapidly approaching.
|
| "Underground" AI won't be used _by big, risk averse
| institutions_ if centralized offerings have the capacity
| and freedom they need without too much of a price gradient.
| OTOH, it likely _will_ be used by "move fast and break
| things" orgs as they ramp up (potentially even if
| regulated, if they can find a way to "launder" their use)
| until they get big and risk averse - perhaps making it less
| underground in the process.
| medvezhenok wrote:
| Yeah, part of the concern is job automation / mass-
| unemployment which is less likely if AI research is slowed
| down / driven underground.
|
| We need to manage the transition of the labor force and
| ideally not have it happen overnight.
| mewpmewp2 wrote:
| Job automation, mass unemployment needs to be solved by
| other means.
|
| If it causes this, some sort of government-provided
| insurance mechanism needs to appear so that people
| whose jobs are automated are guaranteed to receive income,
| based on what they used to earn, for the transition period,
| and eventually everyone would be given UBI.
|
| So a task force has to be put together that will
| determine which jobs are affected and government should
| start offering paid retraining for these individuals
| until everything is automated.
| dragonwriter wrote:
| The research for OpenAI is already underground; nothing about
| GPT-4 more substantial than PR has been released.
|
| Sure, you can _use_ it via OpenAI APIs and partner services.
| That doesn't make the research anything but opaque.
| Zetobal wrote:
| Mhm... there is a "John Wick, The Continental, Massage Therapist"
| as a signee, so either their curation is crap or it's an Easter egg;
| still not very reassuring.
| TheMaskedCoder wrote:
| I for one wouldn't try to develop AGI if John Wick didn't want
| me to...
| ruslanmsv wrote:
| Please don't stop the progress and testing in prod. Don't listen
| to luddite IT folks
| readonthegoapp wrote:
| i'm not willing to believe that AI is more dangerous than global
| warming and nukes, but sure, why not?
|
| we've worked hard to accelerate both global warming and nukes, so
| AI is no different - yet.
| longitudinal93 wrote:
| Those that have any doubts about the necessity of this initiative
| should take the time to watch the Center for Humane Technology's
| "The AI Dilemma" from March 9th.
|
| https://vimeo.com/809258916/92b420d98a
| [deleted]
| dia80 wrote:
| Great video, thoughtful and persuasive. Highly recommended to
| broaden your perspective, it did for me.
| seydor wrote:
| That must be a world first: some rich people asking other rich
| people to not do rich people things in public
| xg15 wrote:
| > _Signatories:
|
| - Elon Musk, CEO of SpaceX, Tesla & Twitter [and ex-co-founder of
| OpenAI]
|
| - Emad Mostaque, CEO, Stability AI
|
| - Victoria Krakovna, DeepMind, Research Scientist, co-founder of
| Future of Life Institute_
|
| Some people have made comparisons between the crypto hype and the
| current AI wave.
|
| I think I have to hand it to the crypto people that they are at
| least internally consistent and honest with themselves: I think
| many genuinely believe the tech is adding something good for the
| world. Others are grifters, but at least they know they are
| grifters and stick to it, I guess.
|
| Meanwhile, statements from AI leaders lately remind me of some
| psychotic axe murderer from a Stephen King novel, who yells "Stop
| me! Stop me! Why won't anyone stop me?" while running through a
| crowded street and swinging his axe.
|
| If this tech has such potential to destroy humanity as we know
| it, and it's so important to keep it in check, why did you
| develop it in the first place?
| CatWChainsaw wrote:
| It was too exciting _not_ to do it, so they did.
|
| "The Great Filter is a marshmallow test."
| ConanRus wrote:
| [dead]
| [deleted]
| s1mon wrote:
| I was a bit baffled by Elon Musk's (apparent) signature on this
| letter, as I thought he still had a connection to Open.ai. Then I
| did some basic research and realized that this is likely just
| jealousy and frustration from his failed takeover of Open.ai.
|
| https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...
| dahwolf wrote:
| If you look at just the _potential_ impact of current AI or
| slightly better ones that are around the corner...
|
| If you see cognitive work as a distribution, it's not far-fetched
| to think that it can take out the bottom half and after that ever
| more. It's naive to think that this bottom half will become some
| super AI coder, that's what the top half might do, or more likely
| the top 10%.
|
| Every current threat in the digital domain (misinformation,
| polarization, cyber crime, and the like) might grow 100 or
| 1,000 times over. Are we ready for that?
|
| Unique human values such as creativity and simply just
| communication are on the chopping block too. Is it worth it? What
| remains of us? Biological prompters? Why not link it to our actual
| brains then, to complete the Matrix scenario?
|
| What happens to truth? Or culture? Does it matter anymore? What
| about one private company being in charge of such nuclear
| scenarios?
|
| We're talking existential impact here. This is like dropping a
| nuke without knowing what a nuke is or what it does.
|
| It's pathetic that private individuals have to write this letter.
| Don't you think that in an exponentially disruptive world we
| should have some kind of formal oversight? Instead of an
| 80-year-old guy asking, 10 years after the blast, what WiFi is?
| hintymad wrote:
| Does the letter cite any specific danger of current LLMs,
| backed by examples? I hope this is not another Gebru moment where
| a bunch of people claim the moral high ground without substance.
| karmasimida wrote:
| There is no stopping this.
|
| How to stop? No one will stop. The world won't stop or pause on
| making LLMs more useful until they can't be further improved.
|
| LLMs aren't nuclear weapons. I don't know how they could be
| supervised, TBH. The models and the data are mostly public; given
| time, people will definitely get there regardless.
| sinuhe69 wrote:
| The hype is immense, and so the disillusionment will be. Just let
| the LLM train go and we'll see. Thanks to that, perhaps next time
| people will stop and think more deeply about the real issues.
| Nobody will stop now, but huge investment + economic recession is
| a good recipe for a bubble.
| gedy wrote:
| Maybe a strange take, but one silver lining to this is AI models
| can at least capture the essence of human culture and arts in a
| fairly compact format. Might be a good "backup" of sorts if the
| world falls apart.
| scanr wrote:
| The cat may be out of the bag. I was wondering today whether the
| race should be on to create a benevolent AI to defend humanity
| from the less benevolent ones.
| icrbow wrote:
| If you don't know how to make one already you will be racing
| _towards_ the precipice, not steering away from it.
| scanr wrote:
| Yeah. Pretty high stakes. If we assume that work is already
| underway to create unaligned AIs, what's the alternative?
| fknorangesite wrote:
| And that's the beautiful part - when wintertime rolls around,
| the gorillas simply freeze to death.
| titaniumrain wrote:
| "If I had asked people what they wanted, they would have said
| faster horses." - HENRY FORD
|
| The world moves on. Dinosaurs either die or evolve.
| brokencode wrote:
| If US companies don't advance this technology, companies in other
| countries will. Or possibly even state actors.
|
| Eventually, somebody will automate most jobs. Society will need
| to adjust to this. Our high-minded ideals about preserving
| fulfilling work will evaporate as soon as China makes a major
| breakthrough.
|
| I think AI technology will become the nuclear arms race of the
| 21st century. How it turns out could be fantastically bad. But I
| think it is inevitable.
| biscottigelato wrote:
| I think it makes sense to debate whether this pause can or cannot
| work and the game theory behind it.
|
| But those who trivialize the potential danger of AI and
| compare it to the likes of climate change are ignorant at best. AI
| has been consistently ranked as the highest potential cause for
| the extinction of the entire human race (by experts, a random
| survey of the population will likely say climate change, sadly).
| (https://riskfrontiers.com/insights/ranking-of-potential-caus...)
|
| Human-caused climate change, at its worst possible
| outcome, would not even make a dent in the human population on a
| multi-century time scale. We also have a lot of solutions in
| place for climate change that merely require time and investment
| for adoption. Climate change's impact on humanity is less than
| the alarming birth rate we are seeing today (which, also, we have
| no solution for...).
|
| We literally have no solution for AI Alignment, and this is with
| a lot of smart people working on it. From Bostrom to Eliezer and
| many others (check https://lesswrong.com). A lot of experts have
| given up because they see no way out and are merely thinking
| about how humanity should end with dignity. We can't even figure
| a way out of the prisoner's dilemma on doing a 6-month pause.
|
| Comparing AI Alignment with Climate Change trivializes the matter
| and only illustrates the ignorance of the populace. Another reason
| why we will not solve AI Alignment and will likely go extinct.
| zelphirkalt wrote:
| The thing we needed last year is regulation enforcing ethical
| behavior and transparency. What data were the models trained on?
| Did you have permission to use that data for that purpose? Where
| is your form, naturally working without third-party trackers, in
| which people can request their data to be removed from the
| training data and the learned model (hoho, that will be a
| challenge!)? How can people check, whether their data is in the
| training data? And so on.
| quxbar wrote:
| People have no idea what AI is, they seem to go entirely off of
| the name. An intro course in Natural Language Processing reveals
| that while impressive, GPT-4 is a reflection of the training
| data. It is not conniving, it does not cause harm. The medical
| benefits alone are staggering.
|
| What we are about to do, as a self-inflicted wound, is let a lot
| of simple algorithms make real decisions as cover for plain human
| cruelty. The software platforms landlords are using to
| collectively raise rents are already an obvious example.
| Havoc wrote:
| And I'd like world hunger to be ended. Frankly my wish seems more
| plausible.
| permo-w wrote:
| yes so let's not bother trying to stop an existential threat.
| have you considered the possible consequences of all this?
| bloppe wrote:
| Very hard to say how threatening AI really is. We need more
| research to determine that. Even if it is an existential
| threat, a moratorium will not make it go away. That's why we
| still have nukes.
| permo-w wrote:
| it really is not hard to say. have you played around with
| the technology? have you sat down and thought about it
| yourself?
|
| what we have is a continuously improving system that can
| write code and solve problems with intent, and is starting
| to have access to the internet. I don't think AI is
| inherently evil or will do anything harmful on its own, but
| it is a tool that humans will use to harm each other,
| deliberately or otherwise.
|
| if you need research, go and read some of the myriad
| science fiction on the topic, or just imagine an "intention
| bot", where you set up an LLM with a goal, then have it
| iteratively write and execute programs to attain that goal.
|
| also there undoubtedly is research on the topic, but I
| don't need to read it to know that completely controllable
| brains with access to the internet, that don't get tired or
| bored or have to sleep or eat are an extremely dangerous
| thing
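|
| A minimal sketch of the "intention bot" loop described above;
| ask_llm and run_sandboxed are hypothetical placeholders, not a
| real API:
|
| def intention_bot(goal, ask_llm, run_sandboxed, max_steps=10):
|     history = []
|     for _ in range(max_steps):
|         # ask the model for a program meant to advance the goal
|         code = ask_llm(f"Goal: {goal}\nSo far: {history}\n"
|                        "Write a program that advances the goal.")
|         result = run_sandboxed(code)   # run it, keep the outcome
|         history.append(result)
|         done = ask_llm(f"Given {history}, is '{goal}' met? yes/no")
|         if done.strip().lower() == "yes":
|             break
|     return history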
| wilsonnb3 wrote:
| > if you need research, go and read some of the myriad
| science fiction on the topic
|
| the problem is exactly this - people who can't
| distinguish between science fiction and reality.
| permo-w wrote:
| the problem is exactly this - people who can't understand
| subtlety or context
| layer8 wrote:
| The signatories will be on the basilisk's death list for sure.
| juancn wrote:
| Stopping now would be extremely dangerous and borderline stupid.
|
| If you stop now, you're just left behind, because there's no way
| everyone will stop.
|
| At this point the only logical course of action in an adversarial
| situation is to double down and keep researching, otherwise some
| other country or culture with different (and possibly worse)
| values ends up dominating the technology and you're left behind
| in the dust.
|
| The genie is out of the bottle; there's no putting it back in.
| reasonableklout wrote:
| I don't agree at all. It's totally fair for rival "countries or
| cultures with different values" to coordinate on matters that
| threaten everyone's survival. There are many examples of
| international agreements on technologies that fit this
| description. For example:
|
| - International treaties to avoid nuclear proliferation and
| development, as other commenters have pointed out
|
| - National moratoriums on gain-of-function research
|
| - Regulations on biotechnology related to human cloning and
| human gene editing, and industry/academic norms set by
| conferences such as the Asilomar conference
|
| - International treaties on climate change, such as the
| Montreal Protocol on CFCs and the hole in the ozone layer
|
| Even if we assume that international coordination fails, 2
| facts give us some breathing room to pause (not stop!) large
| experiments to at least figure out what we're doing and how to
| adapt society:
|
| 1. American companies are ahead of Chinese companies (I assume
| that's what you mean by "other country or culture") right now
| by at least a few years. The rest of the industry is barely
| catching up to GPT-3 , which came out in 2020.
|
| 2. China cannot unilaterally continue the race right now
| because their compute supply chain critically depends on
| technologies that the West has monopolies over, like sub-10nm
| silicon fabrication, advanced GPU technologies, and ultraviolet
| lithography. We're already using this lever, actually! For
| example, Nvidia exports of A100 and H100 GPUs are no longer
| allowed to China at scale.
| mritchie712 wrote:
| it's orders of magnitude easier to regulate and observe large
| scale nuclear projects than thousands of hackers spread all
| over globe.
| GartzenDeHaes wrote:
| Nvidia chips are designed and manufactured in Taiwan, not
| "the west."
| lannisterstark wrote:
| "International treaties to avoid nuclear proliferation and
| development, as other commenters have pointed out"
|
| ...you do realize that countries haven't signed that, right,
| and nothing stops them from pulling out either, right?
| ChatGTP wrote:
| If you have time, I really think this short segment from an
| interview with Max Tegmark is worth a watch [1]. This
| particular clip is about autonomous weapons.
|
| It's quite relevant to your argument, interested on your
| thoughts.
|
| [1] https://www.youtube.com/watch?v=RL4j4KPwNGM
| ren_engineer wrote:
| this is basically the same logic behind nuclear weapons, and AI
| could potentially be even more dangerous if it kept advancing
| at the rate we've seen in the last few years. In theory the
| massive amount of compute needed to train and run these at
| scale could be tracked/regulated similarly to how nuclear
| refinement facilities are
|
| your suggestion is that stopping nuclear proliferation
| shouldn't have even been attempted, despite the fact it
| actually worked pretty well
| quonn wrote:
| But that would only buy us 10 years. Eventually that massive
| amount won't seem very massive anymore compared to what will
| be available in consumer devices.
| nopinsight wrote:
| They are calling for a pause, not a stop.
|
| It's quite clear that OpenAI has a significant lead over
| everyone else. The only other country outside the west that
| even has a chance at developing something better than GPT-4
| soon is China. China has a pretty cautious culture as well so
| it's quite possible that a bilateral moratorium can be
| negotiated with them.
|
| ADDED: Even without considering X-risks, China's rulers cannot
| be pleased with the job displacement risks that GPT-4 plus
| Plugins may cause, not to mention a more powerful model.
|
| They have trained a huge number of college graduates and even
| now there are significant unemployment/underemployment issues
| among them.
|
| ADDED 2: If you think many companies can do it, please identify
| a single company outside the US/UK/China with the capability to
| train an equivalent of GPT-3.5 _from scratch_.
| mewpmewp2 wrote:
| If OpenAI as a startup was able to do it, certainly full
| countries can do it if they see the evidence and will invest
| in it.
| ren_engineer wrote:
| >It's quite clear that OpenAI has a significant lead over
| everyone else
|
| if their lead was significant they wouldn't have admitted to
| not releasing more info about GPT-4 in their paper due to
| commercial reasons. Whatever secret sauce they have
| apparently isn't that significant or they wouldn't be afraid
| to talk about it
| NLPaep wrote:
| Nah, it's just easy to copy if you lay it out. It's
| software.
| nopinsight wrote:
| Discovering Newton's Laws of Motion was much harder
| than learning them from others.
|
| Were Newton's Laws of Motion significant progress?
| credit_guy wrote:
| > The only other country
|
| OpenAI is not a country, it's a company.
|
| GPT models on par with GPT-4 can be trained, well, by
| companies. You don't need nation-state levels of resources.
| zarzavat wrote:
| Training an LLM with GPT-4-like capabilities is very hard.
| Most AI researchers are concentrated in a few countries. At
| the moment the countries with the vast majority of the
| expertise are US, UK and China.
| MacsHeadroom wrote:
| It's not remotely intellectually challenging to replicate
| GPT-4. It just takes a lot of GPUs, something plenty of
| people all around the world have access to.
|
| GPT-2 and GPT-3 are the same algorithm based on the same
| open source library. GPT-4 most likely is as well. You
| can literally fork the repo and if you have enough VRAM,
| cuda cores, and time, you will get GPT-4. High Schoolers
| could do it. Amateurs are already replicating LLaMA,
| which is more complex than GPT and not even a month old.
| (it's just smaller = fewer GPUs required)
| mirker wrote:
| You didn't mention how to gather high quality data.
| OpenAI has never and will never release that.
| og_kalu wrote:
| If you count research, OpenAI didn't have a lead until
| GPT-4, never mind a significant one. Most of this is scale.
| Their lead is a few months, tops.
| nopinsight wrote:
| Engineering such a system is a harder challenge than many
| types of research. Even the mighty Google, the leader in AI
| research by many metrics, is catching up.
|
| Another example is Meta only finishing OPT-175B, a near
| equivalent of GPT-3, two years after it.
|
| ----
|
| Added to reply:
|
| GPT-4 got much better results on many benchmarks than PaLM,
| Google's largest published model [1]. PaLM itself is
| probably quite a bit better than LamDa in several tasks,
| according to a chart and a couple of tables here:
| https://arxiv.org/abs/2204.02311
|
| It's unclear that Google currently has an internal LLM as
| good as GPT-4. If they do, they are keeping quiet about it,
| which seems quite unlikely given the repercussions.
|
| [1] GPT-4's benchmark results vs PaLM:
| https://openai.com/research/gpt-4
| og_kalu wrote:
| Google was not catching up before GPT-4. That's my point,
| lol. All the SOTA LLMs belonged to Google via DeepMind
| and Google Brain/AI right up to the release of GPT-4:
| Chinchilla, Flamingo, Flan-PaLM.
| nopinsight wrote:
| GPT-4 was finished in the summer of 2022. Several
| insiders gave interviews saying they were using it and
| building guardrails for it for the last 6 months or so.
|
| OpenAI doesn't publish as much as Google so we don't
| really know how long or in what periods they were ahead.
|
| And there's no organization outside the US/UK/China with
| the same caliber of AI engineering output as Google.
| winternett wrote:
| They need to stop testing in PROD, and they also need to prove
| that AI can function without an Internet connection and without
| any human intervention at all... AI should not be deployed
| deeply until it works more flawlessly, but the same people who
| hyped and foisted Crypto onto everything and everyone
| (including vital world banking infrastructure) are at the wheel
| on AI marketing now, and that's not good at all.
|
| Those things, as well as vital testing as a gate for
| deployment, aren't being upheld, and that's exactly what makes
| promises grandiosely destructive and keeps outcomes harmfully
| and wildly unpredictable.
|
| I think in months to come we'll find out that many of the great
| new AI products launched will prove themselves to simply be
| inauthentically scripted, fraud-based solutions backed by hidden
| human intervention because of this environment of unchecked
| expansion and eager over-deployment.
| vasili111 wrote:
| If the US pauses AI, the rest of the world will not. US business
| will be at a disadvantage. The US cannot cede the lead in such an
| important technology to other countries.
| jl6 wrote:
| All they need is for Gerard Butler to sign the letter and our
| course of action will be clear.
| randomwalker wrote:
| Rebuttal: https://aisnakeoil.substack.com/p/a-misleading-open-
| letter-a...
|
| Summary: misinfo, labor impact, and safety are real dangers of
| LLMs. But in each case the letter invokes speculative, futuristic
| risks, ignoring the version of each problem that's already
| harming people. It distracts from the real issues and makes it
| harder to address them.
|
| The containment mindset may have worked for nuclear risk and
| cloning but is not a good fit for generative AI. Further locking
| down models only benefits the companies that the letter seeks to
| regulate.
|
| Besides, a big shift in the last 6 months is that model size is
| not the primary driver of abilities: it's augmentation (LangChain
| etc.) And GPT3-class models can now run on iPhones. The letter
| ignores these developments. So a moratorium is ineffective at
| best and counterproductive at worst.
___________________________________________________________________
(page generated 2023-03-29 23:01 UTC)