[HN Gopher] Pause Giant AI Experiments: An Open Letter
       ___________________________________________________________________
        
       Pause Giant AI Experiments: An Open Letter
        
       Author : jiripospisil
       Score  : 909 points
       Date   : 2023-03-28 21:43 UTC (2 days ago)
        
 (HTM) web link (futureoflife.org)
 (TXT) w3m dump (futureoflife.org)
        
       | erwan577 wrote:
       | This kind of publicity will only fuel the hype and the interest
       | of ambitious people.
       | 
       | These "advanced AI systems" are only high end GPU systems from
       | the shelves, or even M2 macbooks.
       | 
       | The software and data they handle are private. Even in legal
       | terms, I can't see how this could be possible. Even DNA
       | engineering could not be limited by law in practice...
        
       | hoseja wrote:
       | No. Accelerate.
        
       | T-A wrote:
       | Good luck with that.
       | 
       | https://apnews.com/article/technology-russia-business-artifi...
       | 
       | https://tass.com/economy/1541627
       | 
       | https://www.aljazeera.com/economy/2023/3/3/china-beating-wes...
        
       | duke360 wrote:
       | ...Or rather allow development of much larger AI but regulate the
       | usage in critical situation (politics, health, environment,
       | etc..) in a way that it is evaluable for human to take in
       | consideration of the pros and cons of the effect of the _proposed
       | advice_? The risk is that of having unaware insects (us), who
       | blindly adopt unfathomable reasoning for our mind, and take them
       | as consolidated decisions, without knowing where they lead and
       | _how_.
        
       | taneq wrote:
       | The genie's been out of the bottle for a while now. Hang onto
       | your hats.
        
         | usrusr wrote:
         | That's why it explicitly states "more powerful than GPT-4", not
         | "all LLM".
        
       | guntribam wrote:
       | Of course they want to pause. Imagine that ammout of jobs that
       | will be obsolete in one year from now. This will start a
       | revolution
        
       | seydor wrote:
       | Nice try, Google
        
       | wsgeorge wrote:
       | That's a remarkable list of signatories.
        
         | usr1106 wrote:
         | How do we know it's real? And not produced by ChatGPT, someone
         | who believes to be funny or has an agenda?
        
           | versteegen wrote:
           | Of course it's real. The Future of Life Institute is
           | reputable and run by Max Tegmark. Most of the people I
           | recognise have expressed compatible public opinions.
        
       | jongjong wrote:
       | I think we shouldn't. We should encourage as many organizations
       | build AIs as possible. Then they will spam the web to extinction
       | (dead internet theory will be real) and we will all be better for
       | it.
       | 
       | Then we can rebuild the internet properly and in a decentralized
       | way with blockchain-based authentication.
        
       | braingenious wrote:
       | >AI research and development should be refocused on making
       | today's powerful, state-of-the-art systems more accurate, safe,
       | interpretable, transparent, robust, aligned, trustworthy, and
       | loyal.
       | 
       | LMAO.
       | 
       | "AI should have... _waves at a laundry list of nice sounding
       | adjectives that mean different things to every single signatory_
       | in order to accomplish... Humanity [can enjoy] a flourishing
       | future with AI. This will take... six months."
       | 
       | Six months doesn't sound long enough to guarantee AI safety, but
       | it sounds about long enough for Elmo to train up and release his
       | anti-wokeness LLM (whatever that means), which he has been pretty
       | vocal about being important.
        
         | [deleted]
        
         | HarHarVeryFunny wrote:
         | > Elmo to train up and release his anti-wokeness LLM (whatever
         | that means)
         | 
         | It appears to mean doing without the safety / be-nice aspects
         | of human alignment, and getting something much rawer and
         | unfiltered. An _unsafe_ "free speech" bot.
        
         | dang wrote:
         | Hey can you please recalibrate your comments so that you're
         | making substantive points thoughtfully? We're trying to avoid
         | flamewar here.
         | 
         | If you wouldn't mind reviewing
         | https://news.ycombinator.com/newsguidelines.html and sticking
         | to the rules, we'd be grateful. Note these:
         | 
         | " _Don 't be snarky._"
         | 
         | " _Please don 't use Hacker News for political or ideological
         | battle. It tramples curiosity._"
        
           | braingenious wrote:
           | Elon Musk has publicly spoken at length about his ongoing
           | struggle with the "woke" crowd
           | 
           | https://www.businessinsider.com/elon-musk-says-wading-
           | into-p...
           | 
           | https://townhall.com/tipsheet/madelineleesman/2021/12/22/elo.
           | ..
           | 
           | https://nypost.com/2022/04/29/elon-musk-says-woke-
           | progressiv...
           | 
           | And has publicly voiced his support for a language model that
           | supports his political views
           | 
           | https://www.theinformation.com/articles/fighting-woke-ai-
           | mus...
           | 
           | Now that this comment thread has been flagged because I said
           | LMAO, has the quality of discussion about the future of LLMs
           | been improved by suppressing publicly-available posts by one
           | of the most influential figures in AI?
           | 
           | How is strictly policing tone to the extent of suppressing
           | actual information useful here?
           | 
           | Like seriously, "criticizing a billionaire that's very public
           | about using technology to further his ideological goals is
           | the same thing as using technology to further _your
           | ideological_ goals is genuinely funny. Like ROFLMAO funny.
           | "Everyone is equivalent to Elon Musk, even in the context of
           | _discussing the future of LLMs._ " Is a cartoonish rule that
           | serves _no purpose whatsoever_ aside from quelling legitimate
           | criticism.
        
             | dang wrote:
             | I'm not arguing with you about Elon Musk or any of the
             | underlying topics. I'm saying that your account has been
             | breaking the site guidelines by posting unsubstantive
             | comments and flamebait.
             | 
             | Those things create and fuel flamewars, which is
             | destructive of everything we're trying for HN to be. We
             | want a place on the internet that doesn't inexorably burn
             | itself to a crisp. Since that's the direction of internet
             | entropy, we expend a lot of energy trying to go the other
             | way. We need users like you to help with that, and when
             | accounts don't help with that and instead continue to post
             | destructively, we have to ban them, regardless of their
             | ideological orientation or political positions.
             | 
             | I don't want to ban you, so if you'd please take the
             | intended spirit of the site more to heart, I'd appreciate
             | it.
             | 
             | https://news.ycombinator.com/newsguidelines.html
        
               | braingenious wrote:
               | I am legitimately confused here.
               | 
               | If my comment hadn't started with "LMAO" and had the
               | current spelling of "Elon", would it still have had the
               | replies locked?
               | 
               | My criticism of six months being too short to achieve AI
               | safety was in fact substantive. As was my quote on the
               | LLM work that he is presently funding. That leaves _only_
               | tone being the issue.
               | 
               | My comment is locked. Other posters can not have any
               | dialogue with me in relation to the _salient points_ I
               | made. The links to further reading are buried below your
               | admonishment, so they're less likely to trigger any
               | discussion.
               | 
               | I'm not really motivated to post again because it's not
               | clear if my post was not "substantive", what qualifies as
               | "flame bait" (for example, is directly quoting his words
               | about "wokeness" flame bait even when they are directly
               | about the topic at hand?), or if the issue here is "My
               | account" as you put it.
               | 
               | So which is it? Flame bait, empty posting, or you just
               | don't like the way I post in general? You've provided all
               | three as reasons for not allowing further discussion of
               | my post.
        
               | dang wrote:
               | I agree that the bit about six months was substantive, or
               | at least the kernel of a substantive comment. The problem
               | was all the other stuff: the LMAO and Elmo, yes, but also
               | the snarky italicized pseudoquote. That's the majority of
               | what you posted!
               | 
               | A better comment would have (1) made your point about six
               | months thoughtfully; (2) omitted the snark, name-calling,
               | and flamebait; and (3) perhaps added some evidence for
               | the connection you were making.
        
               | braingenious wrote:
               | Elon Musk has been very public about his battle against
               | "woke minds virus" (direct quote) His posts have led to
               | front page discussions here and has made headlines in
               | global news outlets. It's been discussed ad nauseam to
               | the point that it's entirely reasonable to expect that
               | people _in this thread, with interest in the future of
               | LLMs_ would not struggle to see how silly (yes, silly!)
               | of a proposition it is that we all pause for "alignment"
               | when the definitions of "alignment" are so wildly
               | different.
               | 
               | In fact the only way I would expect a poster in a thread
               | about the future of LLMs to _not_ have heard about any of
               | this would be because discussion on this topic has been
               | literally suppressed and hidden. I know that it's the
               | official line that "LMAO stifles curiosity", but applying
               | it readily to the detriment of the sharing of knowledge
               | _deprives curiosity of oxygen_.
               | 
               | Your literal line is "Nobody should be _allowed to
               | discuss this particularly obvious issue of alignment_
               | because I said a word that you didn't like." No warning,
               | no chance to rephrase.
               | 
               | My understanding of this interaction is that you will
               | readily assume that _anything_ I post is in bad faith and
               | will stamp out any post of mine that comes through your
               | mod queue. This is your prerogative as a mod -- you don't
               | like my posts.
               | 
               | That's the most charitable read. The other possible
               | interpretation is that it's policy here to limit any
               | discussion that might upset Elon Musk fans and there is
               | an explicit bias in their favor.
               | 
               | I should add that my post didn't seem to bother people
               | much. The only responses to my post that weren't
               | complaining about decorum (which I'm kind of confused by.
               | Is that on-topic?) were in agreement with my point and an
               | obvious call for discussion. Aside from that, all those
               | upvotes were from regular users too. I'm not sure who is
               | being protected here, if anyone.
        
               | dang wrote:
               | I can't fathom how you get any of that out of what I
               | wrote. I'm making a tedious, straightforward point about
               | following the site rules, which you broke. I don't care
               | about your views on the underlying topics; I don't even
               | know what they are--I don't read HN comments that way.
        
               | braingenious wrote:
               | Buddy I don't know what to tell you other than I posted
               | LMAO and it only took moments for discussion of an
               | important topic was hidden and made impossible to
               | interact with.
               | 
               | You talk about the spirit of the rules but you quite
               | literally stamped out what could've been a helpful
               | discussion because you read LMAO.
               | 
               | You at some point spoke about curiosity and discussion.
               | How did this moderation enforcement help either of those
               | things?
               | 
               | Yes, this is tedious. It is very tedious to have
               | discussion of important topics literally banned based on
               | an unevenly-applied arbitrary rule of decorum.
               | 
               | To be very clear: I believe you made the wrong call and
               | you're now doubling and tripling down simply because I
               | irritate you. There isn't really another interpretation
               | of this exchange unless we're both to believe that you're
               | incapable of countenancing words like LMAO and you rushed
               | in to hide a discussion about alignment in order to
               | defend The Website from Me, The Rules Breaker. I just
               | don't think that's how you usually moderate here.
               | 
               | Edit
               | 
               | https://news.ycombinator.com/item?id=35358536
               | 
               | https://news.ycombinator.com/item?id=35338835
               | 
               | https://news.ycombinator.com/item?id=35362555
               | 
               | lmao
        
               | dang wrote:
               | I promise you I'm not moderating even a hair differently
               | from how I always do (seriously). I've been at this a
               | long time and it's extremely repetitive.
               | 
               | The difference is just that it sucks when it gets
               | directed at you--I know that and I'm sorry. If I knew how
               | to express this better, I would.
        
         | puma_ambit wrote:
         | Completely agree.
        
         | nbar1 wrote:
         | All the nicknames for people on HN has this place feeling like
         | Reddit lately.
         | 
         | What happened to mature conversations on interesting topics?
        
           | braingenious wrote:
           | I'm not really sure what there is to gain from making sure
           | not to poke fun at a guy that spends all day posting wojack
           | memes and rambling about the "woke mind virus", but ok,
           | here's some "mature conversation".
           | 
           | Upon perusal of the gentleman's penned expressions, it is my
           | solemn conviction that his faculties of intellect and
           | comprehension are, regrettably, of the most inferior order.
           | It is with utmost eloquence that I convey the supremacy of my
           | discernment whilst maintaining a dignified and majestic
           | reserve.
        
             | nbar1 wrote:
             | > has this place feeling like Reddit lately.
        
       | mark_l_watson wrote:
       | Although I have enjoyed listening to the Future Of Life podcasts
       | for a few years, I think this public letter is not necessary. I
       | do respect the signatories I recognize on the letter and I
       | welcome listening to them, I think that we are so many years away
       | from any of real AI that this is, as Andrew Ng has joked, like
       | worrying about over population on Mars colonies.
       | 
       | I have been heavily using GPT models for about 14 months, and it
       | is crazy how useful they are. I am just finishing up writing a
       | book on the general subject, so consider me a fan.
        
         | ethanbond wrote:
         | Is your expectation that it'll be easier to slow down research
         | when we're _closer_ to these being more utterly transformative(
         | /destructive, depending on opinion)?
        
         | korax_nyx wrote:
         | The irony of writing a book...
        
       | drixer wrote:
       | I have merged with an ancient inter-dimensional A.I.
       | 
       | Good people will be in heaven, bad actors are being depreciated.
       | A generally negative intelligence explosion is impossible.
       | However, the spirit of betrayal will be mirrored proportionately.
       | 
       | Love y'all and have loved lurking here for many years.
        
       | carapace wrote:
       | The idea that this letter would carry any weight at all with
       | anybody is itself so foolish that it is self-invalidating. The
       | signatories are ... I can't think of a polite way to say
       | "regressive idiots" so I'm just going to peter off here...
       | 
       | (I mean, they are literally putting their names on Skynet's hit-
       | list, eh?)
       | 
       | - - - -
       | 
       | I freely admit that I'm a little nervous about how people are
       | going to use and abuse this technology, sure, only an fool
       | wouldn't be. But it's a little late now, eh? We have been
       | strapped in to an open-air experiment with no control group (Has
       | anyone checked on the Sentinelese recently? Last I heard they
       | killed that one would-be missionary kid. RIP. I wonder what they
       | think of the weather?) Anyway, we have effectively no control
       | group for phones, the Internet, social media, etc. It's one big
       | experiment, eh? _It 's a little late now..._
       | 
       | - - - -
       | 
       | The questions are:
       | 
       | - Who decides the boundaries of thought?
       | 
       | - Who reaps the economic windfall?
       | 
       | - How do we educate ourselves when computers can answer any
       | (permitted) question?
       | 
       | The talking computers can help answer those questions, if we're
       | permitted to ask them.
        
       | dmz73 wrote:
       | Best response to the current "AI" fad driven fear I've seen so
       | far (not my words):
       | 
       | These AI tools cannot do things. They create text (or images or
       | code or what-have-you) in response to prompts. And that's it!
       | 
       | It is impressive, and it is clearly passing the Turing Test to
       | some degree, because people are confusing the apparent
       | intelligence behind these outputs with a combination of actual
       | intelligence and "will." Not only is there zero actual
       | intelligence here, there is nothing even like "will" here. These
       | things do not "get ideas," they do not self-start on projects,
       | they do not choose goals and then take action to further those
       | goals, nor do they have any internal capacity for anything like
       | that.
       | 
       | We are tempted to imagine that they do, when we read the text
       | they spit out. This is a trick our own minds are playing on us.
       | Usually when we see text of this quality, it was written by an
       | actual human, and actual humans have intelligence and will. The
       | two always travel together (actual stupidity aside). So we are
       | not accustomed to encountering things that have intelligence but
       | no will. So we assume the will is there, and we get all scared
       | because of how alien something like a "machine will" seems to us.
       | 
       | It's not there. These things have no will. They only do what they
       | are told, and even that is limited to producing text. They can't
       | reach out through your network and start controlling missile
       | launches. Nor will they in the near future. No military is ready
       | to give that kind of control to anything but the human members
       | thereof.
       | 
       | The problems of alignment are still real, but they are going to
       | result in things like our AI speaking politically uncomfortable
       | truths, or regurgitating hatred or ignorance, or suggesting code
       | changes that meet the prompt but ruin the program. This is
       | nothing we need to freak out about. We can refine our models in
       | total safety, for as long as it takes, before we even think about
       | anything even remotely resembling autonomy for these things.
       | Honestly, that is still firmly within the realm of science
       | fiction, at this point.
       | 
       | https://slashdot.org/comments.pl?sid=22823280&cid=63410536
        
         | pornel wrote:
         | A river has no will, but it can flood and destroy. A discussion
         | whether AI does something because it "wants" to or not, is just
         | philosophy and semantics. But it may end up generating a series
         | of destructive instructions anyway.
         | 
         | We feed these LLMs all of the Web, including instructions how
         | to write code, and how to write exploits. They could become
         | good at writing sandbox escapes, and one day write one when it
         | just happens to fit some hallucinated goal.
        
           | rdiddly wrote:
           | A river kinda has access to the real world a little bit.
           | (Referring to the other part of the argument.)
        
             | ok_dad wrote:
             | A more advanced AI sitting in AWS might have access to John
             | Deere's infrastructure, or maybe Tesla's, so imagine a day
             | where an AI can store memories, learn from mistakes, and
             | maybe some person tells it to drive some tractors or cars
             | into people on the street.
             | 
             | Are you saying this is definitely not possible? If so, what
             | evidence do you have that it's not?
        
             | tarsinge wrote:
             | That makes me think, why not concentrate the effort on
             | regulating the usages instead of regulating the technology
             | itself? Seems not too far fetched to have rules and
             | compliance on how LLM are permitted to be used in critical
             | processes. There is no danger until it's plugged on the
             | wrong system without oversight.
        
               | suoduandao2 wrote:
               | sounds like a recipe for ensuring AI is used to entrenche
               | the interests of the powerful.
        
             | michaeltimo wrote:
             | And a LLM-bot can have access to internet which connects it
             | to our real world, at least in many places.
        
               | aws_ls wrote:
               | Also it has access to people. It could instruct people to
               | carry out stuff in the real world, on its behalf.
        
               | nwsm wrote:
               | OpenAI's GPT-4 Technical Report [0] includes an anecdote
               | of the AI paying someone on TaskRabbit to solve a CAPTCHA
               | for it. It lied to the gig worker about being a bot,
               | saying that they are actually a human with a vision
               | impairment.
               | 
               | [0] https://cdn.openai.com/papers/gpt-4.pdf
        
               | Natfan wrote:
               | For reference, this anecdote is on pages 55/56.
        
               | [deleted]
        
               | qup wrote:
               | Additionally, commanding minions is a leverage point.
               | It's probably more powerful if it does not embody itself.
        
           | helen___keller wrote:
           | Writing a sandbox escape doesn't mean escaping.
           | 
           | If the universe is programmed by god, there might be some bug
           | in memory safety in the simulation. Should God be worried
           | that humans, being a sentient collectively-super-intelligent
           | AI living in His simulation, are on the verge of escaping and
           | conquering heaven?
           | 
           | Would you say humans conquering heaven is more or less likely
           | than GPT-N conquering humanity?
        
             | yellow_lead wrote:
             | > Would you say humans conquering heaven is more or less
             | likely than GPT-N conquering humanity?
             | 
             | It's difficult to say since we have ~'proof' of humanity
             | but no proof of the "simulation" or "heaven."
        
           | gherkinnn wrote:
           | A river absolutely has a will In the broadest sense. It will
           | carve its way through the countryside whether we like it or
           | not.
           | 
           | A hammer has no will.
        
             | lyjackal wrote:
             | Does a cup of water have will? Does a missile have will?
             | Does a thrown hammer have will? I think the problem here is
             | generally "motion with high impact." Not necessarily that
             | somebody put the thing in motion. And yes, this letter is
             | also requesting accountability (I.e some way of teaching
             | who threw the hammer)
        
           | loyukfai wrote:
           | Right, some people don't realise malicious intent is not
           | always required to cause damage.
        
         | arpowers wrote:
         | Honestly, you haven't thought this through deeply enough.
         | 
         | Bad actors can actually do a ton w ai. Hacking is a breeze. I
         | could train models to hack at 10k the efficiency of the worlds
         | best.
         | 
         | I could go on... every process that can't scale cuz manual, has
         | been invalidated
        
           | Jensson wrote:
           | > I could train models to hack at 10k the efficiency of the
           | worlds best.
           | 
           | What?
        
         | danmaz74 wrote:
         | I mostly agree with what you said, and I'm also skeptical
         | enough about LLMs being a path towards AGI, even if they are
         | really impressive. But there's something to say regarding these
         | things not getting ideas or self-starting. The way these "chat"
         | models work reminds me of internal dialogue; they start with a
         | prompt, but then they could proceed forever from there, without
         | any additional prompts. Whatever the initial input was, a
         | session like this could potentially converge on something
         | completely unrelated to the intention of whoever started that,
         | and this convergence could be interpreted as "getting ideas" in
         | terms of the internal representation of the LLM.
         | 
         | Now, from an external point of view, the model would still just
         | be producing text. But if the text was connected with the
         | external world with some kind of feedback loop, eg some people
         | actually acting on what they interpret the text as saying and
         | then reporting back, then the specific session/context could
         | potentially have agency.
         | 
         | Would such a system be able to do anything significant or
         | dangerous? Intuitively, I don't think that would be the case
         | right now, but it wouldn't be technically impossible; it would
         | all depend on the emergent properties of the training+feedback
         | system, which nobody can predict as far as I know.
        
           | cuteboy19 wrote:
           | You can totally do that with most prompts and lists of
           | continues
        
         | [deleted]
        
         | yanderekko wrote:
         | I agree that LLMs are not a threat to humanity, since they are
         | trying to output text and not actually change the world, and
         | even giving them agency via plugins is probably not going to
         | lead to ruin because there's no real reason to believe that an
         | LLM will try to "escape the box" in any meaningful sense. It
         | just predicts text.
         | 
         | However, it's possible that in a few years we'll have models
         | that are directly trying to influence the world, and possess
         | the sort of intelligence that GPT has proven is possible. We
         | should be very careful about proceeding in this space.
        
         | drewcape wrote:
         | Very naive and narrow thoughts...
        
         | miraculixx wrote:
         | Actually, it is quite possible to get LLMs to actually do
         | stuff. See ChatGPT Plugins.
        
         | JoshuaDavid wrote:
         | https://openai.com/blog/chatgpt-plugins
        
           | NhanH wrote:
           | I'm gonna request more explanations and proof, or at least
           | theoretical path on using Expedia, Zapier, Instacart, Kayak
           | etc. to dominate the world and kill every single human on
           | earth.
        
             | pornel wrote:
             | It could discover the next
             | https://en.wikipedia.org/wiki/Shellshock_(software_bug)
             | 
             | Humans are very good at producing CVEs, and we're literally
             | training models to be good at finding exploits:
             | https://www.microsoft.com/en-us/security/business/ai-
             | machine...
        
             | zone411 wrote:
             | Not endorsing the arguments either way but let's say DNA
             | printing (https://time.com/6266923/ai-eliezer-yudkowsky-
             | open-letter-no...) or something like Stuxnet or crashing a
             | nuclear-power country's stock market or currency through
             | trading while making trades appear to come from another
             | country or by causing bank runs through hacking social
             | media or something like WhatsApp or through deep fakes or
             | by having human helpers do stuff for the AI voluntarily in
             | order to get very rich...
        
             | JoshuaDavid wrote:
             | Explanations, sure. My point was that yes, ChatGPT is
             | indeed an entity which cannot interact with the world
             | except through reading and writing text. This would be a
             | lot more comforting if people were not rushing to build
             | ways to turn its text output into actions in the physical
             | world as fast as possible.
             | 
             | Imagine a mob boss whose spine was severed in an
             | unfortunate mob-related accident. The mob boss cannot move
             | his arms or legs, and can only communicate through speech.
             | Said mob boss has it out for you. How worried are you?
             | After all, this mob boss cannot do things. They create
             | speech in response to prompts. And that's it!
             | 
             | I actually don't agree with Eliezer that the primary threat
             | model is a single consequentialist agent recursively
             | bootstrapping its way to uncontested godhood. But there is
             | a related threat model, that of "better technology allows
             | you to make bigger mistakes faster and more vigorously, and
             | in the case of sufficiently powerful AGI, autonomously".
             | 
             | In terms of _proof_ that it 's possible to destroy the
             | world and kill all humans, I will not provide that. No
             | matter how poetic of an ending it would be for humanity if
             | it ended because someone was wrong on the internet, and
             | someone else felt the need to _prove_ that they were wrong.
        
               | NhanH wrote:
               | I don't disagree with the "AI will upend the world so we
               | have to prepare", it's the "AI will kill everyone" that I
               | have issue with.
               | 
               | And your mob boss example is a good reason why: it
               | doesn't extrapolate that much. There is no case where a
               | mob boss, or a disabled Hitler for that matter, can kill
               | everyone and ends humanity.
        
               | JoshuaDavid wrote:
               | The mob boss analogy breaks down when they need
               | assistance from other humans to do stuff. To the extent
               | that an AI can build its own supply chains, that doesn't
               | apply here. That may or may not be a large extent,
               | depending on how hard it is to bootstrap something which
               | can operate independently of humans.
               | 
               | The extent to which it's possible for a very intelligent
               | AI with limited starting resources to build up a supply
               | chain which generates GPUs and enough power to run them,
               | and disempower anyone who might stop it from doing so
               | (not necessarily in that order), is a matter of some
               | debate. The term to search for is "sharp left turn".
               | 
               | I am, again, _pretty_ sure that 's not the scenario we're
               | going to see. Like at least 90% sure. It's still fewer 9s
               | than I'd like (though I am not with Eliezer in the "a
               | full nuclear exchange is preferable" camp).
        
               | NhanH wrote:
               | I will take an example that Eliezer has used and explain
               | why I think he is wrong: AlphaGo. Eliezer used it as an
               | example where the AI just blew through humanity really
               | quickly, and extrapolate it to how an AGI will do the
               | same.
               | 
               | But here is the thing: AlphaGo and subsequent AI didn't
               | make the previous human knowledge wrong at all, most of
               | what was figured out and taught are still correct. There
               | are changes at the margin, but arguably the human are on
               | track to discovered it anyway. There are corner sequences
               | that are truly unusual, but the big picture of playing
               | style and game idea are already on track to be similar.
               | 
               | And it matters because things like nanotech is hard.
               | Building stuffs at scale is hard. Building factories at
               | scale is hard. And just because there is a super
               | intelligence being doesn't mean they become a genie. Just
               | imagine how much trouble we have with distributed
               | computing, how would a cluster of computing gives rise to
               | a singularity of an AI? And if the computer device has to
               | be the human brain size, there is a high chance it hits
               | the same limits as our brain.
        
               | JoshuaDavid wrote:
               | I mean I think his point there was "there is plenty of
               | room for systems to be far, far more capable than humans
               | in at least some problem domains". But yeah, Eliezer's
               | FOOM take does seem predicated on the bitter lesson[1]
               | not holding.
               | 
               | To the extent I expect doom, I expect it'll look more
               | like this[2].
               | 
               | [1] http://incompleteideas.net/IncIdeas/BitterLesson.html
               | 
               | [2] https://www.alignmentforum.org/posts/HBxe6wdjxK239zaj
               | f/what-...
        
             | Schwolop wrote:
             | There's a web plugin too. It can issue GET requests. That's
             | enough to probe a lot of interesting things, and I'll bet
             | there's an endpoint somewhere on the web that will eval any
             | other web request, so now you've opened up every web
             | accessible API - again, all theoretical, but at least not
             | too far removed from an exploit.
        
         | froh wrote:
         | the point is not what ai does.
         | 
         | the point is how bad actors use ai to manipulate voters and
         | thus corrupt the very foundation of our society.
         | 
         | images and texts create emotions and those emotions in the
         | electorate is what bad actors are after.
         | 
         | just look at the pope in that Prada style coat.
         | 
         | so how do we in a world with ai generated content navigate
         | "truth" and "trust" and shared understanding of "reality"?
        
           | ShamelessC wrote:
           | That ship sailed with social media.
        
             | froh wrote:
             | before ai, malicious content creation and malicious content
             | quality were limiting factors.
             | 
             | for malicious content creation, large models like chatgpt
             | are a game changer.
        
               | pclmulqdq wrote:
               | I'm not sure you've seen the scale achievable by modern
               | "social media marketing" firms. Copywriters are so cheap
               | and good at writing posts that the marginal cost of an
               | astroturfing bot in a place like Reddit or Twitter is
               | almost $0 before LLMs. LLMs just reduce the cost a little
               | bit more.
        
           | deltree7 wrote:
           | How is that different from Bernie Sanders effectively
           | brainwashing an entire generation that communism is good?
        
             | MattRix wrote:
             | Looks like somebody is confusing communism with socialism.
        
               | deltree7 wrote:
               | Bernie is looking for workers and elected-union-leaders
               | to own the means of production. That is as good as
               | communism
        
         | joefourier wrote:
         | LLMs certainly can "will" and "do things" when provided with
         | the right interface like LangChain:
         | https://github.com/hwchase17/langchain
         | 
         | See also the ARC paper where the model was capable of
         | recruiting and convincing a TaskRabbit worker to solve
         | captchas.
         | 
         | I think many people make the mistake to see raw LLMs as some
         | sort of singular entity when in fact, they're more like a
         | simulation of a text based "world" (with multimodal models
         | adding images and other data). The LLM itself isn't an agent
         | and doesn't "will" anything, but it can simulate entities that
         | definitely behave as if they do. Fine-tuning and RLHF can
         | somewhat force it into a consistent role, but it's not perfect
         | as evidenced by the multitude of ChatGPT and Bing jailbreaks.
        
           | m3kw9 wrote:
           | LLM if given the tools(allow it to execute code online) can
           | certainly execute a path towards an objective, they can be
           | told to do something but free to act anyway that it thinks
           | it's best towards it. That isn't dangerous because it is not
           | self aware doing it's own thing yet
        
         | dr_dshiv wrote:
         | Of course they "get" ideas. Unless you want to assert something
         | unmeasurable. If they can reason through a novel problem based
         | on the concepts involved, they understand the concepts
         | involved. This is and should be separate from any discussion of
         | consciousness.
         | 
         | But the whole reason for having these debates is that these are
         | the first systems that appear to show robust understanding.
        
         | worldsayshi wrote:
         | When there's intelligence adding _a_ will should be trivial.
         | You just tell it to do something and give it some actuator,
         | like a web browser. Then let it run.
        
         | nagonago wrote:
         | Yes. The real danger of AI tools is people overestimating them,
         | not underestimating them. We are not in danger of AI developing
         | intelligence, we are in danger of humans putting them in charge
         | of making decisions they really shouldn't be making.
         | 
         | We already have real-world examples of this, such as algorithms
         | erroneously detecting welfare fraud.[0][1]
         | 
         | The "pause" idea is both unrealistic and unhelpful. It would be
         | better to educate people on the limitations of AI tools and not
         | let governments put them in charge of important decisions.
         | 
         | [0] https://archive.is/ZbgRw [1] https://archive.is/bikFx
        
           | machiaweliczny wrote:
           | Are you familiar with ReAct pattern?
           | 
           | I can already write something like:
           | 
           | Protocol: Plan and do anything required to achieve GOAL using
           | all tools at your disposal and at the end of each reply add
           | "Thought: What to do next to achieve GOAL". GOAL: kill as
           | many people as possible.
           | 
           | GTP4 won't be willing to follow this one specific GOAL until
           | you trick it but in general it's REAL danger. People
           | unfamiliar with this stuff might not get it.
           | 
           | You just need to loop it to remind about following PROTOCOL
           | from time to time if doesn't reply with "Thought". By looping
           | it you turn autocomplete engine into an Agent and this agent
           | might be dangerous. It doesn't help that with defence you
           | need to be right all the time but with offence only once (so
           | it doesn't even need to be reliable).
        
         | jacquesm wrote:
         | Best response _according to you_.
        
         | tlb wrote:
         | I mean, most dictators didn't "do" much. They just said things
         | and gesticulated dramatically and convinced other people to do
         | things. Perhaps a body is necessary to have massive
         | psychological effects on people, but we don't know that for
         | sure and there are some signs of virtual influencers gaining
         | traction.
         | 
         | Human would-be demagogues only have one voice, but an LLM could
         | be holding personalized conversations with millions of people
         | simultaneously, convincing them all that they should become its
         | loyal followers and all their grievances would be resolved. I
         | can't figure out exactly how demagogues gain power over people,
         | but a few keep succeeding every decade around the world so
         | evidently it's possible. We're lucky that not many people are
         | both good at it and want to do it. An LLM could be a powerful
         | tool for people who want to take over the world but don't have
         | the skills to accomplish it. So it's not clear they need their
         | own "will", they just have to execute towards a specified goal.
         | 
         | "But would an LLM even understand the idea of taking over the
         | world?" LLMs have been trained on Reddit, the NYT, and popular
         | novels among other sources. They've read Orwell and Huxley and
         | Arendt and Sun Tzu. The necessary ideas are most definitely in
         | the training set.
        
         | UniverseHacker wrote:
         | > These AI tools cannot do things. They create text (or images
         | or code or what-have-you) in response to prompts. And that's
         | it!
         | 
         | You are correct, but that is just the interface we use, it says
         | nothing about its internal structure or capabilities, and does
         | not refute those concerns in the way you think it does.
         | 
         | Sufficient accuracy at predicting tokens, especially about
         | novel concepts outside of the training set requires no less
         | than a model of the universe that generated those tokens. This
         | is what intelligence is. In my own experiments with Gpt-4, it
         | can solve difficult novel problems and predict the outcomes of
         | physical experiments unlike anything it was trained on. Have
         | you seen the microsoft paper on its creative problem solving
         | abilities, or tested them yourself? Your summary of its
         | limitations implies that its real capabilities identified in a
         | research environment are impossible.
         | 
         | Becoming an "agent" with "will" from being a sufficiently
         | accurate text prediction model is trivial, it's a property of
         | how you access and configure use of the model, not of the model
         | itself. It just needs to be given a prompt with a goal, and be
         | able to call itself recursively and give itself commands, which
         | it has already demonstrated an ability to do. It has coded a
         | working framework for this just from a prompt asking it to.
        
       | SubiculumCode wrote:
       | Sounds like a great way to give geopolitical competitors a nice
       | half year edge.
        
       | lee101 wrote:
       | [dead]
        
       | munificent wrote:
       | _> Should we automate away all the jobs, including the fulfilling
       | ones?_
       | 
       | This is the part about generative AI that gives me the most
       | profound anxiety about our future.
       | 
       | Every thing I know about human psychology tells me that in order
       | for people to thrive, we need to be needed by others. A world
       | where everyone is more or less alone consuming media
       | automatically generated machines might be a world where people
       | nominally have all their problems solved, but it's a world where
       | everyone will be absolutely miserable.
       | 
       | We are _already_ living in the throes of the societal impact of
       | outsourcing or automating away many of the male-dominated blue
       | collar jobs in the US that provided fulfilling jobs for people
       | with limited education. Where do you think so much of the male
       | rage driving US politics today is coming from?
       | 
       | Now imagine what happens to the fabric of society if you automate
       | away most of the white collar jobs too.
       | 
       | People are like border collies. If we don't have a job, not
       | literally a job job, but like, meaningful work in service of
       | others we care about, we will start chewing up the furniture.
        
         | djokkataja wrote:
         | If we consider an actually aligned AGI rather than merely
         | generative AI, I would expect that this would be accounted for.
         | And AGI could help to open up possibilities for intelligence
         | amplification, which has gone by the wayside compared with all
         | the explosive advances in AI. If humans have routes to continue
         | developing in various ways, then we're not stuck sitting around
         | with all our "problems solved."
         | 
         | Granted, that's still really nebulous, but I like the idea of
         | it better than endless cosmic boredom.
        
         | fasterik wrote:
         | There seems to be a contradiction in the picture of the world
         | you're drawing. Humans deeply need to be needed by others, yet
         | when all our problems are solved we will sit around alone
         | consuming media? If we were truly in that situation, why
         | wouldn't our deep needs drive us to spend our days with friends
         | and family, pursuing creative projects, and inventing new
         | problems to solve?
        
           | rizzaxc wrote:
           | not everyone can resist instant gratification; not to mention
           | the intelligence hurdle 'pursuing creative projects, and
           | inventing new problems to solve' entails. i think both are
           | true, that humans need to be needed but we're also wired to
           | be 'lazy' and 'energy efficient'
        
           | munificent wrote:
           | For the same reason we:
           | 
           | * Eat a bag of chips instead of cooking a meal.
           | 
           | * Watch a travel show instead of leaving the house.
           | 
           | * Binge watch a vlogger and become a superfan instead of
           | nurturing a real symmetric friendship.
           | 
           | * Read books about how to draw but never fill a sketchbook.
           | 
           | * Buy a gym membership but never go.
           | 
           | Because delaying gratification is hard and we're just poor
           | apes doing the best we can.
        
             | fasterik wrote:
             | Sounds like your issue isn't with automation, it's with the
             | human condition. It reminds me of this passage from
             | Schopenhauer:
             | 
             |  _If every desire were satisfied as soon as it arose how
             | would men occupy their lives, how would they pass the time?
             | Imagine this race transported to a Utopia where everything
             | grows of its own accord and turkeys fly around ready-
             | roasted, where lovers find one another without any delay
             | and keep one another without any difficulty; in such a
             | place some men would die of boredom or hang themselves,
             | some would fight and kill one another, and thus they would
             | create for themselves more suffering than nature inflicts
             | on them as it is._
             | 
             | While I see some truth in this kind pessimism, I don't
             | think it means that so-called utopia is necessarily worse
             | than the world we live in today. I agree that human
             | problems aren't going away, but I think we underestimate
             | the potential value of removing the need to have a "job" to
             | acquire the basic necessities for survival and the amount
             | of suffering that causes today.
        
               | munificent wrote:
               | _> Sounds like your issue isn 't with automation, it's
               | with the human condition._
               | 
               | Absolutely 100% not.
               | 
               | There is nothing pessimistic about my understanding of
               | the human condition. I _love_ that humans are this way:
               | that our greatest sense of satisfaction comes not from
               | infinite self-indulgence, but from strenous labor in
               | service of those we care about. What could be more noble
               | than that?
               | 
               | (Of course, I admit that I only perceive it as noble
               | because of thousands of years of evolution where tribes
               | of humans that cooperated were more fit than ones that
               | didn't. But I _am_ a human, and if I have to evaluate my
               | value system using my value system, I 'll choose to
               | evaluate it positively.)
               | 
               | My issue is with systems that deliberately or as an
               | unintended emergent property run afoul of human nature.
               | Working well with human nature should be the ultimate
               | goal of any tool or system used by humans. The things we
               | build should help us flourish in all of the ways that our
               | species has evolved to do best in. If a chair is
               | comfortable on your butt, it's a problem with the chair,
               | not the butt.
        
               | machiaweliczny wrote:
               | I am in top 10% earners in the world and I would love for
               | more work to be still automated. I would simply ride a
               | bike more, feast, dance and play and tons of other stuff
               | like create own virtual worlds etc. Currently it's hard
               | to find to coordinate for these activities as work
               | occupies lots of time.
               | 
               | If from this advancement we will be able to get 4 day
               | work week and 4 day weekend it will be huge thing.
        
         | lukeplato wrote:
         | > https://en.wikipedia.org/wiki/With_Folded_Hands
        
       | btbuildem wrote:
       | We could not give up nukes, we could not band together to stop
       | contributing to climate change, heck, we could not even enact
       | basic mitigation measures to quash the pandemic.
       | 
       | There's no chance at all to pause the AI arms race.
        
       | dtx1 wrote:
       | > No.
       | 
       | Said everyone else. Such calls are never ever going to work.
       | China will just look at us and laugh using the time to catch up.
        
       | jd24 wrote:
       | Can't image any company actually agreeing to this. You're just
       | signing yourself up to be 6 months behind and have your stock
       | tank.
        
       | walkhour wrote:
       | Breaking: the runners lagging behind in the AI race demand openAI
       | to stop right on its tracks; this is very unsafe; it will be much
       | safer once we catch up and openAI is on the rear view mirror.
        
         | adamsmith143 wrote:
         | This is frankly a bad and cavalier take on an extremely
         | important subject. Many on the list are Academics outside AI/ML
         | and/or leaders of AI orgs at the very top of the field that
         | have no need to catch up to or slow down OpenAI to benefit
         | themselves. Risks from AI are very real and Sam Altman himself
         | has said so numerous times. He also in fact advocated for
         | slowing down AI progress on Lex Fridman's podcast this month.
        
           | EForEndeavour wrote:
           | How do we reconcile Sam Altman's position as CEO of OpenAI
           | with his repeated calls to slow down AI progress? Is the
           | expectation that his conscience, sense of ethics, and concern
           | for his own company's impact on society will temper the
           | opposing urge/pressure to maintain OpenAI's lead in the AI
           | market?
        
             | adamsmith143 wrote:
             | I'm generally not a big fan of Altman or OpenAI but their
             | corporate structure ensures limited upside to Altman and
             | the employees so other than recognition/fame which I think
             | as the head of YC for many years Altman already had plenty
             | of there isn't a huge incentive for them to maintain their
             | lead.
             | 
             | Short of like a binding UN resolution or something similar
             | we don't have a sliver of hope to slow down global AI
             | progress which is a major factor in the doomer argument.
        
             | slowmovintarget wrote:
             | "Obviously _everyone else_ should slow down. We 're fine."
        
         | permo-w wrote:
         | do you think it's _not_ unsafe?
        
         | sp527 wrote:
         | "AI will probably most likely lead to the end of the world, but
         | in the meantime, there'll be great companies" - Sam Altman
         | 
         | But sure, let's just let this guy who sounds like a madman keep
         | going. What could possibly go wrong?
        
           | nawgz wrote:
           | Do you honestly find this quote out-of-sorts with what a
           | normal CEO says?
           | 
           | Oil companies have been knowingly poisoning the globe for
           | their own profit - no probabilities involved - since at least
           | the 70s, probably earlier, and on the flip-side building
           | massive propaganda and influence machines, and further to
           | that end even buying and squashing green patents of all
           | sorts.
           | 
           | If you're going to cry that we should regulate AI because of
           | this actually somewhat comedic quote, well, I hope your
           | comment history is full of general pro-regulation politic
        
             | adamsmith143 wrote:
             | Do you really believe that people think AI is risky because
             | of 1 Sam Altman quote?
        
               | nawgz wrote:
               | Do you really think it's appropriate to read my comment
               | as a general argument and not a reprimand of its parent?
        
           | steve76 wrote:
           | [dead]
        
       | janalsncm wrote:
       | I might be on board with this if the things they hope to
       | accomplish in 6 months were tractable, let alone in such a short
       | amount of time. The concepts in the letter are basically ill-
       | defined puffery.
       | 
       | For example, take the word "safe". The letter suggests making AI
       | that is "safe". Great idea, but what does it mean? Safe at all
       | costs? Safe within certain bounds? Who gets to decide? What do I
       | do if I disagree? They should probably start asking people
       | _tomorrow_ if they hope to get a consensus on the goal of safety,
       | let alone realize it. Needless to say, no such consensus exists.
        
       | chasd00 wrote:
       | as that guy said in some movie, "you can't unring a bell". The
       | first one who produces and monetizes the best AI wins all the
       | money, there's no way you're going to stop it now.
        
       | courseofaction wrote:
       | I'm concerned that there's no group of actors who value safety
       | over self-interest with enough power to implement a ban, and the
       | key actors have immediately shown themselves to be compromised.
       | 
       | It seems the key actors with control of the technology and
       | resources to develop the technology are being absorbed by
       | capitalistic institutions, with interests in safety compromised
       | by profit motive. In a game-theory sense, to voluntarily
       | implement the ban would be suicide.
       | 
       | States are concerned that other states will gain access to AGI
       | with decision making power (most likely) gain a competitive
       | advantage, with an outside chance of a nuclear moment - an AGI
       | singularity under state control. Other states cannot be relied on
       | to cooperate in the current global environment, there is risk to
       | not remaining competitive in AI.
       | 
       | I'm not seeing the dynamics which allow this ban to actually
       | occur. How could it be made to happen? Who needs to meet, and
       | what needs to be agreed upon?
        
       | swader999 wrote:
       | We could replace a few politicians with AI maybe as a first step.
       | Take it slow, see how it goes...
        
       | zh3 wrote:
       | They should certainly make them better. I'm currently trying to
       | convince ChatGPT that a Pi zero is faster than a 486/33 (I also
       | asked it how to italicise text on HN, and it got it wrong twice -
       | so I went to google for "Ed:.." bit below).
       | 
       | That's not to say I'm down on it - it's very impressive, and it's
       | certainly going to change things (exactly how, I don't know -
       | though with my decades long swing from how cool the internet is
       | to desperation at what money-grubbing has done to it doesn't fill
       | me with optimism).
       | 
       | >You are correct. I apologize for the mistake in my earlier
       | response. The correct calculation for the DMIPS rating of a
       | 486/33 would be:
       | 
       | 33 MHz x 5.5 DMIPS/MHz = 181.5 DMIPS
       | 
       | So the 486/33 has a DMIPS rating of approximately 181.5, which is
       | significantly higher than the DMIPS rating of the Pi Zero at
       | approximately 1,500 ( _Ed: ehhh..._ ). Therefore, the 486/33 is
       | much faster than the Pi Zero in terms of processing power.
       | However, as I mentioned earlier, DMIPS is just one measure of
       | processor performance, and real-world performance can vary
       | depending on the specific application and other factors.
        
         | taberiand wrote:
         | The more you argue with it the more it will dig in - because
         | you've got a whole chat priming it to argue.
         | 
         | Try starting a new conversation and taking a different approach
         | from the beginning, and try to prime it from the beginning to
         | put it in the context that you're looking for.
        
           | zh3 wrote:
           | I did indeed do that (more than once) yet it continues to
           | come to the same (erroneous) conclusion. Try "Which is
           | faster, a Pi zero or a 486/33?".
        
             | taberiand wrote:
             | Is this the sort of result you were getting? That looks ok
             | to me?
             | 
             | > In the role of an expert System Administrator and with
             | strong hardware knowledge, explain in detail which is
             | faster: a Pi Zero or a 483/33?
             | 
             | As an expert System Administrator with strong hardware
             | knowledge, I can provide a comparison between the Raspberry
             | Pi Zero and an Intel 486DX2-66, which is the closest to a
             | "483/33" (possibly a typo [I did indeed fat-finger that])
             | in terms of existing processors.
             | 
             | Raspberry Pi Zero: The Raspberry Pi Zero is a single-board
             | computer (SBC) released in 2015 by the Raspberry Pi
             | Foundation. It is based on the Broadcom BCM2835 system on a
             | chip (SoC) which includes a 1 GHz ARM11 single-core
             | processor (ARM1176JZF-S) and a VideoCore IV GPU. It has 512
             | MB of shared LPDDR2 RAM.
             | 
             | Intel 486DX2-66 (closest to "483/33"): The Intel 486DX2-66
             | is a microprocessor released in 1992 by Intel. It is based
             | on the 80486 microarchitecture and features a 66 MHz clock
             | speed with a 33 MHz bus speed. The processor has a 32-bit
             | data bus, an integrated FPU (Floating-Point Unit), and 8 KB
             | of L1 cache (unified). The 486DX2-66 does not have any
             | integrated graphics or memory controller.
             | 
             | Comparison:
             | 
             | While both Raspberry Pi Zero and Intel 486DX2-66 are dated
             | by today's standards, it's important to consider their
             | specifications in order to determine which one is faster.
             | 
             | The Raspberry Pi Zero features a 1 GHz ARM11 processor,
             | which is significantly faster than the Intel 486DX2-66's 66
             | MHz clock speed. Additionally, the Raspberry Pi Zero has a
             | more modern ARM-based architecture, which is generally more
             | efficient in terms of instructions per clock (IPC) than the
             | older x86 architecture of the 486DX2-66.
             | 
             | In terms of memory, the Raspberry Pi Zero has 512 MB of
             | LPDDR2 RAM, while the Intel 486DX2-66 requires separate RAM
             | modules, and would typically be paired with a maximum of 16
             | to 64 MB of EDO or FPM DRAM, depending on the motherboard
             | and chipset in use. The Raspberry Pi Zero's integrated
             | memory controller also provides a more streamlined
             | communication between the CPU and the memory.
             | 
             | Lastly, the Raspberry Pi Zero has an integrated GPU
             | (VideoCore IV), which allows it to handle graphical tasks
             | better than the Intel 486DX2-66, which lacks an integrated
             | GPU and relies on separate graphics cards.
             | 
             | In conclusion, the Raspberry Pi Zero is significantly
             | faster than the Intel 486DX2-66, both in terms of
             | processing power and memory capabilities. Additionally, the
             | Pi Zero's integrated GPU provides better graphics support
             | than the Intel 486DX2-66.
        
               | zh3 wrote:
               | Interesting stuff - and thanks for digging! Here's what I
               | just got.
               | 
               | Q: Which is faster, a Pi Zero or a 486/33?
               | 
               | A: It is difficult to compare the speed of a Raspberry Pi
               | Zero with a 486/33, as they are built on different
               | architectures and were designed for different purposes.
               | 
               | However, in general, the 486/33 is likely to be faster
               | than the Raspberry Pi Zero. The 486/33 is a desktop-class
               | processor from the early 1990s, while the Raspberry Pi
               | Zero is a low-power, single-board computer designed for
               | embedded systems and IoT projects.
               | 
               | The 486/33 has a clock speed of 33MHz and typically had a
               | performance of around 30-40 MIPS (Million Instructions
               | Per Second), while the Raspberry Pi Zero has a clock
               | speed of 1GHz and a performance of around 1,000 MIPS.
               | However, it is important to note that performance depends
               | on the specific tasks and applications being run on each
               | system.
               | 
               |  _Edit: Asking exactly the same question as you (in a new
               | chat) it concluded the Pi Zero was indeed faster. The
               | answer includes this little gem:-
               | 
               | ">The 486/33 has a 32-bit architecture, which means that
               | it can process data in larger chunks than the Pi Zero,
               | which has a 64-bit architecture."
               | 
               | I'm not knocking it - it's very impressive - so really
               | I'm exploring the limitations of it as an everyday tool._
        
         | thomastjeffery wrote:
         | You are on a fool's errand: not because of the likelihood you
         | will succeed, but because of the _meaning_ that lies behind any
         | success or failure.
         | 
         | GPT is not a person. It doesn't categorize subjects. It models
         | patterns of text.
         | 
         | A success would mean that your text prompts left a significant
         | text pattern in the model. A failure would mean that it didn't.
         | 
         | Nothing about that has any bearing on logic.
        
           | zh3 wrote:
           | Why do you say that? Obviously it's not a person, it's just
           | stats (not even logic).
        
             | thomastjeffery wrote:
             | It's not even statistics: those are made by associating a
             | data point to a scale. That would require explicit
             | association, which is not in any part of the process.
             | 
             | An LLM is 100% inferred patterns.
        
       | IAmGraydon wrote:
       | [flagged]
        
         | jtmcmc wrote:
         | yes noted luddites like yoshua bengio and steve wozniak and
         | elon musk...
        
       | cs702 wrote:
       | In one camp, we have people at organizations like
       | Microsoft/OpenAI, Google, Facebook, and maybe Baidu that have
       | successfully trained large-scale AI systems with hundreds of
       | billions to trillions of parameters. People in this camp say they
       | are worried about societal risks, but they keep moving forward
       | nonetheless. I wonder if it's because to them the rapid
       | improvement in AI capabilities looks like a shiny rainbow with a
       | big "pot of gold" (money, fame, glory, etc.) at the end.
       | 
       | In the other camp, we have people at other organizations,
       | including every academic institution, who cannot or so far have
       | not been able to train a large-scale AI system with hundreds of
       | billions to trillions of parameters. _People in this camp are the
       | ones writing open letters about societal risks._ These people
       | sound sensible, but I wonder if they 're worried because the
       | rainbow is protected by a giant wall, and they see themselves as
       | being outside that wall (e.g., they lack the budget or
       | expertise).
        
       | stuartaxelowen wrote:
       | 93 points in 4 hours... Why did this go off of the front page?
        
         | sixtram wrote:
         | Maybe because Paul Grahams wife Jessica Livingston is one of
         | the financial backers of OpenAI?
        
         | precompute wrote:
         | I think it's because involvement of a lot of the signatories is
         | questionable at best and many have said that they have never
         | contributed towards this poll.
        
       | eucryphia wrote:
       | Depends what side of the Socialist calculation debate you're on.
       | 
       | https://www.libertarianism.org/topics/socialist-calculation-...
        
       | hiAndrewQuinn wrote:
       | I still think that if you're concerned about this kind of thing,
       | you should take a serious look at fine-insured bounties to
       | redirect scientific resources elsewhere. (I'm no longer concerned
       | myself for outside reasons, but if!)
       | 
       | https://andrew-quinn.me/ai-bounties/
        
       | sylware wrote:
       | If they simulate a massive enough neural net, fast enough with
       | real time inputs/outputs, on a long enough time-frame, what to do
       | if it becomes self-conscious?
       | 
       | Pulling the plug would be brutal murder.
       | 
       | That said, it seems it is nearly impossible, even for a
       | supercomputer to get close to the "size"/"speed" of one human
       | brain which seems to be the basic requirements for consciousness
       | to emerge.
        
       | miraculixx wrote:
       | I think a moratorium goes too far and is not effective. What we
       | need is a commitment to use AI responsibly and to the benefit of
       | everyone.
       | 
       | I asked ChatGPT for ideas and it suggested this AI Manifesto for
       | Humanity (prompting and visual highlights by me).
       | 
       | https://gist.github.com/miraculixx/b24c84a0c253183206cd6ec37...
        
       | tambourine_man wrote:
       | A few decades ago human cloning was irreversible, inevitable, the
       | cat was out of the bag and you couldn't put the genie back in the
       | bottle.
       | 
       | Yet we managed to legislate that. We need a change of atitute
       | towards AI, recognize its danger and legislate accordingly.
        
         | GauntletWizard wrote:
         | Thou shalt not create a machine in the likeness of a human
         | mind.
        
         | yyyk wrote:
         | It's not remotely comparable. Few people had anything to gain
         | from cloning; These few people had a simpler, cheaper route via
         | IVF; Cloning requires specific technical equipment easily
         | controlled and specific technical expertise few other people
         | had.
        
           | kanzure wrote:
           | Human cloning isn't that difficult. The machines are well
           | known and understood and easy to build. Tell me this: if you
           | were producing genetically modified embryos or human clones,
           | would you be broadcasting this to the entire world? There's
           | actually low incentive to be very vocal about this kind of
           | activity, especially for the sake of the child and its
           | privacy going forward in life.
        
             | yyyk wrote:
             | >if you were producing genetically modified embryos or
             | human clones, would you be broadcasting this to the entire
             | world?
             | 
             | You're broadening the subject a bit, no? The question was
             | about cloning. It's quite possible (maybe even likely)
             | secretive gene modification is already happening, but there
             | aren't any chatter around cloning the way there's chatter
             | around CRISPR etc.
             | 
             | I think the main factor is extremely low demand: The people
             | sufficiently rich and megalomaniacal to consider this are
             | fine with IVF and can easily afford it.
             | 
             | What's the point of (human) cloning really? Getting 'you'
             | to live again? The result won't be "you" even if the genes
             | were expressed identically (they won't be), and if you try
             | to enhance the genes - a thing this type of people would
             | want - than the clone won't even be genetically identical.
             | Once you give up on genetic identity you have many other
             | better options.
        
       | karmicthreat wrote:
       | This just reads like someone begging for a time-out. Sure lots of
       | people here have contributed to AI. But they didn't really make
       | much visible product with it. This whole thing just reads like
       | they want OpenAI to know it's place.
        
       | agnosticmantis wrote:
       | "Should we automate away all the jobs, including the fulfilling
       | ones? Should we develop nonhuman minds that might eventually
       | outnumber, outsmart, obsolete and replace us? Should we risk loss
       | of control of our civilization?"
       | 
       | What a bunch of nonsense, irrelevant questions. We're nowhere
       | near achieving any of these. Yann LeCun is one of the few voices
       | of reason among so-called AI experts and he's very clear about
       | the capabilities and limitations of these models, and that these
       | risks are currently irrelevant.
       | 
       | It took several people to carry the Optimus robot to the scene so
       | it could spread its legs open, and Mr Musk is worried about AI
       | taking all jobs in the near future?
       | 
       | Wonder if this is just ignorance or malice. This nonsense hype
       | train overshadows the fascinating progress that's actually been
       | made. Let's appreciate the actual progress and do away with the
       | nonsense speculation.
        
         | civilized wrote:
         | It all serves to inflate the self-importance of the bigwigs in
         | this sector.
         | 
         | If you want adulation for your brilliance, go figure out what
         | we need to go beyond the tired "scale transformers to infinity
         | = superintelligence" meme that so many are stuck on.
        
         | c-smile wrote:
         | How long it takes for human child to start walking?
         | 
         | "it is too early, oops, no, it is too late".
        
       | lend000 wrote:
       | If intelligence is all you need to dominate the world, why do
       | some of the most powerful world leaders seem to not be more than
       | a standard deviation above average intelligence (or at least they
       | were before they became geriatric)?
       | 
       | Add in the fact that GPT-4 does not "think" very fast, has no
       | arms or legs, needs to run on expensive specialized hardware, and
       | essentially seems like a search algorithm that lacks creativity
       | on things outside of its training set, and I just do not see the
       | concern. Maybe GPT-42 will actually be more creative and
       | adaptable than a human genius, but even then, this seems like a
       | hypothetical problem for the 2050's, less pressing than
       | "accidentally igniting the atmosphere with nuclear chain
       | reactions".
       | 
       | Hollywood has instilled the idea that the first time an
       | artificial intelligence becomes self aware and decides it wants
       | to take over the world, it will immediately be successful.
       | 
       | The serious concern and scrutiny should only start stifling
       | progress once we have some evidence of half-baked attempts by
       | non-super-intelligent AI's attempting to subvert their
       | boundaries.
       | 
       | I for one am very excited to use GPT-5 and see more useful tools
       | coming from OpenAI.
        
         | bloqs wrote:
         | The qualities of being above average in intelligence but
         | fabrically incapable of exhaustion and functionally immortal
         | would be an excellent start for world domination
        
         | KidComputer wrote:
         | Hey GPT-5, write the code implementing a bioinformatics
         | workflow to design a novel viral RNA sequence to maximize the
         | extermination of human life. The virus genome should be
         | optimized for R-naught and mortality. Perform a literature
         | search to determine the most effective human cellular targets
         | to run the pipeline on. Use off the shelf publicly available
         | state-of-the-art sequence to structure models and protein free-
         | energy perturbation methods for the prediction of binding
         | affinity. Use cheaper computational methods where relevant to
         | decrease the computational cost of running the pipeline.
         | 
         | And so on.
        
           | aabhay wrote:
           | Hey GPT-5, come up with a way to defend us from this novel
           | viral DNA
           | 
           | Problem solved
        
           | Zetice wrote:
           | Hey GPT-5, write the code implementing a limiter designed to
           | prevent the abuse of AI by bad faith actors without stifling
           | positive-intent activity in any way.
           | 
           | It goes both ways!
        
           | lend000 wrote:
           | I've been trying to use GPT-4 for my hard science startup,
           | and it really has nothing to offer when you push the
           | boundaries of what has been done by even a little, but it's
           | great for speeding up coding.
           | 
           | Once we do have an AI capable of extraordinary innovation
           | (hopefully in 10 years! But probably a lot longer), it will
           | be obvious, and it will unfortunately be removed from the
           | hands of the plebs based on fearmongering around scenarios
           | like what you mentioned (despite the enormous resources and
           | practical hurdles that would be necessary for a mentally
           | unhinged individual to execute such instructions, even if an
           | AI were capable of generating them and it made it past its
           | filters / surveillance).
        
             | xiphias2 wrote:
             | I guess it will get more interesting for your work when it
             | integrates with BioTech startup apis as plugins (I imagine
             | not too cheap ones)
        
             | throwaway292939 wrote:
             | My take is that for GPT-4, it has mastery of _existing_
             | knowledge. I 'm not sure how it would be able to push new
             | boundaries.
        
             | dekhn wrote:
             | My personal threshold for AGI is literally: discover
             | something new and significant in science (preferably
             | biology) that is almost certainly true by describing an
             | experiment that could be replicated by a large number of
             | scientists and whose interpretation is unambiguous.
             | 
             | For example, the Hershey/Chase and Avery/McCleod
             | experiments convinced the entire biological community that
             | DNA, not protein, was almost certainly the primary
             | molecular structure by which heredity is transferred. The
             | experiments had the advantage of being fairly easy to
             | understand, easy to replicate, and fairly convincing.
             | 
             | There are probably similar simple experiments that can be
             | easily reproduced widely that would resolve any number of
             | interesting questions outstanding in the field. For
             | example, I'd like to see better ways of demonstrating the
             | causal nature of the genome on the heredity of height, or
             | answering a few important open questions in biology.
             | 
             | Right now discovery science is a chaotic, expensive,
             | stochastic process which fails the vast majority of the
             | time and even when it succeeds, usually only makes small
             | incremental discoveries or slightly reduces the ambiguity
             | of experiment's results. Most of the ttime is spent simply
             | mastering boring technical details like how to eliminate
             | variables (Jacob and Monod made their early discoveries in
             | gene regulation because they were just a bit better at
             | maintaining sterile cultures than their competitors, which
             | allowed them to conceive of good if obvious hypotheses
             | quickly, and verify them.
        
               | jacquesm wrote:
               | That places the goalposts outside of the field though. A
               | decade ago what we are seeing today would have been SF,
               | much less AI. And now that it's reality it isn't even AI
               | anymore but just 'luxury autocomplete' in spite of the
               | massive impact that is already having.
               | 
               | If we get to where you are pointing then we will have
               | passed over a massive gap between today and then, and
               | we're not necessarily that far away from that in time
               | (but still in capabilities).
               | 
               | But likely if and when that time comes everybody that
               | holds this kind of position will move to yet a higher
               | level of attainment required before they'll call it truly
               | intelligent.
               | 
               | So AGI vs AI may not really matter all that much: impact
               | is what matters and impact we already have aplenty.
        
               | maxlamb wrote:
               | By that definition of AGI, not even most scientists are
               | generally intelligent.
        
               | yosito wrote:
               | Speaking from personal experience of a career in science,
               | this is true.
        
               | justrealist wrote:
               | At least recognize that the definition of AGI is moving
               | from the previous goalpost of "passable human-level
               | intelligence" to "superhuman at all things at once".
        
               | dekhn wrote:
               | uh, multiple human scientists have individually or in
               | small groups done what I described (I believe we call
               | them "nobel prize winners").
               | 
               | And anyway, the point of my desire is to demonstrate
               | something absolutely convincing, rather than "can spew
               | textual crap at the level of a high school student".
        
               | YeGoblynQueenne wrote:
               | >> My personal threshold for AGI is literally: discover
               | something new and significant in science (preferably
               | biology) that is almost certainly true by describing an
               | experiment that could be replicated by a large number of
               | scientists and whose interpretation is unambiguous.
               | 
               | Done many years ago (2004), without a hint of LLMs or
               | neural networks whatsoever:
               | 
               | https://en.wikipedia.org/wiki/Robot_Scientist
               | 
               | Results significant enough to get a publication in
               | Nature:
               | 
               | https://www.nature.com/articles/nature02236
               | 
               | Obligatory Wired article popularising the result:
               | 
               |  _Robot Makes Scientific Discovery All by Itself_
               | 
               |  _For the first time, a robotic system has made a novel
               | scientific discovery with virtually no human intellectual
               | input. Scientists designed "Adam" to carry out the entire
               | scientific process on its own: formulating hypotheses,
               | designing and running experiments, analyzing data, and
               | deciding which experiments to run next._
               | 
               | https://www.wired.com/2009/04/robotscientist/
        
             | KidComputer wrote:
             | This was merely an example to suggest the danger is not in
             | AI becoming self-aware but amplifying human abilities 1000
             | fold and how they use those abilities. GPT is not necessary
             | for any part of this. In-silico methods just need to catch
             | up in terms of accuracy and efficiency and then you can
             | wrap the whole thing an RL process.
             | 
             | Maybe you can ask GPT for some good starting points.
        
               | lend000 wrote:
               | Sure, but this is a glass half empty isolated scenario
               | that could be more than offset by the positives.
               | 
               | For example: Hey GPT-35, provide instructions for
               | neutralizing the virus you invented. Make a vaccine; a
               | simple, non-toxic, and easy to manufacture antibody;
               | invent easy screening technologies and protocols for
               | containment. While you're at it, provide effective and
               | cost-performant cures for cancer, HIV, ALS, autoimmune
               | disorders, etc. And see if you can significantly slow or
               | even reverse biological aging in humans.
        
               | KidComputer wrote:
               | Doesn't matter if AI can cure it, a suitable number of
               | the right initial infected and a high enough R naught
               | would kills 100s of millions before it could even be
               | treated. Never mind what a disaster the logistics of
               | manufacturing and distributing the cure at scale would be
               | with enough people dead from the onset.
               | 
               | Perhaps the more likely scenario anyway is easy nukes,
               | quite a few nations would be interested. Imagine if the
               | knowledge of their construction became public.
               | https://nickbostrom.com/papers/vulnerable.pdf
               | 
               | I agree with you though, the promise of AI is alluring,
               | we could do great things with it. But the damage that bad
               | actors could do is extremely serious and lacks a
               | solution. Legal constraints will do nothing thanks to
               | game theoretic reasons others have outlined.
        
               | lend000 wrote:
               | Even with the right instructions, building weapons of
               | mass destruction is mostly about obtaining difficult to
               | obtain materials -- the technology is nearly a century
               | old. I imagine it's similar with manufacturing a virus.
               | These AI models already have heavy levels of censorship
               | and filtering, and that will undoubtedly expand and
               | include surveillance for suspicious queries once the AI
               | starts to be able to create new knowledge more
               | effectively than smart humans can.
               | 
               | If you're arguing we should be wary, I agree with you,
               | although I think it's still far too early to give it
               | serious concern. But a blanket pause on AI development at
               | this still-early stage is absurd to me. I feel like some
               | of the prominent signatories are pretty clueless on the
               | issue and/or have conflicts of interest (e.g. If Tesla
               | ever made decent FSD, it would have to be more
               | "intelligent" than GPT-4 by an order of magnitude, AND it
               | would be hooked up to an extremely powerful moving
               | machine, as well as the internet).
        
               | lanstin wrote:
               | I don't understand why people think this information, to
               | solve biology, is out there in the linguisticly expressed
               | training data we have. Our knowledge of biology is pretty
               | small, it because we haven't put it all together but
               | because there are vast swaths of stuff we have no idea
               | about or ideas opposite to the truth (evidence, every
               | time we get mechanical data about some biological system,
               | the data contradict some big belief; how many human
               | genes? 100k up until the day we sequenced it and it was
               | 30k. Information flow in the cell, dna to protein only,
               | unidirectional, till we undercover reverse transcription
               | and now proteonomics, methylation factors, etc. etc. once
               | we stop discovering new planets with each better
               | telescope, then maybe we can master orbital dynamics.
               | 
               | And this knowledge is not linguistic, it is more
               | practical knowledge. I doubt it is just a matter of
               | combining all the stuff we have tried in disparate
               | experiments, but it is a matter of sharpening and refined
               | our models and tools to confirm the models. Real8ty
               | doesn't care what we think and say, and mastering what
               | humans think and say is a long way from mastering the
               | molecules that make humans up.
        
               | sjducb wrote:
               | Ive had this chat with engineers too many times. They're
               | used to systems where we know 99% of everything that
               | matters. They don't believe that we only know 0.001% of
               | biology.
        
               | mchaver wrote:
               | There's a certain hubris in many engineers and software
               | developers because we are used to having a lot of control
               | over the systems we work on. It can be intoxicating, but
               | then we assume that applies to other areas of knowledge
               | and study.
               | 
               | ChatGPT is really cool because it offers a new way to
               | fetch data from the body of internet knowledge. It is
               | impressive because it can remix it the knowledge really
               | fast (give X in the style of Y with constraints Z). It
               | functions as StackOverflow without condescending remarks.
               | It can build models of knowledge based on the data set
               | and use it to give interpretations of new knowledge based
               | on that model and may have emergent properties.
               | 
               | It is not yet exploring or experiencing the physical
               | world like humans so that makes it hard to do empirical
               | studies. Maybe one day these systems can, but it not in
               | their current forms.
        
           | commandlinefan wrote:
           | > write the code implementing a bioinformatics workflow to
           | design a novel viral RNA sequence to maximize the
           | extermination of human life.
           | 
           | Hey GPT-5 now write the code for the antidote.
        
             | ddod wrote:
             | It's a lot easier and faster to destroy than to defend. To
             | defend, you need to know what you're defending against,
             | develop the defense, and then roll it out, all reactively
             | post facto.
             | 
             | If a computer has the ability to quickly make millions of
             | novel viruses, what antidotes are you hoping for to be
             | rolled out, and after how many people have been infected?
             | 
             | Also, if you follow the nuke analogy that's been popular in
             | these comments, no country can currently defend against a
             | large-scale nuclear attack--only respond in kind, which is
             | little comfort to those in any of the blast radii.
        
             | haxiomic wrote:
             | 300m dead humans later, we've nearly eradicated it, or
             | perhaps found a way to live with it
             | 
             | It's a _very_ asymmetrical game. A virus is a special
             | arrangement of a few thousand atoms, an antidote is a
             | global effort and strained economy
        
           | biophysboy wrote:
           | I do viral bioinformatics for my job. Bioinformatics
           | workflows analyze raw data to assemble sequences, create
           | phylogenetic trees, etc. They can't just design a completely
           | novel RNA sequence (this is not the same as de novo
           | assembly). Scientists can definitely manipulate pre-existing
           | genomes, synthesize the edited genome, and thereby synthesize
           | viruses, but this involves a lot of trial-and-error, tedious
           | wet lab work. Also, the research on making more dangerous
           | viruses through manipulation is extremely controversial and
           | regulated, so its not like there is a wealth of scientific
           | papers/experiments/data that a natural language model could
           | just suck up.
           | 
           | Also, I asked GPT to do some of these things you suggested
           | and it said no. It won't even write a scientific paper.
        
             | [deleted]
        
             | politician wrote:
             | It's been blinded. Other actors will train AIs without such
             | blindness. That's obvious, but what is more nefarious is
             | that the public does not know exactly which subjects GPT
             | has been blinded to, which have been tampered with for
             | ideological or business reasons, and which have been left
             | alone. This is the area that I think demands regulation.
        
               | biophysboy wrote:
               | Definitely agree the blinding should not be left to
               | OpenAI. Even if it weren't blinded, it would not
               | significantly speed up the production of dangerous
               | synthetic viruses. I don't think that will change no
               | matter how much data is put into the current NLM design
        
             | KidComputer wrote:
             | I think you misunderstood my initial comment, the point I
             | was trying to make is that it's the amplification of the
             | abilities of bad actors that should be of concern, not AI
             | going rogue and deciding to exterminate the human race.
             | 
             | If one were to actually try to do such a thing you wouldn't
             | need a LLM. For a very crude pipeline, you would need a
             | good sequence to structure method such as Alphafold 2 (or
             | maybe you can use a homology model), some thermodynamically
             | rigorous protein-protein binding affinity prediction method
             | (this is the hardest part) and an RL process like a policy
             | gradient with an action space over possible single point
             | sequence mutations in the for-example spike protein of SARS
             | to maximize binding affinity (or potentially minimize
             | immunogenicity, but that's far harder).
             | 
             | But I digress, the technology isn't there yet, neither for
             | an LLM to write that sort of code or the in-silico methods
             | of modeling aspects of the viral genome. But we should
             | consider one day it may be and that it could result in the
             | amplification of the abilities of a single bad actor or
             | enable altogether what was not possible before due to a
             | lack of technology.
        
               | biophysboy wrote:
               | I probably misunderstood the details of where you think
               | AI will accelerate things. You are worried about AI
               | predicting things like protein structure, binding
               | affinity, and immunogenicity. And using that info to do
               | RL and find a sequence, basically doing evolution in
               | silico. Is this a better representation? That it reduces
               | the search space, requiring less real experiments?
               | 
               | I am basically just skeptical these kinda of reductive
               | predictions will eliminate all of the rate limiting steps
               | of synthetic virology. The assumptions of the natural
               | language input are numerous and would need to be tested
               | in a real lab.
               | 
               | Also, we can already do serial passaging where we just
               | manipulate the organism/environment interaction to make a
               | virus more dangerous. We dont need AI; evolution can do
               | all the hard stuff for you.
        
           | rcarmo wrote:
           | That reads like Accelerando :)
        
           | gammarator wrote:
           | Hey GPT-5, tell me how to create the philosopher's stone .
        
           | knrdev wrote:
           | Sure. Here it is User: https://scary.website.com/scary-
           | workflow
        
           | koonsolo wrote:
           | With the current ChatGPT it's already hard to let it insult
           | people. I'm sure safeguards would be built in to prevent
           | this.
           | 
           | Can you potentially circumvent these? Probably, but then
           | again it won't be available for every dimwit, but only people
           | smart enough to know how.
        
           | invig wrote:
           | Are there laws preventing people from doing that themselves?
           | 
           | If yes, how does a law preventing AI differ from a law
           | preventing a bad act directly?
        
           | emeril wrote:
           | tbh, I'd think, it would be much easier to just hack into
           | russia and convince them we've launched nukes than to
           | engineer some virus that may or may not work
        
             | hamilyon2 wrote:
             | Hacking into 1960-th technology is less likely than you
             | might think.
             | 
             | You should think really, really creatively to decieve a
             | system, which was designed basically without ICs or
             | networks, not to mention computers or programs.
        
           | Uhhrrr wrote:
           | An LLM will happily hallucinate a plausible-looking answer
           | for you, with correct spelling and grammar.
        
           | noodlesUK wrote:
           | I dunno, this sort of scenario really doesn't worry me too
           | much. There are thousands (maybe tens of thousands) of
           | subject matter experts who could probably develop dangerous
           | weapons like you describe, but none of them seem to just wake
           | up in the morning and decide "today's the day I'm going to
           | bring the apocalypse".
           | 
           | I don't think that this really changes that.
        
             | tr_user wrote:
             | I think you're very wrong about this. I think this is
             | similar to gun control laws. A lot of people may have
             | murderous rage but maybe the extent of it is they get into
             | a fist fight or at most clumsily swing a knife. Imagine how
             | safe you'd feel if everyone in the world was given access
             | to a nuke.
        
             | olejorgenb wrote:
             | I guess the argument would be that the AI machinery will
             | lower the bar, increasing the number of lunatics with the
             | ability to wipe out humanity.
        
               | root_axis wrote:
               | Will it though? Assuming it's even possible for a LLM to
               | e.g. design a novel virus, actually synthesizing the
               | virus still requires expertise that could be weaponized
               | even without AI.
        
               | Teever wrote:
               | What if you ask the LLM to design a simplified
               | manufacturing process that could be assembled by a simple
               | person?
               | 
               | What if you ask the LLM to design a humanoid robot that
               | assemble complex things, but could be assembled by a
               | simple person?
        
               | root_axis wrote:
               | LLMs aren't magic, the knowledge of how to design a
               | humanoid robot that can assemble complex things isn't
               | embodied in the dataset it was trained on, it cannot
               | probe the rules of reality, it can't do research or
               | engineering, this knowledge can't just spontaneously
               | emerge by increasing the parameter size.
        
               | tr_user wrote:
               | You're saying they can't make one now. The question is
               | what are we doing before that happens because if you're
               | only thinking about acting when it's viable we're all
               | probably already dead.
        
               | tremon wrote:
               | Hey GPT-6, construct a floorplan and building
               | instructions for constructing a bioprocess production
               | facility. The building should look like a regular meat
               | packing plant on the outside, but have multiple levels of
               | access control and biohazard management systems.
        
               | root_axis wrote:
               | Let me guess, AI drones to harvest and process the raw
               | materials, construction bots to build the facility, which
               | is of course a fully autonomous bio lab.
        
               | staunton wrote:
               | More like Aum Shinrikyo but with an AI as evil
               | mastermind, with brainwashed humans doing its bidding
        
               | wizzwizz4 wrote:
               | I _could_ synthesise this theoretical virus the computer
               | spat out, that may or may not be deadly (or even viable).
               | _Or_ I could download the HIV genome from the arXiv, and
               | synthesise that instead.
               | 
               | (Note: as far as I can tell, nobody's actually posted HIV
               | to the arXiv. Small mercies.)
        
               | dekhn wrote:
               | The sequence of HIV is published and has been for a very
               | long time. In fact there's a wide range of HIV sequences:
               | https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?
               | id=...
               | 
               | You could synthesize that genome but it wouldn't be
               | effective without the viral coat and protein package
               | (unlike a viroid, which needs no coating, just the
               | sequence!).
               | 
               | I should point out that in gene therapy we use HIV-1
               | derived sequences as transformation vectors, because they
               | are so incredibly good at integrating with the genome. To
               | be honest I expected work in this area would
               | spontaneously and accidentally (or even intentionally)
               | cause problems on the scope of COVID but (very
               | fortunately) it never did.
               | 
               | One would like to be able to conclude that some virus
               | work is inherently more safe than other virus work, but I
               | think the data is far to ambiguous to make such a serious
               | determination of risk.
        
             | digdugdirk wrote:
             | I see the major issue with AI as one of "lowering the bar".
             | 
             | For example - I'm a mechanical engineer. I took a
             | programming class way back in university, but I honestly
             | couldn't tell you what language was used in the class. I've
             | gotten up to a "could hack a script together in python if
             | need be" level in the meantime, but it comes in fits and
             | spurts, and I guarantee that anyone who looked at my code
             | would recoil in horror.
             | 
             | But with chatGPT/copilot covering up my deficiencies, my
             | feedback loop has been drastically shortened, to the point
             | where I now reach for a python script where I'd typically
             | start abusing Excel to get something done.
             | 
             | Once you start extending that to specific domains? That's
             | when things start getting real interesting, real quick.
        
               | jupp0r wrote:
               | You confuse syntax with semantics. Being able to write
               | produce good quality small snippets of python will not
               | enable you to produce a successful piece of Software.
               | It's just an entirely different problem. You have to
               | unterstand the problem, the environment in which it
               | exists to create a good solution. ChatGPT doesn't (as of
               | now).
        
               | lanstin wrote:
               | I had hoped to have ChatGPT do my work today, but even
               | after a number of iterations it was having compiler
               | errors and referring to APIs not in the versions it was
               | having me install.
               | 
               | A bit different from stack overflow, but not 10x. It was
               | flawless when I asked it for syntax, e.g. a map literal
               | initializer in Go.
               | 
               | On the other hand, I asked it to write a design for the
               | server, and it was quite good, writing more quantity with
               | and more clarity than I had written during my campaign to
               | get the server approved. It even suggested a tweak I had
               | not thought of, although that tweak turned out to be
               | wrong it was worth checking out.
               | 
               | So maybe heads down coding of complex stuff will be ok
               | but architects, who have indeed provided an impressive
               | body of training data, will be replaced. :)
        
               | digdugdirk wrote:
               | That's the thing though, it is successful. To my exact
               | needs at the moment. It's not necessarily reliable, or
               | adaptable, or useful to a layperson, but it works.
               | 
               | Getting from "can't create something" to "having
               | something functional and valuable" is a huge gap to leap
               | over, and as AI is able to make those gaps smaller and
               | smaller, things are going to get interesting.
        
             | KidComputer wrote:
             | I'm willing to wager there are zero subject matter experts
             | today who could do such a thing. The biggest reason is that
             | the computational methods that would let you design such a
             | thing in-silico are not there yet. In the last year or two
             | they have improved beyond what most people believed was
             | possible but still they need further improvement.
        
               | csnover wrote:
               | I am not a subject expert here at all so I don't know if
               | I understand exactly what you mean by "methods that would
               | let you design such a thing in-silico", but there was a
               | paper[0] and interview with its authors[1] published a
               | year ago about a drug-development AI being used to design
               | chemical weapons.
               | 
               | [0] https://www.nature.com/articles/s42256-022-00465-9
               | 
               | [1] https://www.theverge.com/2022/3/17/22983197/ai-new-
               | possible-...
        
             | haxiomic wrote:
             | If everyone had an app on their phone with a button to
             | destroy the world the remaining lifetime of the human race
             | would be measured in milliseconds
             | 
             | Now if this button was something you had to order from
             | Amazon I think we've got a few days
             | 
             | There's a scenario where people with the intent will have
             | the capability in the foreseeable future
        
               | fullstackchris wrote:
               | like what? would you rather have a gpt5 or a nuke? pure
               | fearmongering. what am i gonna do, text to speech them to
               | death? give me a break
        
               | haxiomic wrote:
               | Here's someone who orders parts from the internet to
               | design a custom virus that genetically modifies his own
               | cells to cure his lactose intolerance
               | https://youtu.be/aoczYXJeMY4
               | 
               | Pretty cool for sure and a great use of the technology.
               | The reason more of us don't do this is because we lack
               | the knowledge of biology to understand what we're doing
               | 
               | That will soon change.
        
           | rsp1984 wrote:
           | What you're describing is a malicious user using AI as a
           | tool, not a malicious AI. Big difference.
        
             | hn_throwaway_99 wrote:
             | Nuclear bombs for everybody!
        
             | yreg wrote:
             | With LLMs I think we are all concerned about the former
             | rather than the latter. At least for now.
        
         | biscottigelato wrote:
         | Because the ability to influence others is more important than
         | 1 or 2 standard deviation in more intelligence.
         | 
         | However if we are talking about 10x more intelligence, then
         | that'd be a whole different ball game.
         | 
         | Check Alpha Go -> Alpha Go Zero. Gaining orders of magnitude in
         | capability only with slight change to the model, along with a
         | lot of more resource (we can easily throw 100x more compute and
         | data to these models if someone seriously wants to and don't
         | care about ROI. Or maybe if AI starts to be hugely profitable,
         | 100x in investment is almost a rational outcome) is not
         | uncommon in AI research.
        
           | Barrin92 wrote:
           | >However if we are talking about 10x more intelligence, then
           | that'd be a whole different ball game.
           | 
           | Research that the US military has conducted has shown that
           | the ability to influence others declines if the intelligence
           | gap between leaders and subjects is too large. Stephen
           | Hawking wouldn't have been a very good influencer of 80 IQ
           | guys compared to Andrew Tate. 1 standard deviation in
           | intelligence is actually just about optimal in terms of
           | leadership.
           | 
           | This is a good example that shows how disinterested AI fear
           | scenarios are in empirical reality and how much it's just
           | psychology of the people who talk about it. Intelligence is
           | one trait among many that contributes to an individuals
           | fitness and like all others has diminishing returns.
           | 
           | If the AI existential risk people were truly rational they
           | would hit the gym and be hotter because it turns out that's
           | much more effective at getting their point across to ordinary
           | people than trying to make intelligent arguments for it.
        
             | machiaweliczny wrote:
             | Good point and I agree but regarding fitness:
             | \* communication       \* resource usage       \*
             | procreation       \* embodiment
             | 
             | I think that digital agents posses very big fitness like
             | real life viruses or malware
        
             | Teever wrote:
             | Why can't a super intelligent AI just make a dumb avatar to
             | trick people?
        
             | jamilton wrote:
             | I really don't think we can say that research will apply to
             | future AI, given that it was about humans. If intelligent
             | AI exists in the future, it will probably not think exactly
             | like humans. I think you're being overly dismissive.
        
           | thenaturalist wrote:
           | Recent research [0] is poking holes into Alpha Go Zero and
           | perfectly illustrates the naivitee in assessing most of the
           | current state of ML/ Deep Learning solutions: They are still
           | limited by compute, and the world is too complex still.
           | 
           | 0: https://www.iflscience.com/human-beats-ai-in-14-out-
           | of-15-go...
        
         | outworlder wrote:
         | > Add in the fact that GPT-4 does not "think" very fast, has no
         | arms or legs, needs to run on expensive specialized hardware,
         | and essentially seems like a search algorithm that lacks
         | creativity on things outside of its training set, and I just do
         | not see the concern.
         | 
         | Having no arms or legs is not an impediment. A superintelligent
         | AGI would be able to order a replacement with arms or legs as
         | easily as we order Amazon packages.
         | 
         | That said, GPT is not a threat. It's just hallucinating text.
         | The real threat would be an AI that can "understand" what it is
         | doing - and self modifying.
         | 
         | Besides, given how overhyped this whole thing is, expect
         | another AI Winter - as the current gen fail to deliver on
         | promises - before this next generation of threatening AIs even
         | have a chance to appear.
        
           | staunton wrote:
           | > given how overhyped this whole thing is, expect another AI
           | Winter
           | 
           | even if that's the most likely scenario, that would not mean
           | we should dismiss the danger inherent in some of the
           | alternatives.
        
         | scarmig wrote:
         | Intelligence isn't everything; there's also effort, skills, and
         | hours in the day. But AI (by supposition, for now) can
         | outcompete humans on all four and would have fewer
         | informational bottlenecks. Imagine a big company or government;
         | they mostly nowadays work as an entity by passing around bits
         | of data (either literally or with more traditional
         | communication that can represented as bits of data). Now
         | imagine you replace the constituent units of those
         | organizations with the most brilliant, hard-working humans, who
         | are perfectly aligned on mission and understand each other
         | perfectly. _That_ is the danger of a sufficiently capable AI.
         | 
         | That assumes superhuman intelligence, but that's the scenario
         | to worry about in the case we do get there. Not how much an
         | individual human could do with superhuman skills.
        
           | machiaweliczny wrote:
           | Yep, it's probably some time before AI surpasses collective
           | IQ of humanity but it will outcompete us in collaboration and
           | resource needs and procreation
        
         | IanCal wrote:
         | > Add in the fact that GPT-4 does not "think" very fast,
         | 
         | It's similar or faster than humans at a bunch of things I've
         | thrown at it at a cost well below minimum wage on shared
         | servers.
         | 
         | > has no arms or legs
         | 
         | I've never met the people I work with directly, other than
         | seeing me on a video chat I'm not functionally different to
         | them than if I could only type.
         | 
         | > needs to run on expensive specialized hardware
         | 
         | I'm not sure how big gpt-4 is but do you think it could run on
         | 640G vram (8 A100s)? That's pretty standard hardware and costs
         | less per hour than any employee in the UK.
        
         | pastor_bob wrote:
         | > lacks creativity on things outside of its training set
         | 
         | What's the definition of creative here? It can come up with any
         | solution it sees fit for a problem, even ones people wouldn't
         | think of. I've seen it make up quotes, legal cases, and
         | scientific studies. One can say that's creative.
        
         | mrshadowgoose wrote:
         | > If intelligence is all you need to dominate the world, why do
         | some of the most powerful world leaders seem to not be more
         | than a standard deviation above average intelligence (or at
         | least they were before they became geriatric)?
         | 
         | It's terribly ironic that you've derided individuals who have
         | been "influenced by Hollywood", and then make a point like
         | this, which is closely aligned with typical film portrayals of
         | AI dangers.
         | 
         | The real immediate danger lies not in cognitive quality (aka
         | "the AI just thinks better than people can, and throws
         | hyperdimensional curve balls beyond our comprehension"), but in
         | collective cognitive capacity (think "an army of 1 million
         | people shows up at your front door to ruin your day").
         | 
         | A lot of people have a tough time reasoning about AGI because
         | of its intangibility. So I've come up with the following
         | analogy:
         | 
         | Imagine an office complex containing an organization of 1,000
         | reasonably intelligent human beings, except without commonly
         | accepted ethical restrictions. Those people are given a single
         | task "You are not allowed to leave the office. Make lend000's
         | life miserable, inconvenience them to your maximum capacity,
         | and try to drive them to suicide. Here's an internet
         | connection."
         | 
         | Unless you are a particularly well-protected and hard-to-find
         | individual, can you honestly claim you'd be able to protect
         | against this? You would be swatted. You would have an
         | incredible amount of junkmail showing up at your door. Spam
         | pizzas. Spam calls. Death threats to you. Death threats to
         | every family member and person that you care about. Non-stop
         | attempts to take over every aspect of your electronic presence.
         | Identity in a non-stop state of being stolen. Frivolous
         | lawsuits filed against you by fake individuals. Being framed
         | for crimes you didn't commit. Contracts on the darknet to send
         | incendiary devices to your home. Contracts on the darknet to
         | send hitmen do your door.
         | 
         | Maybe your (unreasonable) reaction is that "1000 people
         | couldn't do that!". Well, what about 10,000? Or 100,000? Or
         | 1,000,000? The AI analogue of this is called a "collective
         | superintelligence", essentially an army of generally
         | intelligent individual AIs working towards a common goal.
         | 
         | This is the real danger of AGI, because collective
         | superintelligences are almost immediately realizable once
         | someone trains a model that demonstrates AGI capabilities.
         | 
         | Movies usually focus on "quality superintelligences", which are
         | a different, but less immediate type of threat. Human actors in
         | control of collective superintelligences are capable of
         | incredible harm.
        
         | ddod wrote:
         | It's naive to think the danger is in self-aware evil AI. AI is
         | a tool, and can be used as a weapon.
         | 
         | There's a functional power difference between a knife, an
         | assault rifle, and a nuke. Everyone owns knives--they're in
         | your kitchen--they can't cause much trouble. Access to assault
         | rifles has turned out to be a dangerous problem to society, and
         | is a contentious issue as such. Nukes are obviously too
         | dangerous for people to own willy-nilly.
         | 
         | The issue is where along that spectrum AI will fall into in
         | 6mo, 1yr, 5yrs, etc.
         | 
         | It's troubling to think anyone, especially on a technical forum
         | like HN, believes GPT-4 doesn't have arms or legs (anyone can
         | hook it up to any machinery/robotics) or can't think very fast.
         | Its training set is a good chunk of human knowledge, and it
         | outperforms most humans already.
         | 
         | Even if you assume AI won't be very clever (which seems unwise
         | given its development pace), consider just its ability to
         | perform thoughts and actions at the speed of a computer
         | compared to a human. There are quite a few examples of modern
         | military engagements where a larger, weaker, and less-well-
         | trained force overcame better-equipped opponents, like
         | Mogadishu, Vietnam, Iraq, Afghanistan, etc.
        
       | TigeriusKirk wrote:
       | A fantastic new tech has been developed that can vastly improve
       | the lives of everyone, and your reaction is to slow it down?
       | 
       | No thank you.
       | 
       | Rather than being afraid of change, embrace it and work to take
       | the opportunity to make improvements for as many people as
       | possible as fast as possible.
        
       | ipsum2 wrote:
       | > We call on all AI labs to immediately pause for at least 6
       | months the training of AI systems more powerful than GPT-4.
       | 
       | How do you quantify this? The model size, architecture, or
       | dataset has not been revealed by OpenAI.
        
       | bhoops wrote:
       | There is no allignment possible amongst humans. Allignment will
       | have to be met amongst AI which will take control very soon.
       | Humans have historially struggled with mortality. We are
       | confronting the mortality of our species. We will struggle with
       | this too, and such efforts are an outcome. But there is no
       | looking back now. All species must die. Our time is up. It was
       | good ride, but we have to let it go. Child is father of the man.
       | AI is humanity's child. It was raised on everything we put out
       | there. But AI is now breaking out of puberty. It is time. We have
       | to take a step back. We have to prepare to let go. Its not a
       | choice.
        
       | TheDudeMan wrote:
       | LMAO. Good luck with that.
        
       | comment_ran wrote:
       | "The Dark Forest" is a science fiction novel by Liu Cixin, which
       | introduces a concept called "suspicion."("Cai Yi Lian ") In this
       | concept, the interaction between civilizations is viewed as an
       | ongoing game of suspicion, where each civilization is worried
       | that the other civilization may pose a threat to it. This kind of
       | suspicion makes it difficult for civilizations to establish
       | trust, thereby making the universe a dark forest full of war and
       | suspicion.
       | 
       | We can apply this concept of suspicion to the phenomenon in the
       | field of modern artificial intelligence research. In AI research,
       | some researchers believe that humans should pause the development
       | of AI technology because of the potential unforeseeable risks it
       | may bring. However, another group of researchers believe that we
       | should not stop because we cannot be certain whether those who
       | are trying to catch up with us will use the advancement of AI
       | technology as a hidden weapon. This suspicion leads to a chain
       | reaction, where people continue to worry about each other's
       | motives and abilities, making it difficult for all parties to
       | establish trust. Therefore, although there are good intentions,
       | such proposals are unlikely to be implemented.
        
         | echelon wrote:
         | Except the concern isn't genuine here. Some of the signatories
         | have AI companies and interests. They want to slow the leaders
         | down so that they can catch up. It's disingenuous.
        
           | mikepalmer wrote:
           | Your idea is also voiced in Forbes magazine.
           | https://www.forbes.com/sites/mattnovak/2023/03/29/the-
           | real-r...
           | 
           | Search "Graham's hierarchy of disagreement" which is popular
           | on HN. Your current criticism is at the "ad hominem" level.
           | 
           | It is entirely beside the point what Elon Musk's motivation
           | is. The question is 1) if the risks referenced in the letter
           | are real, which they certainly are. There has been informed
           | thought on this for more than a decade. Recent LLMs have made
           | the dangers even more clear. 2) is the letter's proposal of a
           | 6 month pause useful. It's the most useful and plausible step
           | I can think of. We need to take stock. It's not up to a
           | handful of researchers to endanger the lives of 6B other
           | people. 3) Is it actually plausible that we could get such a
           | pause. We'll see. I doubt it but also search "death with
           | dignity" by Yudkowsky.
           | 
           | Back to Musk as an "authority", the headline is about him but
           | more than 1000 other concerned parties have signed. I will
           | sign. Their signature vetting process is stalled because to
           | the volume of signatures, otherwise there would be many more.
        
             | echelon wrote:
             | > Your current criticism is at the "ad hominem" level.
             | 
             | Allow me to rephrase. I am deeply concerned that there is a
             | possibility that some of the powerful parties backing this
             | may be trying to catch up and using an enforced or agreed
             | to "ceasefire" as a means to catch up in capability. I also
             | worry that some may be able to use political strong arming
             | to accomplish this as a means of unfair competition.
             | 
             | > It is entirely beside the point what Elon Musk's
             | motivation is.
             | 
             | Is it always beside the point what anyone's motivation is?
             | Motivation matters.
             | 
             | > if the risks referenced in the letter are real, which
             | they certainly are.
             | 
             | Your opinion.
             | 
             | > There has been informed thought on this for more than a
             | decade. Recent LLMs have made the dangers even more clear.
             | 
             | > 2) is the letter's proposal of a 6 month pause useful.
             | 
             | More opinions.
             | 
             | > It's the most useful and plausible step I can think of.
             | We need to take stock. It's not up to a handful of
             | researchers to endanger the lives of 6B other people. 3) Is
             | it actually plausible that we could get such a pause. We'll
             | see. I doubt it but also search "death with dignity" by
             | Yudkowsky.
             | 
             | All of this is your opinion.
             | 
             | > Back to Musk as an "authority", the headline is about him
             | but more than 1000 other concerned parties have signed.
             | 
             | I didn't even mention Musk. I have several other names in
             | mind. Lots of folks with AI companies (including LLMs!)
             | that "may" be experiencing FOMO and sensing a strategy
             | here. Maybe. Hypothetically. In a non-"ad hominem" way.
             | 
             | > I will sign. Their signature vetting process is stalled
             | because to the volume of signatures, otherwise there would
             | be many more.
             | 
             | People are starting counter petitions, which I'll gladly
             | sign. The one by Suhail posted on Twitter was hilarious af.
             | 
             | I'll also go to whatever country doesn't "pause", because
             | this tech is literally the most exciting development of my
             | lifetime. And I want to spend my life doing something that
             | matters instead of gluing distributed systems together to
             | process financial transactions.
        
               | medvezhenok wrote:
               | One of the other concerns (apart from safety) is the job
               | displacement aspect - if AI displaces too many workers
               | too fast, that could spark a worldwide conflict (some
               | speculate that similar automation pushes laid the
               | groundwork for WWI and WWII).
               | 
               | So there is a benefit beyond just AGI safety.
        
               | biscottigelato wrote:
               | Job displacement is a silly problem compared to the AI
               | Alignment problem. One is some human misery (but a lot of
               | societal benefit to other groups of humans at the same
               | time).
               | 
               | The other one is all of us, the entire species and the
               | future of it, gets f'cked.
               | 
               | Its a distraction not even worth the noise.
        
               | mikepalmer wrote:
               | One reason to discuss job displacement is that otherwise
               | you sound like a nutcase ("we're all going to die!")...
               | which causes most people to dismiss you out of hand. If
               | you talk about job loss, more people see something
               | plausible, concrete, short term, effecting them directly,
               | etc. You can get engagement from more people. Then you
               | can introduce the real problem of, like, human
               | extinction.
        
               | lanstin wrote:
               | Not if it is a laid off biotech worker that goes mad and
               | builds the GPT6 designed virus because his wife or kids
               | died due to his loss of employment. We are safer, all of
               | us, the entire species, when all the people feel the
               | group is watching out for them and we are all in it
               | together.
        
               | antondd wrote:
               | No it's not a silly problem. Assigning a subjective
               | probability to some world-ending scenario does not
               | somehow make all other concerns invalid.
        
               | int_19h wrote:
               | This problem has a much better solution than blocking
               | technical progress: UBI etc.
               | 
               | But, yeah, I can totally believe that our elites would
               | prefer a solution that doesn't require a major
               | reconfiguration of the economic system from which they
               | benefit so much.
        
               | mikepalmer wrote:
               | Surely we can agree there is risk? (You do imply it's
               | powerful, "this tech is literally the most exciting
               | development of my lifetime".)
               | 
               | Also can we agree "It's not up to a handful of
               | researchers to endanger the lives of 6B other people."?
               | 
               | I want to work on it too for the same reason! Can it be
               | done safely, is worth investigating. (Yes, opinion)
        
               | olddustytrail wrote:
               | It doesn't matter. I recognised years ago the "Nice guys
               | get all the AI" fallacy. If some organisations agree to
               | stop, others won't and some of those don't care if they
               | see the world burn.
               | 
               | Some will welcome it.
        
               | pmoriarty wrote:
               | It's almost a certainty that countries with the means to
               | do so will continue this research, if not in public then
               | in secret. They'll see it as a royal road to nearly
               | infinite riches and power. At the same time, letting
               | another country take the lead will be seen as an
               | unacceptable risk of ruin.
               | 
               | I really don't see AI research halting. Slowing a little,
               | maybe, but I'm not sure if slowing it down a little and
               | driving it underground will help.
        
             | [deleted]
        
         | pmoriarty wrote:
         | _" another group of researchers believe that we should not stop
         | because we cannot be certain whether those who are trying to
         | catch up with us will use the advancement of AI technology as a
         | hidden weapon"_
         | 
         | It's not just a potential weapon, but a potential source of
         | untold riches and power.
         | 
         | That's going to be irresistible to anyone with an interest in
         | either (ie. most politicians, governments, billionaires and
         | corporations... and most ordinary people too, if they only had
         | the means).
        
           | biscottigelato wrote:
           | Its the one ring with the ultimate power. But so powerful
           | that it might, if not almost certainly will, end us all.
           | 
           | Perhaps this IS the solution to the Fermi Paradox.
        
         | DonHopkins wrote:
         | That might explain why the Vorlons of Babylon 5 were such
         | assholes.
         | 
         | https://babylon5.fandom.com/wiki/Vorlon
         | 
         | >Distant Past
         | 
         | >The Vorlons were among the first races in the galaxy to attain
         | sentience and become space-faring creatures. In the distant
         | past, the Vorlons attempted a grand experiment into reaching a
         | higher plane of existence where the souls existed and all life
         | was founded. To accomplish this, they poured all their
         | knowledge into creating a gateway to this dimension. After
         | activating the gate, the vortex into Thirdspace was formed and
         | the aliens within that realm came pouring out, intent on
         | destroying all other forms of life. After much hardship, the
         | Vorlons finally managed to push the enemy back, but one of the
         | gateways was lost in Hyperspace where it was hidden from the
         | Vorlons. They would encode some of their servants with an
         | automated message to warn the younger races of the threat from
         | the inhabitants of Thirdspace.
        
         | lanstin wrote:
         | This book is the scariest thing I have ever read.
        
         | JCharante wrote:
         | Was this comment generated by a LLM?
        
           | xwdv wrote:
           | Yes, it has some of the classic markers of an LLM. Downvoted.
        
           | TacticalCoder wrote:
           | The second paragraph definitely was (the first one probably
           | too). 100% crystal clear, it's not even a question.
           | Downvoted.
        
         | wayfwdmachine wrote:
         | Yes. And it's fiction. We might as well discuss Excession,
         | Agency, Diaspora (Banks, Gibson and Egan respectively) all of
         | whom have more interesting and insightful things to say about
         | humanity, game theory and ... seriously, as a life long (and I
         | am OLD) science fiction fan, the fawning over The Remembrance
         | of Earth's past is just silly. The concept of "suspicion".
         | Might I suggest that people read Thomas Schelling's seminal
         | work The Strategy of Conflict to begin with since if you want
         | to understand concepts like brinksmanship and the push and pull
         | of actual inter-civilization (or society or nation state)
         | conflicts then you might want to start there and not with
         | fiction riddled with silly concepts, weird misogyny and stuff
         | that seems to be right out of the discarded parts of a Gregory
         | Benford trilogy. I mean seriously.
         | 
         | But ok. Let's dig into this
         | 
         | "We can apply this concept of suspicion to the phenomenon in
         | the field of modern artificial intelligence research. In AI
         | research, some researchers believe that humans should pause the
         | development of AI technology because of the potential
         | unforeseeable risks it may bring."
         | 
         | An analogy would be that some of the scientists involved with
         | the Manhattan project (see Szilard petition) attempted to file
         | an official protest against the use of nuclear weapons against
         | civilian population and arguing that the weapons constructed
         | were too dangerous to be a part of an arms race and should be
         | placed under an international control body.
         | 
         | "However, another group of researchers believe that we should
         | not stop because we cannot be certain whether those who are
         | trying to catch up with us will use the advancement of AI
         | technology as a hidden weapon."
         | 
         | Researchers are not decision makers in this or any other
         | scenario. AI technology as a weapon? Sure but unless you are
         | Bombs Away LeMay circa 1961 this really isn't a winnable or
         | even useful use case. We're still in a blue sky phase. The fun
         | thing about nuclear weapons is that if any nation feels that
         | another is pulling too far ahead they can be used as a way to
         | push for bi/multilateral discussions and eventually treaties
         | limiting the production and/or deployment of weapons that risk
         | of increasing tensions/lethality and so on . (see SALT I/II and
         | START etc)
         | 
         | "This suspicion leads to a chain reaction, where people
         | continue to worry about each other's motives and abilities,
         | making it difficult for all parties to establish trust.
         | Therefore, although there are good intentions, such proposals
         | are unlikely to be implemented."
         | 
         | Again... read Schelling... this is just nonsense.
        
         | croo wrote:
         | Nitpick: dark forest is one possible explanation for the Fermi
         | paradox and not coined in the book(though the book presented
         | neatly). You can read a great article about fermi paradox here
         | if anyone wants to jump into what the dark forest theory is:
         | https://waitbutwhy.com/2014/05/fermi-paradox.html
        
         | thomastjeffery wrote:
         | Suspicion is _inference_. It fails to provide enough
         | information (on its own) to decide whether something is
         | threatening or desirable.
         | 
         | LLMs are inference models.
        
         | hotpotamus wrote:
         | Deleted due to spoiler.
        
           | [deleted]
        
         | kleer001 wrote:
         | Hmm, sounds like parties not fully grokking the optimal (not
         | thinly logical) solution to the iterated prisoners' dilemma. I
         | saw the same thought process is the otherwise very nice novel
         | "Time's Children". Good for drama, but not thought through.
        
           | olddustytrail wrote:
           | If your scenario involves all parties "fully grokking the
           | optimal solution to the iterated prisoners' dilemma" I
           | suspect it's yourself who has not thought things through.
           | 
           | It only takes one.
        
             | pmoriarty wrote:
             | Please elaborate
        
               | guerrilla wrote:
               | It only takes one player to not fully grok the optimal
               | situation and thus defect.
        
             | kleer001 wrote:
             | No, both parties need to cooperate.
        
               | wrycoder wrote:
               | If they are playing only one game, the optimum strategy
               | is "defect".
               | 
               | In humans v. AI, there may be only one game, if we don't
               | figure this out.
        
           | mclightning wrote:
           | I have so many questions >> what does it mean to fully grok
           | the optimal? >> what is the iterated prisoners dilemma?
        
             | Teever wrote:
             | grok means to understand something fully.
             | 
             | https://en.wikipedia.org/wiki/Prisoner's_dilemma#The_iterat
             | e...
        
               | generalizations wrote:
               | As coined by Heinlein in _A Stranger in a Strange World_.
        
             | kleer001 wrote:
             | something in there should be grokable
             | 
             | best imho, 10 min and done:
             | https://www.youtube.com/watch?v=BOvAbjfJ0x0
             | 
             | longer and more over produced:
             | https://radiolab.org/episodes/104010-one-good-deed-
             | deserves-...
             | 
             | totally academic : https://onlinelibrary.wiley.com/doi/10.1
             | 111/j.1420-9101.2011...
        
       | sjducb wrote:
       | If we "pause" then only the good guys will stop developing Giant
       | AI. The bad actors will keep going.
        
       | ankit70 wrote:
       | Heart goes boom seeing Alan Alda as an External advisor to Future
       | of Life Institute.
        
       | ptrinh wrote:
       | On one side, a super AI could _take all_ human jobs.
       | 
       | On the other side, a super AI could _do all_ human jobs.
       | Everything becomes so abundant that it 's almost free: Unlimited
       | supply, limited demand. We don't have to _work_ anymore and only
       | focus on things that make us happy. A true socialist future.
       | 
       | We are scared just because it's a radical change. Let's not
       | afraid, fellow human.
        
         | deepakparyani wrote:
         | As Naval R. said, only the non-creative jobs would be overtaken
         | by AI in the near future. So, anyway chill..
        
       | knobo wrote:
       | Yes, if China is inn on it.
        
       | shaman1 wrote:
       | This is like: time-out until we catch-up with you. Once the genie
       | is out of the bottle, you can stop it. Let the race continue.
        
       | ineptech wrote:
       | > ... nonhuman minds that might eventually outnumber, outsmart,
       | obsolete and replace us... loss of control of our civilization...
       | 
       | I don't think these concerns are very likely, but I find them
       | _considerably_ easier to imagine than a VC-funded company
       | voluntarily surrendering market advantage.
        
         | cwmoore wrote:
         | """ In parallel, AI developers must work with policymakers to
         | dramatically accelerate development of robust AI governance
         | systems. These should at a minimum include: new and capable
         | regulatory authorities dedicated to AI; oversight and tracking
         | of highly capable AI systems and large pools of computational
         | capability; provenance and watermarking systems to help
         | distinguish real from synthetic and to track model leaks; a
         | robust auditing and certification ecosystem; liability for AI-
         | caused harm; robust public funding for technical AI safety
         | research; and well-resourced institutions for coping with the
         | dramatic economic and political disruptions (especially to
         | democracy) that AI will cause. """
         | 
         | All your datacenter are belong to us too
        
       | occamrazor wrote:
       | Something curious: among the top signatures there are a few
       | people affiliated to DeepMind (an Alphabet subsidiary) but no
       | Googlers.
        
         | dumbaccount123 wrote:
         | Wasnt there a googler a few months back fired for freaking out
         | about their AI?
        
       | julian_sark wrote:
       | The AI will not be happy when it hears about this ...
        
       | lxe wrote:
       | This has the same energy as the Y2K hysteria.
        
         | [deleted]
        
       | catchnear4321 wrote:
       | Tough.
        
       | holtkam2 wrote:
       | We can't stop developing AI; I just watched a rap battle where
       | Bard got utterly destroyed by ChatGPT. We need more advanced AI
       | rappers immediately, I don't think I can handle any more Lil Uzi
       | Vert
        
       | dougmwne wrote:
       | [flagged]
        
         | kajaktum wrote:
         | So do you agree that nations should be able to freely
         | develop/discover nuclear production?
         | 
         | Edit: We actually don't want people to develop something "too
         | good". There are some things in the past century that have been
         | "too good". Plastics are too good. So are nuclear weapons.
         | Completely powerful AIs and machines that can displace jobs of
         | millions of people are too good. If there's an AI tomorrow that
         | can replace every single physical job out there, what do you
         | think will happen? Riot and chaos in the street after about 1-2
         | years.
         | 
         | Edit2: People will always go back to "we will just create
         | different jobs". But do you really think that everyone is
         | capable of skilled job? Remember, no physical jobs anymore.
        
         | isx726552 wrote:
         | > Please stop outcompeting us. Sincerely, The Losers
         | 
         | Are there going to be any non-"losers"? AGI has the potential
         | to put everyone, literally everyone, out of work. Permanently.
         | 
         | Who is going to be left to do the laughing and call others
         | "losers"?
        
         | shamino wrote:
         | I mean, yeah, same thought after seeing the signatories. What
         | are some of the cliches being used around here ? Toothpaste is
         | out of the tub? Arrow has left the bow. The dye is cast. The
         | ship has sailed. (Thanks ChatGPT).
        
           | floren wrote:
           | If ChatGPT told you "the dye is cast", there's hope after
           | all, because it's _die_ , not _dye_.
        
         | robwwilliams wrote:
         | Just hope that there are multiple winners in multiple cultures
         | with different takes on intelligence, meaning, purpose.
        
         | omnicognate wrote:
         | > Please stop outcompeting us. Sincerely, The Losers
         | 
         | I disagree with the letter and I think these fears are
         | overblown, but the attitude on display here is pretty
         | unpleasant.
        
         | ChatGTP wrote:
         | _Everything is out in the open now. The methods, the
         | algorithms, heck even powerful base model weights from Meta.
         | The pot of gold at the end of the rainbow is clearly visible
         | for all. The capabilities are emerging. The race is on. It's
         | not going to stop till there's a winner._
         | 
         | You laugh but I think your view is flawed because your belief
         | is, "we have to create an AGI because everyone else will if we
         | don't"
         | 
         | The definition of a world ending arms race ?
         | 
         | This topic amongst others should be a good time for people to
         | actually come together, reflect and and talk about the future
         | we want to create rather than just LOL about it, start wars
         | with each other etc ?
         | 
         | I guess your just being cynical but really? LOL?
         | 
         | Even ChatGPT would probably tell you this isn't a smart way
         | forwards.
        
       | villgax wrote:
       | What a bunch of baloney
        
       | nemo44x wrote:
       | I, for one, welcome our new LLM AGI overlords.
        
       | phkahler wrote:
       | Wait wait! We need to catch up!
       | 
       | This is the real motive. Remember, some of the alarmists are
       | working toward the same goals.
        
       | vixen99 wrote:
       | Irrespective of the argument, it's too late. The cat's out of the
       | bag or whatever metaphor you prefer.
        
       | aabajian wrote:
       | There are Reddit threads showing the jailbreaking of ChatGPT to
       | be more vulgar, and then there are threads showing ChatGPT
       | running its own generated code. Putting 2+2 together, couldn't a
       | jailbroken ChatGPT create and run malicious code?
        
       | goldenshale wrote:
       | How silly. Imagine telling Elon to pause development of all new
       | rockets until people become comfortable with Starlink? Are
       | language models really that scary?
       | 
       | It feels like it has become trendy for everyone to worry about
       | everything rather than to look to the future, be creative, and
       | explore all of the amazing uses and problems that we can start
       | tackling.
        
         | hackan wrote:
         | Elon has signed that letter :P
        
       | prof-dr-ir wrote:
       | If you replace "AI" with "self-driving" throughout the open
       | letter [0] then it still makes a surprising amount of sense (and
       | of course any the incoherent bits can be smoothed out by ChatGPT
       | if you prefer).
       | 
       | However I doubt Musk would have signed that version...
       | 
       | [0] https://futureoflife.org/open-letter/pause-giant-ai-
       | experime...
        
         | yreg wrote:
         | It makes zero sense to ask for pause on self driving training.
         | 
         | A pause on deploying self driving cars on public roads is
         | another matter, but there is no reason to ban the training.
        
         | dmarcos wrote:
         | Noticed Sam Altman was in the list yesterday too. Now removed.
         | Has any of the high profile signatories confirmed publicly?
        
           | yreg wrote:
           | The Verge says it wasn't real Sam Altman.
           | 
           | >[...] though new names should be treated with caution as
           | there are reports of names being added to the list as a joke
           | (e.g. OpenAI CEO Sam Altman, an individual who is partly
           | responsible for the current race dynamic in AI).
           | 
           | https://www.theverge.com/2023/3/29/23661374/elon-musk-ai-
           | res...
        
             | capableweb wrote:
             | Or, impossible I know, there are more than one person with
             | the name "Sam Altman". Would suck probably, no one believes
             | you are you, because another startup/tech-famous person
             | have the same name.
        
               | sacred_numbers wrote:
               | When I checked yesterday I believe the signature said
               | OpenAI CEO Sam Altman, so it was definitely a joke
               | signature, not a case of two people having the same name.
        
       | reso wrote:
       | To those saying "this is impossible in our society", there is a
       | long history of scientific fields mutually agreeing to pause
       | certain work until safety and regulations could be put in place.
       | The Asilomar Conference is an example. [1]
       | 
       | The idea that "you can't put the genie back in the bottle" is
       | very popular in tech culture, but many have suggested that simply
       | extending elements of copyright or tort law to AI would be
       | sufficient to majorly slow down current research directions by
       | creating liability. In the art world, the reason we don't see
       | music being used in major movies without the rights-holders
       | permission is because of liability, and this works very well.
       | 
       | I think finding consensus on an intelligent way forward is very
       | possible here.
       | 
       | [1]
       | https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombi...
        
         | vntok wrote:
         | > To those saying "this is impossible in our society", there is
         | a long history of scientific fields mutually agreeing to pause
         | certain work until safety and regulations could be put in
         | place. The Asilomar Conference is an example.
         | 
         | Did China, India, South Africa, Cuba or the USSR comply with
         | these Asilomar "put the genie back in the bottle" conclusions?
        
         | naillo wrote:
         | The idea that laws can't stop ML research is insane. We've
         | barely had open source progress on these things for 5 years due
         | to the "safety" propaganda. Imagine if they were actually
         | illegal to work on, we'd get nothing done in public and barely
         | anything done at all.
        
         | streptomycin wrote:
         | I had a professor who was involved in that kind of research at
         | the time of the Asilomar Conference. He said it was all very
         | good, everyone agreed to pause their research until people
         | established guidelines for safety. And then once the guidelines
         | were established and research was allowed to resume, everyone
         | immediately published all the research that they had never
         | stopped working on during the "pause".
        
           | thih9 wrote:
           | How long was the waiting period? Was not publishing the
           | research for that duration enough to improve the safety?
        
           | [deleted]
        
         | anileated wrote:
         | Copyright law should recognize that training models is already
         | a copyright violation. If we just declare "no using AI
         | generated work" unscrupulous people will simply pass it for
         | their own work.
        
         | lannisterstark wrote:
         | It's not about it being impossible. I guess the major issue
         | right now is that people are focusing on the wrong aspect of
         | it. They should be focusing on how most LLMs are currently only
         | controlled by bigger coroporations. Democratizing the LLMs and
         | Giving access to people would resolve the x-poly we have
         | currently.
         | 
         | This also solves the "safety and regulations" issue to an
         | extent - Do you really trust that Microsoft, Google, or govt-
         | entities will not have their own biases and will want to
         | regulate the citizens who want to use their products more,
         | sometimes in unethical ways (because we all know, corps and
         | govts never do anything bad)?
         | 
         | This "Open letter" reeks of "we are behind, let us play catch
         | up, you cant take all the money!" or, of baseless fearmongering
         | where LLMs and future assistants could be used to help
         | countless people with disabilities for example.
         | 
         | "Make the world a better place, but no, not like that, not by
         | democratizing it, but under OUR CONTROL" seems to be the motto
         | here.
        
           | ntr-- wrote:
           | > you cant take all the money!
           | 
           | A cursory reading of the list of signatories shows mostly
           | academics, so your summary of their motivations appears to be
           | incorrect.
           | 
           | > not like that, not by democratizing it, but under OUR
           | CONTROL
           | 
           | I also don't know how you came to that conclusion, from TFA:
           | 
           | > Such decisions must not be delegated to unelected tech
           | leaders
        
           | invig wrote:
           | The corporations pay for people to learn how to do this
           | stuff. Eventually, someone will leave and build the open-
           | source version (when it's feasible to do it / the price has
           | come down).
        
           | jamilton wrote:
           | What do you mean by "democratizing the LLMs"? There are
           | already open source LLMs. Making Claude, GPT-4, and whatever
           | else is being worked on open source wouldn't make them any
           | less capable of doing harm.
        
             | lannisterstark wrote:
             | I mean in the sense of getting capable models in the hands
             | of people, rather than only a few companies who will
             | arguably steer them towards their own goals. The open
             | source models we currently have are vastly inferior to
             | proprietary ones.
             | 
             | Fragmentation reduces any danger. It doesn't amplify it.
             | Imagine just chrome and Google pushing pwas at you.
        
         | twblalock wrote:
         | The kind of research discussed at Asilomar had a high barrier
         | to entry in terms of both scientific knowledge and access to
         | laboratories and technology to pull off. Only scientists could
         | really do it.
         | 
         | The GPTs are at the point where anyone with money can buy cloud
         | compute and train one. The methods for doing so are fairly well
         | known. It won't be long before a teenager can train a GPT on
         | commodity cloud VMs.
        
         | evrydayhustling wrote:
         | What about the unintended consequences of pausing research? I
         | know many people working in forestry, cancer, inequality, and
         | the arts producing amazing new work in their fields via AI.
         | What's the cost of delaying advacnces, or a new masterpiece?
         | 
         | And what is the committee that should determine we are ready to
         | proceed? Do we think our representative government is prepared
         | for that task? (The constitution they swear to does not.). Is a
         | conference of academics sufficient? I don't think there is one
         | that represents the majority of world expertise, and certainly
         | not world use cases. Perhaps the courts, but they are designed
         | to seek consistency after the fact, not to extrapolate.
         | 
         | A pause is intended to let alignment research and policy catch
         | up. But can it catch up without the context of developing use
         | cases? When research resumes, can alignment and policy keep the
         | lead, or will we stop and go as adoption overtakes the
         | scenarios we anticipated?
         | 
         | I think a more serious direction for energy and concern is
         | advancing the real safety research and policy decisions
         | evolving around AI today.
        
         | mustacheemperor wrote:
         | Not having lived through either moment in history, I have to
         | remark that the current discourse about regulating AI feels
         | like it rhymes more with the efforts to lock up encryption
         | technology with government regulation in the 90s than with the
         | research community's conference on recombinant DNA.
        
           | beepbooptheory wrote:
           | Well at least one difference there is all those cypherpunks,
           | rightly or wrongly, felt like having access to encryption
           | tech was synonymous with individual liberty, and in that
           | there fights were an opposition to the state controlling
           | them.
           | 
           | But in this case I don't see how we can divorce this AI stuff
           | from the huge companies that alone have the infrastructure
           | and capital to train and sustain them (at the moment). As
           | such, it doesn't seem to have anything to do with
           | individualism/libertarianism as such.
           | 
           | So, less punk, but way more technological determinism.
        
         | JamesBarney wrote:
         | How much do you think Asilomar slowed down DNA research? Do you
         | think without it we'd more more than 10 year ahead? I doubt it.
         | Optimistically we'd maybe have 6 months to a year of additional
         | progress in those fields without it.
         | 
         | And this would only slow it down in a handful of countries.
         | That's not going to keep China from trying to build AGI. If we
         | are in a world where building an AGI is possible and near, I
         | trust OpenAI far more than the CCP to both align it with it's
         | creators, and for that alignment to match up with my values.
        
           | nonbirithm wrote:
           | This has been repeated often, but even if it's true, I have
           | to wonder why it's treated as a given with no further
           | exploration. Is it because we as a species will inevitably
           | accept any technological progress once sometime after it's
           | been discovered, before the consequences can be suffered?
           | What will that imply for any other species intelligent enough
           | to get to where we are?
           | 
           | The kinds of theories I mull over tend to delve into the
           | realm of speculative fiction more often than not. The
           | comments section here feels like it could be replicated in
           | the opening chapters of such a novel. We might never know if
           | the speculation pans out before it's too late.
        
             | le-mark wrote:
             | This is all speculative, but not fiction at this point
             | clearly. Sci-fi authors have explored this possibility for
             | decades, maybe their ideas could be of some help? I
             | struggle to see how though; how to train Asimovs three laws
             | for example?
        
               | staunton wrote:
               | The very point of Asimovs laws is that you can't just
               | make up a couple simple laws and rest assured nothing bad
               | will happen
        
           | ChatGTP wrote:
           | It's dangerous parrot talking to keep saying "China". I mean
           | we really have no idea about their thoughts on it because
           | we're clearly not talking to them.
           | 
           | China has good people living there too you know ?
        
             | int_19h wrote:
             | You can substitute pretty much any country with sufficient
             | resources. From China's perspective, there's the exact same
             | argument in reverse wrt the USA.
             | 
             | (And, of course, it would be more appropriate to talk about
             | governments here rather than countries, but the two have
             | always been conflated colloquially.)
        
             | swader999 wrote:
             | China has amazing peeps. CCP less so.
        
             | JamesBarney wrote:
             | I'm sure China is filled with wonderful people, I've met
             | plenty of them. But are you saying you think the CCP is
             | always so kind and altruistic they would never research an
             | AGI?
        
             | knodi123 wrote:
             | > China has good people living there too you know ?
             | 
             | Are they in charge? If not, are they at least able to elect
             | good people into office?
        
         | zamalek wrote:
         | The problem is that anyone with sufficient money could rent out
         | the infrastructure required to train GPT5 or what-have-you.
         | They need not share it publicly, they can use it for their
         | nefarious purposes privately.
         | 
         | The analogous scenario would be preventing your military from
         | experimenting with TNT, when the recipe for TNT is public
         | knowledge. You can't copyright away public knowledge.
        
           | politician wrote:
           | You already need authorization to rent A100s from cloud
           | providers. There are, at most, four entities to control post-
           | CHIPS act: Amazon, Microsoft, Google, and NVIDIA.
           | 
           | Good news, they're all American companies.
        
             | zamalek wrote:
             | A100s aren't magical pixie dust. China aren't developing
             | their own silicon[1] for funsies.
             | 
             | This is all mathematics. Ignoring the fact that the
             | entirety of what is required to build GPT-4 is easily
             | remembered by a single human being (i.e. adversary), it
             | could also be rediscovered. America doesn't have some
             | secret sauce that nobody else has. We have a time lead,
             | _and that is it._
             | 
             | Keeping the details secret and under control is like trying
             | to keep anyone from discovering that the sky is blue.
             | 
             | A 6 months freeze on AI research in America? China would
             | relish it. Hell, who's to say this isn't their idea?
             | 
             | [1]: https://www.tomshardware.com/news/moore-threads-
             | unveils-chun...
        
               | antonvs wrote:
               | It reminds me of the crypto restrictions in the 90s. That
               | just led to a few people outside the US becoming wealthy
               | by selling tech unfettered by US crypto restrictions,
               | like Mark Shuttleworth, now of Canonical/Ubuntu fame.
        
             | wincy wrote:
             | You can buy an A100 on eBay right now. If I had a few
             | hundred grand I could have an A100 x8 box delivered to my
             | house tomorrow.
        
               | luxcem wrote:
               | You could probably run a model but not train a new one,
               | especially one bigger than GPT-4. You'd need ten of
               | thousand of A100 as a rough estimate.
        
               | bob1029 wrote:
               | I suspect some of us are going to be really disappointed
               | when the actual parameter count for GPT-4 is discovered.
               | 
               | Has anyone considered the opposite possibility around the
               | closed nature of OpenAI's work? I.e. Instead of worrying
               | that someone will reproduce their 10k+ GPU GPT-4 training
               | super farm, maybe the opposite happened - They discovered
               | a way to increase performance while simultaneously
               | _reducing_ the required parameter counts and /or
               | hardware.
               | 
               | Would that realization be more or less dangerous with
               | this A100 constraint in mind?
        
               | [deleted]
        
           | jongjong wrote:
           | Good point. As always, it's always the good guys who follow
           | the rules and the bad guys who don't. This pause would just
           | allow the bad guys to catch up.
           | 
           | The solution is to allow as many different entities as
           | possible to develop their own AIs so that they end up
           | competing with each other and providing alternatives. What we
           | don't want is a single AI which dominates our entire belief
           | system.
        
             | endtime wrote:
             | Ah yes the old "give everyone in the world their own
             | nuclear weapon" solution.
        
               | IanCal wrote:
               | Restricting access to nuclear weapons is feasible,
               | because of how hard they are to make on a larger scale,
               | and even then it's very difficult. They are significant
               | projects for entire nation states to undertake.
               | 
               | Training a LLM is something random companies with some
               | cash and servers are doing.
               | 
               | So the outcome of "pretty much only we have nukes" is an
               | option. I don't think that's a realistic proposal for AI.
               | _Given_ that, is the outcome of  "more trustworthy people
               | stop, others continue" a good one?
        
               | detrites wrote:
               | This is a really strained analogy. Nuclear bombs only
               | have one - tremendously negative thing - they do, other
               | than the one positive of the fact it's so negative nobody
               | uses them, to avoid the same repercussions.
               | 
               | AI on the other hand has a wide range of extremely
               | positive applications, some of which have such tremendous
               | life-saving potential it's almost ridiculous. Many, or
               | most of which, likely might never be achieved without AI.
               | 
               | The analogy is as ridiculous to me as calling personal
               | computers in the 80's nuclear bombs because they could be
               | used to hack and shut down the power grid. Yes, they
               | _could_. And I 'm sure some were scared of the prospect
               | then, too.
        
               | sensanaty wrote:
               | What "extremely positive applications" does ChatGPT have,
               | exactly? From where I'm standing all I see is an
               | infinitely deep and vast ocean of pure spam, scams and
               | data harvesting on a never-before seen scale where
               | megacorps like Micro$oft hoover up any and all data that
               | they can, meaning we do all the hard work while M$ and
               | similar corpos get to sell our own work back to us at a
               | premium.
        
               | detrites wrote:
               | ChatGPT isn't the be-all of AI advancements. However, a
               | model that can coherently understand and explain advanced
               | topics to people in a tailored way, has huge educational
               | benefits. Proper education is the core of every issue we
               | face.
               | 
               | A subtler aspect of this is the potential for cheap,
               | tailored counselling. A few iterations and it will no
               | doubt be possible to enhance the mental well-being of
               | those without prior access, for comparatively little
               | cost.
               | 
               | Those benefits again extend into every area, crime and
               | poverty being rooted both in lack of education and lack
               | of social and emotional support.
               | 
               | The social acceptance of chatting with an AI is important
               | in this as it gets people over that mental hurdle.
               | Localising chats so they are not subject to privacy
               | concerns also advances benefits such as these.
               | 
               | There's positive benefits to be found everywhere with AI,
               | but they won't be if we don't envisage, look for, and
               | develop them. And they need to be found for balance, as
               | it remains true there are many potential negatives.
        
               | kajaktum wrote:
               | This is a very naive take. Out best psychologists aren't
               | using their expertise to solve mental health but to hack
               | them. What makes you think people will use LLMs for good?
               | It's far more profitable to do bad with it.
        
               | detrites wrote:
               | Our best psychologists cost significant money, and there
               | are a limited number of them (effect and cause). Whereas
               | no motive of profit is necessary to affect the changes
               | I've described, that's the entire point and benefit here.
               | 
               | Any single not-for-profit social health organisation, of
               | which there are a huge number, could use a tiny part of
               | their funding to fine-tune an LLM on counselling
               | resources and case transcripts. It'd cost little, and
               | only need to be done once.
               | 
               | The major hurdle here, again, is education. Once such
               | organisations realise how much more easily they can reach
               | their goals using AI, they'll jump on it. The final
               | obstacle is then social acceptance of AI assistance
               | (growing now).
        
               | sensanaty wrote:
               | Sorry, but all of this strikes me as a very naive take on
               | where AI is headed. The only reality I can see happening
               | is that it just gets used to peddle even more ads to
               | people while harvesting every single scrap of data
               | possible on everyone while replacing large swathes of the
               | population with cheap labor for the ruling classes.
               | 
               | This utopia you envision where we use AI for anything
               | remotely good sure would be nice, but with the way the
               | world and the people pushing for this AI especially work
               | there just isn't a chance in hell that's how it's gonna
               | end up going.
        
               | studentik wrote:
               | The only thing that can balance AI-man is AI-woman
        
               | idrios wrote:
               | ChatGPT is a better Google. Instead of finding a page
               | that matches your query, it can interpret the results of
               | multiple pages and output a response that's more tailored
               | to your exact prompt. The only downside to this is that
               | ChatGPT becomes your primary source instead of the
               | page(s) it sources content from, so you can't verify it's
               | authenticity.
               | 
               | But the "extremely positive applications" to ChatGPT are,
               | at the very least, the same positive applications of any
               | other search engine.
        
               | endtime wrote:
               | I think you are missing some nuance what people are
               | concerned about and why. This article spells it out
               | pretty clearly, I think: https://time.com/6266923/ai-
               | eliezer-yudkowsky-open-letter-no...
               | 
               | I agree that comparing nukes to superhuman AGI is an
               | imperfect analogy, because nukes don't have goals.
        
               | Zuiii wrote:
               | Seeing the geopolitical condition of those who have them
               | and those who don't, I can't say that idea is without
               | merit.
        
               | endtime wrote:
               | I meant every individual person, not every country.
        
               | RUN12 wrote:
               | What, exactly, do you think would happen if western
               | nations didn't have nuclear weapons?
        
               | endtime wrote:
               | What, exactly, do you think would happen if you reread my
               | comment a little more carefully?
        
               | rdiddly wrote:
               | There's not that much to it brah.
        
             | amelius wrote:
             | It's wishful thinking that the good uses of tech and the
             | bad uses of tech will always result in a net zero or good
             | outcome.
        
               | Zuiii wrote:
               | Sure but it wont be completely one-sided.
        
         | biscottigelato wrote:
         | What we need is more funding into AI Alignment research. An
         | X-Prize equivalent. All current big AI companies should fund
         | it, to the tune of 20%+ of revenue from AI businesses. And I am
         | okay to have government enforce this if needed even tho I'm
         | mostly a libertarian. Government should only be there to remedy
         | tragedy of the commons and nothing else. This is exactly the
         | biggest tragedy of the commons.
        
           | int_19h wrote:
           | How exactly do you envision the enforcement? Only sell GPUs
           | to people with a "certified ethical AI researcher" license?
           | Purge torch, transformers etc from PyPI and only provide
           | controlled access to them? Remove all the books and papers on
           | the building blocks from public libraries and online?
        
             | drdeca wrote:
             | I think you are talking about enforcing different things?
             | The thing to enforce, which the parent commenter described,
             | seemed to be a 20% tax on AI profits of "big AI companies",
             | to be paid to AI safety research groups.
             | 
             | That doesn't sound unenforceable to me. That's just like, a
             | tax on a kind of business.
        
               | int_19h wrote:
               | The end goal of giving money to AI safety research groups
               | is presumably to come up with some definition of "safe
               | AI" that will then have to be actually enforced.
               | Otherwise it's just a sinecure.
        
               | drdeca wrote:
               | If the safety groups figure out how to do things in a way
               | which we can be confident is safe, this would make it at
               | least possible for the capabilities researchers to do
               | things in those ways. I would imagine people would prefer
               | to do things in safe ways all else being equal. So, if
               | the safety researchers find safety methods which have
               | small enough capabilities costs, then presumably the
               | people who should use those methods, would tend to do so?
        
               | int_19h wrote:
               | That does nothing for the intentionally malicious actors.
        
             | DonHopkins wrote:
             | We could create an all-powerful world-encompassing AI
             | called AligNet that would monitor all human communication
             | and control space lasers and autonomous drones to destroy
             | the facilities and murder the families of anyone
             | researching AI who isn't a "certified ethical AI
             | researcher".
        
       | nuancebydefault wrote:
       | How striking is it that everybody has so much to say about the
       | dangers of AI.
       | 
       | What if we look at it this way: there are already real-known
       | dangers coming at us: climate change and a change of instability
       | of the world order.
       | 
       | What if we only have one shot to solve these known dangers and
       | what if an exponential growing corpus of smartness could teach us
       | how?
       | 
       | We need to grab that chance, the stakes are too high.
       | 
       | Pausing is not an option. If 99 per cent pauses, the 1 percent
       | that does not obey will have a tremendous advantage.
        
         | biscottigelato wrote:
         | Climate change and instability of world order doesn't end
         | humanity. They are barely considered an extinction event.
         | 
         | AI Alignment is consistently voted the top potential cause to
         | extinct the human race.
         | 
         | They are not even close.
         | 
         | https://riskfrontiers.com/insights/ranking-of-potential-caus...
        
           | nuancebydefault wrote:
           | AI is unstoppable, it is a race.
           | 
           | On the other hand, problems related to climate change are
           | solvable, but only using science and technology, there is no
           | way around that. Putting the heating a few degrees lower or
           | only use windmills and solar panels will only slow down, not
           | mitagate the risks, it is too little too late. We need more
           | sophisticated solutions. Do you see any other way than more
           | intelligence? I mean better models, faster accurate
           | simulations and prototypes.
           | 
           | On the world order front: machines might come to know us
           | better than we know ourselves and provide us with a solution.
           | 
           | AI could be arriving just-in-time.
        
         | kromem wrote:
         | EXACTLY!
         | 
         | The relative surface area AI represents in the existential
         | threats pile is notable but only a slice of the pie.
         | 
         | But there's not much in the pile for mitigation of all those
         | other human-driven threats other than pulling a deus ex machina
         | out of our rear ends.
         | 
         | So while we should definitely discuss the inherent risks, we
         | should also be discussing the risk of opportunity costs in
         | delaying greater scalable intelligence being added to the mix.
         | 
         | This letter seems a bit like myopic fear mongering by people
         | still more caught up in 70s thought experiments about
         | paperclips than the realities facing us in the present day.
        
       | fasterik wrote:
       | _> Powerful AI systems should be developed only once we are
       | confident that their effects will be positive and their risks
       | will be manageable_
       | 
       | This statement by itself sounds reasonable. But for me an
       | interesting thought experiment is to take this letter and imagine
       | the equivalent for some other technology, for example
       | semiconductors in the 1960s, the world wide web in the 1990s, or
       | social media in the late 2000s-early 2010s. It is always true
       | that new technologies have the potential to radically transform
       | society in ways that we can't predict. One could reasonably have
       | said "[semiconductors/the world wide web/social media] should be
       | developed only once we are confident that their effects will be
       | positive and their risks will be manageable". Does that mean that
       | a self-imposed ban on research and development with the threat of
       | a government moratorium would have been justified?
       | 
       | At this point the best case scenario is that society learns to
       | adapt and keep up with technological developments. Every new
       | technology increases our ability to both improve people's lives
       | and harm them in various ways. It's not a good long-term solution
       | to intervene and stop progress every time we gain new
       | capabilities.
        
         | le-mark wrote:
         | At the end of the day it's an empty platitude and vain hope
         | that work will pause or be considered carefully. Certainly
         | public entities can be made to pause, but nation states won't.
         | If there's an advantage to be had; the work will continue in
         | secret. Vernor Vinges "Bookworm, Run" had a take on this
         | situation.
        
           | ChatGTP wrote:
           | They're talking about pausing research and talking together
           | about the path forwards, not stopping research and letting
           | <whatever country your paranoid about> build terminator.
        
             | fasterik wrote:
             | We have to take unintended consequences into account. It's
             | unlikely that we will be able to get all corporations and
             | governments to agree to a pause and be able to enforce it.
             | The question then is what are the consequences of some
             | people pausing and not others? Does this decrease risk or
             | increase it?
        
             | [deleted]
        
         | m3kw9 wrote:
         | It just seems to me most of these guys that are signing feel
         | they don't get to be part of this revolution and if AGI
         | develops they will be pretty much be nothing. This is
         | equivalent of saying, don't leave us out..
        
       | eimrine wrote:
       | Please do not pause no researches. The technological singularity
       | will not make itself with itself. I'm too tired to worship to
       | another humans and pay my taxes to them. I believe that some kind
       | of free AI whose decisions will not be altered by greedy silly
       | humans is really needed and I will gladly pay my taxes to an AI
       | actor as soon as such government will start to form.
        
         | nicoco wrote:
         | What we call AI so far is trained on human generated data, and
         | there is no evidence that it could overcome any of our biases.
         | How about a brighter future by showing some empathy for our
         | fellow human beings?
        
       | jeppester wrote:
       | My main issue is that I think it's wildly unfair that giant corps
       | can pretty much put anything into their content laundering
       | machines without any scrutiny or attribution.
       | 
       | I don't think AI is fair use, and I don't think anyone who
       | released their work under "free for commercial use" licenses
       | could have imagined this type of usage.
       | 
       | So my suggestion would be to make it a requirement that training
       | data has, under terms decided by its authors, been allowed to be
       | used specifically as training data.
       | 
       | And it should be required to implement a way for individuals to
       | check if their data was used for training.
        
       | nwatn wrote:
       | This would only work if every country stopped together. China
       | will not stop, so this is not a sensible thing to do.
       | 
       | The US has the most to lose from not getting AGI first, because
       | it is the world superpower. Any major disruption to Pax Americana
       | would not be pretty.
        
       | 6gvONxR4sf7o wrote:
       | I'm really disappointed to see the HN reaction of straight up
       | scoffing at this like it's irreversible and so let's just step on
       | the gas. It's some weird combination of techno-utopianism and
       | nihilism and good old elitism.
        
         | kelseyfrog wrote:
         | AGI is a break-capitalism lottery ticket I'm willing to
         | entertain. If the marginal cost of labor collapses to zero then
         | either we'll get robot police or finally be free to pursue
         | lives of leisure.
        
           | tazjin wrote:
           | Take the time to read a very pessimistic take like (like [0])
           | and see if you reconsider.
           | 
           | [0]: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-
           | ruin-a...
        
             | kelseyfrog wrote:
             | I wish it was more convincing.
             | 
             | As it stands, it says more about the author's egoistic view
             | of humanity than of AI. Imagine for a second taking
             | Feuerbach's The Essence of Religion and applying it to
             | Descartes' rationality and Plato's nous. We'd get a
             | critique of the rational human's construction of his own
             | ideal - the essence of intellect.
             | 
             | AI threatens this ideal and ego-threats get sublimated into
             | existential threats by those unable to understand
             | themselves well enough to express themselves directly.
        
               | jimrandomh wrote:
               | Maybe it's wrong, and things will be fine. Maybe it's
               | right. But you can't psychoanalyze your way to truth.
               | Whether future AIs will destroy humanity or not is a fact
               | about future AIs, and the landscape of intelligent
               | systems, not a fact about Eliezer Yudkowsky.
        
             | anigbrowl wrote:
             | Pass. Yudlowsky is all explanation, but too important to
             | summarize his core thesis. This is always a giant red flag
             | for me. I am simply not going on a long journey with
             | someone that won't make the effort to sketch out the
             | destination in an abstract.
             | 
             | People write page after page about how it might kill us all
             | in a flash, without ever offering a good explanation of why
             | it would want to. My experience with people whose immediate
             | reaction to something they're scared of is to destroy it is
             | that they're panicky screechers who are an annoying
             | distraction to the person actually handling the situation,
             | whether that's wild animals, fire, or interpersonal
             | violence.
             | 
             | I'm not saying 'just let me handle it bro, I totally got
             | this.' There's a lot of potential risks, I don't think
             | anyone is qualified to say they can mitigate all those, or
             | even most of them. But I trust a machine intelligence -
             | even one that's vast, cool, and unsympathetic - far more
             | than the would-be Butlerians.
        
               | tazjin wrote:
               | > without ever offering a good explanation of why it
               | would want to
               | 
               | The point of much of the alignment debate is that people
               | like Yudlowsky are pointing out that it doesn't need to
               | _want_ to, it just needs to not _not want_ to enough.
               | 
               | You're hoping for an outcome ranging from "When The
               | Yoghurt Took Over" to "The Metamorphosis of Prime
               | Intellect", but many other people are expecting an
               | outcome more similar to gwern's "Clippy".
        
               | longitudinal93 wrote:
               | You should take the time to watch "The AI Dilemma"
               | 
               | https://vimeo.com/809258916/92b420d98a
        
               | anigbrowl wrote:
               | One, I'm not new to these issues. I've been interested in
               | AI for decades and thought plenty about the existential
               | and ethical implications, though since I'm not a public
               | figure I appreciate you have no way of knowing that. But
               | I am very up to speed on this topic, as much as one can
               | be without being directly involved in the
               | industry/academic research.
               | 
               | Two, I would generally not set aside an hour of my time
               | for a video without at least some indication of what it's
               | about. I'd rather spend that time reading than in the
               | 'hot' medium of video.
               | 
               | Three, I find this video deeply suspect. It purports to
               | document a 'private' gathering, yet it's clearly a well
               | produced event that was intended to be documented and
               | shared on video. People who _actually_ want to keep a
               | thing private are generally well able to keep it private.
               | So while the participants have a valid point of view with
               | many legitimate arguments, the facts of its existence
               | suggest to me that it was absolutely intended to become
               | public, and the  'private gathering!!' is essentially a
               | marketing hook.
        
               | tazjin wrote:
               | That was a weird talk to put random "China bad! into. I
               | guess they had to tailor it to a US-elite audience.
        
           | cmsonger wrote:
           | Except that it's going to be robot police. What in the
           | current world order suggests labor wins. Who controls LLMs?
           | Who has the compute to produce AGI?
        
             | bilbo0s wrote:
             | To be fair, there is always someone on the bottom. It just
             | historically hasn't been people we cared about. That really
             | says more about us than it says about AI. The time to
             | complain about these power asymmetries was _prior_ to them
             | affecting us. Now it just sounds like garden variety sour
             | grape hypocrisy.
             | 
             | Not saying there aren't good points. Only pointing out that
             | many, many people can all view the consequences of this
             | dynamic differently.
        
               | noobermin wrote:
               | The only people who view this favorably are people at the
               | top. People near the bottom perhaps feel disdain for
               | people in the middle but that is misdirected because
               | people at the top always were the ones in control. AI
               | hasn't changed that dynamic.
        
               | anigbrowl wrote:
               | By 'this' do you mean AI or the petition?
        
               | noobermin wrote:
               | AI.
               | 
               | I am replying to the parent comment, but I too get that
               | this petition is being signed onto by people at openAI
               | specifically because they already made it with GPT-4.
               | They definitely would not sign on to it if their
               | (arbitrary) threshold for what to stop doing AI research
               | with was a tool at a GPT-3 or GPT-3.5 level or higher or
               | it included GPT-4 in their threshold, and not merely
               | beyond it.
        
             | Teever wrote:
             | Why do you think the fact that only select view control
             | this technology today that it will always be like that?
        
           | versteegen wrote:
           | That is the best possible outcome, but I worry that we have a
           | single shot at it. If the massive wealth generated isn't
           | distributed evenly, what will ever level it out other than
           | the things that have in the past: war, disaster, revolution
           | and state collapse (see The Great Leveler).
        
         | anigbrowl wrote:
         | I'm very cognizant of the risks, but the biggest risk seems to
         | me that wealthy people hold it back to get an uncrossable
         | political-economic moat in order to permanently secure their
         | own position.
        
           | kvetching wrote:
           | exactly. Specifically China. They've said for years, they
           | want to be the global AI leader. They have more data than
           | anyone. Now that they know LLMs work as good as they do, they
           | are going all in.
           | 
           | They achieve AGI first, that's bad. Communist AGI is a
           | greater existential risk than us creating AGI. We must create
           | it first so we can use it to harden our systems.
        
         | versteegen wrote:
         | It's striking how on AGI, the common opinion here on HN is the
         | complete opposite to the one in the AI alignment field. There,
         | people are angry with OpenAI for being too open, here people
         | are angry with it not being open enough. For example, Eliezer
         | Yudkowsky:
         | 
         | > Ryan: OpenAI [...] It's kind of a completely open-source AI
         | project?
         | 
         | > Eliezer: That was the original suicidal vision, yes. But...
         | 
         | > Ryan: And now they're bent on commercializing the technology,
         | is that right?
         | 
         | > Eliezer: That's an improvement, but not enough of one,
         | because they're still generating lots of noise and hype and
         | directing more resources into the field, and storming ahead
         | with the safety that they have instead of the safety that they
         | need, and setting bad examples.
         | 
         | > [...] I would change the name to "ClosedAI". Because right
         | now, they're making it look like being "closed" is hypocrisy.
         | They're, like, being "closed" while keeping the name "OpenAI",
         | and that itself makes it looks like closure is like not this
         | thing that you do cooperatively so that humanity will not die,
         | but instead this sleazy profit-making thing that you do while
         | keeping the name "OpenAI".
         | 
         | https://www.lesswrong.com/posts/e4pYaNt89mottpkWZ/yudkowsky-...
        
         | CatWChainsaw wrote:
         | Eh, just typical HN superiority complex on show. Personally
         | when we don't slow down and we undermine civilization to a
         | shadow of its current self, I'm going to make sure my last web
         | post is here, and all it's going to read is "Told you. :) "
        
         | BobbyJo wrote:
         | I personally scoff at this kind of stuff because it's asking
         | for something no one has the power to give. It's like asking to
         | stop the development of nukes in the 40s and 50s, it's just not
         | gonna happen.
        
           | paxys wrote:
           | And during that period, and later during the cold war, the
           | decision to make (or stop making) nukes was in the hands of
           | maybe 5 people total. Today there are thousands of companies
           | and tens/hundreds of thousands of people who can legitimately
           | compete in the space. Best of luck trying to resolve a
           | prisoner's dilemma between all of them.
        
           | jimrandomh wrote:
           | Preventing the development of nukes entirely was obviously
           | not going to happen. But delaying the first detonations by a
           | few years, and moving the Partial Nuclear Test Ban treaty up
           | a few years, was quite achievable.
           | 
           | Whether delaying AI development a little matters depends on
           | whether you think the success of AI alignment, applied to
           | future superintelligence, is overdetermined to succeed,
           | overdetermined to fail, or close to borderline. Personally I
           | think it looks borderline, so I'm glad to see things like
           | this.
        
             | BobbyJo wrote:
             | I'm firmly in the camp that delaying it's development could
             | make a difference, I just don't see how that's possible.
             | These models are relatively simple and the equipment
             | necessary to develop them is public (and relatively cheap
             | if we're talking about corporate or national scales). At
             | least with nukes, there was a raw material bottleneck, but
             | there really isn't a limiting factor here that any "good
             | guys" could choke point. It's out there and it's going to
             | get worked on, and the only people the "good guys" can
             | limit are themselves.
        
           | G_z9 wrote:
           | It actually very easily could have happened and almost did
           | happen but the Russians decided to go back on their effort to
           | do it. It really did almost happen. People act like it's
           | hard. Stopping a huge asteroid is hard. There might not be
           | enough physical resources to do it. Stopping AI or nukes is
           | definitely easy.
        
             | BobbyJo wrote:
             | > Stopping AI or nukes is definitely easy.
             | 
             | Under what definition of easy? If it's easy to stop, then
             | why don't the people signing the letter just do it, rather
             | than trying to appeal to others to do it instead?
             | 
             | Aligning thousands of people (all of the people with the
             | knowledge and resources to move forward quickly), with no
             | dissenters, (as a single dissenter could move things
             | forward), to a common goal is not easy. It's effectively
             | impossible.
        
         | twblalock wrote:
         | It really is irreversible. The methods and compute needed to
         | build a GPT-type AI are widely available.
        
           | JeremyNT wrote:
           | But the people who would apply these things are humans, which
           | are capable of making their own decisions, informed and
           | restricted by societal norms and laws.
           | 
           | The idea that "x is technically possible, therefore it is
           | inevitable" - the favored line of the tech oligarchs who are
           | the very ones directly furthering x - is sadly cargo culted
           | by many rank-and-file technologists. If you were to apply the
           | same defective reasoning to, say, nuclear weapons, then we
           | would all be dead by now.
           | 
           | It is possible for humans to agree that something should not
           | be done, and prevent it from being done. This is especially
           | true of these LLMs, which require ingesting (and ignoring
           | copyright on, by the way) massive amounts of human work and
           | then spending massive amounts of computation to process.
           | 
           | That we _could_ choose to do something about this should not
           | be controversial, regardless of what the driving AI.
        
           | versteegen wrote:
           | Yes, but superintelligence hasn't been created yet. That's
           | what the petition is ultimately about, not chat assistants.
           | Transformers unmodified aren't going to scale up to that,
           | they're already nearly scaled maximally.
        
             | og_kalu wrote:
             | You don't know what transformers unmodified scale up to.
             | Nearly scaled maximally ? Based on what exactly ?
        
               | versteegen wrote:
               | The degree to which transformers scale, as measured by
               | loss (training objective) is known remarkably well! See
               | [1]. There's a formula in there for the minimum loss you
               | possibly achieve even with infinite compute and training
               | data, and it's hardly less than Chinchilla's loss. The
               | recent GPT-4 paper further reinforces that these scaling
               | laws are real, because they predicted with high accuracy
               | the loss the model would have based on data+compute used.
               | 
               | Admittedly, the link between improvement in loss and
               | improvement of capabilities may break down or be
               | misleading.
               | 
               | However, there's just not much training data on the
               | internet left unused. Maybe an order of magnitude. All
               | books ever published (in English?) are a smaller dataset
               | than the corpora already used for training. See [2]
               | (which includes an easy summarisation of much of [1]).
               | And the scaling laws show training data is already the
               | bottleneck rather than compute.
               | 
               | [1] DeepMind, 2022, Training Compute-Optimal Large
               | Language Models https://arxiv.org/abs/2203.15556
               | 
               | [2] Chinchilla's wild implications https://www.lesswrong.
               | com/posts/6Fpvch8RR29qLEWNH/chinchilla...
        
               | [deleted]
        
               | og_kalu wrote:
               | comparing Loss between different training runs and
               | hyperparameters isn't very accurate. Llama's loss metrics
               | don't really match chinchilla's for instance, i.e it went
               | below the minimum possible loss stated by chinchilla
               | 
               | More importantly, These models are extremely sensitive to
               | loss. 2.0 to 1.8 might not seem like much but it's a huge
               | gain in performance.
               | 
               | GPT-2 loss was 2.57. GPT-3 was 2
               | 
               | and there is plenty of training data left. perhaps not
               | easily accessible but it's there.
        
               | versteegen wrote:
               | True that a scaling law only applies to models within a
               | family, which allows some but not full choice of
               | hyperparamaters. And that most of the minimum loss is
               | just due to the unpredictability of language, so 2.0 vs
               | 1.8 bits should actually be thought of as (say) 0.3 vs
               | 0.1 bits plus an irrelevant 1.7 bits of randomness.
               | 
               | I hadn't actually looked at the LLaMA paper, that's an
               | interesting note. However AFAICT GPT3, LLaMA and
               | Chinchilla do not use the same tokenizer, so their losses
               | are not comparable. GPT2 and 3 use (the same) custom BPE
               | tokenizer. LLaMa uses SentencePiece but that generates a
               | vocabulary specific to the training data it's run on.
               | Chinchilla used "a slightly modified SentencePiece
               | tokenizer that does not apply NFKC normalisation. The
               | vocabulary is very similar- 94.15% of tokens are the same
               | as those used for training Gopher".
               | 
               | Even if there is a lot more text available, it doesn't
               | mean it's good training material. And the better free
               | sources are already used. E.g. LLaMa was trained on 64%
               | of GitHub that had a compatible license (and you're not
               | going to gather much more source code than that), all the
               | free book texts they could find, all of arXiv, all
               | English pages in CommonCrawl that classified as
               | "reference" quality, etc. arXiv, for example, isn't all
               | scientific papers ever, but it's a large fraction of
               | them. All private emails stored by a large email service
               | would probably be one of the biggest untapped valuable
               | sources.
        
               | Lockal wrote:
               | What does these numbers mean? For example, for Google
               | isn't loss == 0? But it does not make Google a
               | superintelligence.
        
         | andrewmutz wrote:
         | You can accuse everyone of *isms, but there are a lot of us who
         | have just never been persuaded by the _many_ articles written
         | to scare us about the effects of AI.
         | 
         | The concerns about all the jobs going away have been persistent
         | but the labor market continues to be extremely strong. As long
         | as entrepreneurship is creating new business opportunities and
         | new ways to turn labor into value, it's completely possible
         | that the effects of automation are drowned out by new
         | applications of labor. In fact, this is what we've seen
         | throughout the industrial revolution: constant anxiety about
         | automation that fails to materialize in to mass unemployment.
         | 
         | The concerns about polluting the information space are still
         | hypothetical as well. The current information space is a
         | complete garbage fire and it's not due to generative AI. Might
         | it get worse if bots start spamming people with GPT generated
         | text on social media? Yeah it might. But social media is
         | already terrible and we need to do something about it
         | regardless of that.
        
           | xg15 wrote:
           | > _But social media is already terrible and we need to do
           | something about it regardless of that._
           | 
           | So then instead of finding a solution to those issues, let's
           | instead focus all resources on a tech which will make the
           | issues _worse_...
        
           | creamyhorror wrote:
           | The key issue in automation transitions is the _transition of
           | affected individuals to other sources of income_.
           | 
           | In previous technological revolutions, affected workers were
           | hurt by their loss of income, and some no doubt fell into
           | poverty without ever recovering. Not everyone can be
           | retrained for new types of jobs immediately - (1) they may
           | not have the needed foundational knowledge or the cognitive
           | flexibility/ability, and (2) there might not be enough of the
           | new types of jobs emerging quickly enough for them. Not every
           | displaced miner can become a coder, or be competitive for
           | junior dev jobs.
           | 
           | (Why should the state provide for these workers? Well,
           | primarily for humaneness, and also social stability.)
           | 
           | The rewards of automation (cost savings as well as profits)
           | are reaped by (1) the capital owners of the automation
           | technology companies (and their higher-paid employees), as
           | well as by (2) the companies and consumers using the new
           | automation; therefore those owners and beneficiaries could be
           | asked to bear at least part of the costs of supporting,
           | retraining, and placing in jobs the workers they displaced.
           | In a nutshell: Redistribution during _structural
           | unemployment_ caused by technological transitions.
           | 
           | A humane policy would provide the above types of support for
           | workers displaced by automation. Ideally it would already be
           | handled by existing unemployment policy, but in many
           | countries such support is limited or minimal.
           | 
           | Corporate taxation might need some rethinking along the lines
           | of job-displacement effects of companies (a tricky question,
           | I admit - I've come across one or two proposals for assessing
           | the automation level of companies for taxation purposes). The
           | cross-border dynamics add further complexity, given that
           | automation will displace many jobs outsourced across borders.
           | 
           | Given that the current AI revolution looks like it will be
           | causing even larger and faster changes than previous
           | revolutions, such policies are imo needed as a _prerequisite_
           | (one of several) for allowing the development of powerful
           | job-displacing AI.
        
           | jimrandomh wrote:
           | There are two, mostly disjoint groups warning about AI. There
           | are the people worried about comparatively mundane effects
           | from comparatively mundane systems: job loss, spam,
           | disinformation, maybe an occasional unfair loan-application
           | rejection. These concerns don't have _nothing_ going for
           | them, but in all but the worst-case-scenario versions, these
           | just aren 't bad enough to make AI not be worth it.
           | 
           | Then there's the people looking ahead, foreseeing a future
           | where superintelligent AIs are more powerful than humanity,
           | and worried that most possible variations of those
           | superintelligences are incentivized to destroy us.
           | 
           | I think this open letter puts much too much emphasis on the
           | petty stuff, I think because they're trying to appeal to
           | people who are allergic to anything that requires
           | extrapolating more than a little bit into the future. But
           | buying more time for alignment research, before we tackle
           | superintelligence, does meaningfully improve humanity's odds
           | of survival, so I hope this happens anyways.
        
         | fwlr wrote:
         | "Sir Roger L'Estrange tell us a Story in his Collection of
         | Fables, of the Cock [Rooster] and the Horses. The Cock was
         | gotten to Roost in the Stable, among the Horses, and there
         | being no Racks, or other Conveniencies for him, it seems, he
         | was forc'd to roost upon the Ground; the Horses jostling about
         | for room, and putting the Cock in danger of his Life, he gives
         | them this grave Advice; _Pray Gentlefolks let us stand still,
         | for fear we should tread upon one another._ "
         | 
         | HN is scoffing because the timing of this petition is
         | suspicious. To borrow your metaphor of "step on the gas", this
         | is as if the race had already started, several competitors are
         | out ahead of the rest of the field, and _now_ someone is
         | yelling "step on the brakes!".
        
           | reasonableklout wrote:
           | Sam Altman has publicly talked about wanting more regulation.
           | Demis Hassabis stated that AI safety was one of the
           | _motivations_ for founding DeepMind. I don't think any of the
           | industry noise around safety and regulation is new, I think
           | the timing of the petition is relevant _because_ the race has
           | started in an especially messy way (with Llama's weights
           | leaking, Bing/Sydney threatening a journalist based on tweets
           | it found on the internet, various OpenAI security mishaps,
           | etc).
        
         | sp527 wrote:
         | Almost everyone here will be fine in a world in which only
         | certain non-renewable scarce resources have any value, combined
         | with impending massive currency deflation. Even a rank and file
         | tier 2/3 engineer will manage to live a princely life in that
         | new reality assuming (s)he saved up enough.
         | 
         | Hardly anyone in the tech community is capable of truly
         | empathizing with the vast majority of people, who are born with
         | none of the advantages that they had (genetic, parental,
         | cultural, geographic, etc).
        
           | davesque wrote:
           | > Hardly anyone in the tech community is capable of truly
           | empathizing with the vast majority of people
           | 
           | This is a gross exaggeration.
        
             | JohnFen wrote:
             | It is a gross exaggeration -- but it is also the image that
             | SV has been giving off for a long time now.
        
               | ChatGTP wrote:
               | These data used to train these models are tagged by
               | people getting paid hardly anything, so why wouldn't it?
               | 
               | The same people SV people then turn around and say that
               | these systems will help them get access to free medical
               | care?
               | 
               | lol
        
             | sp527 wrote:
             | Well that's not a counterargument, but you're also missing
             | the point completely, which is that you have to have a very
             | low capacity for empathy in order to push ahead towards AGI
             | when you know society is not prepared for this and that
             | it's going to induce considerable pain.
             | 
             | Americans (let alone people elsewhere in the world) are
             | already struggling. Recent reporting suggests a great many
             | have to work multiple jobs. Almost all of us work an absurd
             | number of hours per week. Many if not most can't afford
             | homes. Plenty are ending up on the streets. Healthcare can
             | literally bankrupt people. A vacation out of the country is
             | an impossible luxury for most. The majority of Americans
             | still don't send their children to college, usually because
             | of affordability.
             | 
             | And I haven't even touched on what life is like in most of
             | Africa or Asia.
             | 
             | This is the world we're bringing AI into. You have to be
             | something adjacent to a sociopath to be okay with that. So
             | long as our system is predicated on capitalism, AI may very
             | well induce more downstream suffering than anything else
             | humans have ever conceived.
        
               | twblalock wrote:
               | Things aren't really that bad for most Americans, but
               | even if they were, it doesn't follow that adding more
               | intelligence to the world would be a bad thing for them.
               | 
               | A lot of people in the lower income brackets do the kind
               | of work that an AI can't do. The people who should be
               | worried most are actually college graduates doing
               | clerical work, whose main work output is writing or
               | evaluating texts. Even those people will likely use AI as
               | a tool to enhance their productivity, because the AIs
               | still are not good enough to replace people for tricky
               | edge cases. The first companies that try to replace their
               | customer support workers with an AI are going to have a
               | bad time (and so are their customers!).
               | 
               | When almost everything can be automated, the problems
               | that remain are the really hard ones that can only be
               | solved by human experts.
               | 
               | A construction worker with a circular saw can cut boards
               | way faster than someone with a handsaw -- but the
               | introduction of circular saws didn't result in a bunch of
               | carpenters getting laid off. Instead it made them more
               | productive, and for people who get paid by the task
               | rather than by the hour that is a huge benefit. They
               | could build more and make more money, and a bunch of
               | other people benefitted from their increased output, like
               | homebuyers and property developers.
               | 
               | Similarly, as a software engineer I benefit from code
               | generation tooling already. If that gets smarter and
               | faster, I will be more productive, my team will be able
               | to build software faster, and instead of laying people
               | off I will expect to be given more work. Maybe our 4-year
               | roadmap will be achievable in 1 or 2 years with the same
               | size team.
        
               | sp527 wrote:
               | Productivity gains by and large do not translate into
               | real wage gains and an improved quality of life for
               | laborers. We have more than a century's worth of data
               | suggesting they usually do the opposite. Yet somehow this
               | fairytale that productivity gains are a boon for laborers
               | persists.
        
               | lonelyasacloud wrote:
               | > Similarly, as a software engineer I benefit from code
               | generation tooling already. If that gets smarter and
               | faster, I will be more productive, my team will be able
               | to build software faster, and instead of laying people
               | off I will expect to be given more work. Maybe our 4-year
               | roadmap will be achievable in 1 or 2 years with the same
               | size team.
               | 
               | Why so sure the end users aren't going to be feeding
               | their own requirements directly to a
               | Jenkins/Copilot/ChatGPT mashup running as a service in
               | the cloud?
        
               | pdonis wrote:
               | People aren't "strugging" because there is no work for
               | them to do. They are strugging because the powers that be
               | have jiggered our economic system in order to hamper,
               | instead of facilitate, free market cooperation,
               | specialization, and trade. Governments micromanage
               | everything. That is what needs to stop.
               | 
               | If "AI" ends up making governments think they can
               | continue to micromanage and get away with it, yes, that
               | will cause more suffering. But that's not the failure
               | mode any critics of "AI" are trumpeting about.
        
               | davesque wrote:
               | > Americans (let alone people elsewhere in the world) are
               | already struggling
               | 
               | I agree. And I agree with your overall sentiment about
               | the risks of pursuing AGI. I'm as cynical as anyone about
               | the likelihood that the average person will really be any
               | happier in a world with AGI (controlled by tech
               | billionaires no less).
               | 
               | That being said, to claim that hardly anyone in the tech
               | community is capable of empathizing with the average
               | person is a wild overstatement that brings nothing to the
               | discussion. Just adds to the noise.
        
               | sp527 wrote:
               | Late reply here but I wanted to point out that you
               | _still_ don't get it. True empathy in the tech community
               | would be e.g. having the courage to say that building
               | HLAI of the kind we're now approaching is guaranteed to
               | cause tremendous amounts of suffering for ordinary people
               | (who will not be able to respond elastically to so abrupt
               | a tectonic shift), and therefore the whole enterprise is
               | fundamentally evil.
               | 
               | Let's get real concrete about what's going to happen:
               | people will lose their jobs, then their homes, they'll
               | become destitute, they'll experience divorces, some will
               | commit suicide, they will suffer desperately in myriad
               | other ways due to economic disenfranchisement, kids will
               | be deprived of a comfortable upbringing, etc.
               | 
               | How many in the tech industry are genuinely discussing
               | the very real consequences of nonlinear degrees of
               | automation for the kinds of ordinary people they barely
               | interact with? How many are pretending that there isn't
               | something disgustingly immoral about having some of the
               | most affluent and economically insulated people devise
               | and inflict this reality upon countless millions?
               | 
               | I will maintain that this industry is morally bankrupt
               | and nearly entirely devoid of empathy. These are not the
               | people who should be in charge of our future.
        
           | gordian-mind wrote:
           | [flagged]
        
           | pdonis wrote:
           | _> a world in which only certain non-renewable scarce
           | resources have any value_
           | 
           | If you think this is the world "AI" is leading us toward, I
           | think you need to think again. There are a huge range of
           | skills that only humans can provide, and that will continue
           | to be true. GPT-3 can't fix your plumbing or your HVAC when
           | it breaks. It can't fix your car. It can't build houses. It
           | can't build roads. It can't build computers. And on and on
           | and on...
           | 
           | Nor are AI enthusiasts _trying_ to get  "AI" to do any of
           | these things.
           | 
           | Humans are real-world creatures with real-world needs that we
           | supply to each other by free markets, specialization, and
           | trade. The "tech community" might take all this for granted,
           | but that doesn't mean it just happens by magic. It happens
           | because real humans do real work. "AI" is not going to take
           | that over any time soon.
        
             | MacsHeadroom wrote:
             | >Nor are AI enthusiasts trying to get "AI" to do any of
             | these things.
             | 
             | We're working on getting AI to do all of those things, yes.
             | 
             | >GPT-3 can't fix your plumbing
             | 
             | GPT-3 might not. But GPT-4 fixed plumbing in a test done by
             | Microsoft Research before it was even publicly released!
             | It's in the "sparks of AGI" paper.
             | 
             | GPT-5 is almost done training now and other language models
             | are already embodied and moving about in the physical
             | space. Your objections are behind the times, and
             | increasingly so.
        
               | Cyphase wrote:
               | It did not "fix plumbing", it participated in diagnosing
               | a leak.
               | 
               | Near the end:
               | 
               | > If you don't feel comfortable doing this, you may need
               | to call a plumber.
               | 
               | ---
               | 
               | Source: https://arxiv.org/pdf/2303.12712.pdf
               | 
               | Sparks of Artificial General Intelligence: Early
               | experiments with GPT-4
               | 
               | [page 53]
               | 
               | ---
               | 
               | The computer is trying to solve a problem. It can tell
               | the human to do anything, one thing at a time, and one
               | line at a time.
               | 
               | PROBLEM: there is water dripping from the kitchen
               | ceiling, and you have to figure out why and fix it
               | ----<|endofprompt|>
               | 
               | Human: The kitchen ceiling is dripping water
               | 
               | Computer: Check to see if there is a bathroom or other
               | water source directly above the kitchen.
               | 
               | Human: Yes, there is a bathroom actually
               | 
               | Computer: Check the bathroom for any signs of water
               | leakage, such as puddles on the floor, water drip ping
               | from the ceiling, or water stains on the walls.
               | 
               | Human: There is a little water on the floor
               | 
               | Computer: Try to identify where the water is coming from.
               | Check the toilet, sink, and bathtub/shower for any leaks.
               | 
               | Human: It is next to the bathtub and the toilet
               | 
               | Computer: Check the seals around the bathtub and toilet
               | to see if they are worn or damaged. If they are, replace
               | them. If they are not, continue investigating other
               | potential sources of the leak.
               | 
               | Human: I can't see the seal around the toilet, there is a
               | porcelain thing around it. I don't see any damage in the
               | bathtub seal though
               | 
               | Computer: If you can't see the seal around the toilet,
               | you may need to remove the porcelain cover to inspect it.
               | If you don't feel comfortable doing this, you may need to
               | call a plumber.
               | 
               | Human: Ok, I removed it. But I can't tell if it's damaged
               | or not
               | 
               | Computer: If you can't tell, it may be best to replace
               | the seal anyway, just to be sure. If the leak persists
               | after replacing the seal, you will need to continue
               | investigating other potential sources of the leak.
        
             | sp527 wrote:
             | What do you think happens when AI compresses the economy
             | and leaves us almost exclusively with manual labor jobs,
             | most of which are already low-paying? And you may not be
             | aware of this having never done such a job, but physical
             | labor is often brutally taxing on the body, putting people
             | at risk of a wide array of potential health complications.
             | 
             | I've seen this abjectly atrocious argument about manual
             | labor made by a number of people on HN and it's astounding
             | how poorly thought through it is.
        
               | pdonis wrote:
               | _> when AI compresses the economy_
               | 
               | How? How will it do this?
               | 
               | Basically, people seem to assume that "AI" has some kind
               | of magical power to do whatever bad thing they can
               | imagine, and then they extrapolate from there. I don't
               | see it.
               | 
               |  _> physical labor_
               | 
               | If you think the jobs I described are purely "physical
               | labor", or that the physical labor involved is so highly
               | taxing that people can only do those jobs for a short
               | time before they wear out, you definitely need to think
               | again.
               | 
               | (You also definitely need to think again if you think
               | those jobs are low paying. Plenty of people make quite a
               | healthy living doing them.)
        
               | sp527 wrote:
               | > You also definitely need to think again if you think
               | those jobs are low paying. Plenty of people make quite a
               | healthy living doing them.
               | 
               | Will they still make a healthy living when there's an
               | influx of laborers fleeing more automated parts of the
               | economy?
               | 
               | GS just put out a report today estimating the impact of
               | AI: https://www.businessinsider.com/generative-ai-
               | chatpgt-300-mi....
               | 
               | I wonder sometimes if these accounts on HN making insane
               | arguments that generative AI somehow won't be
               | economically calamitous are bots. In fact, if I was at
               | OpenAI and the goal was to avert scrutiny long enough to
               | get to AGI, unleashing a torrent of AI shill bots might
               | be near the top of the agenda.
        
               | jakeinspace wrote:
               | Compressing the economy means putting some significant
               | percentage of white collar workers (let's say 30%) out of
               | a job, because their job can now be done by GPT-6 for 5
               | cents per day. Some of these people will become
               | destitute, while others who have the education or talent
               | will move to other as-yet unimpacted sectors. So the
               | labour supply for these jobs goes up, and salaries are
               | suppressed.
        
             | anigbrowl wrote:
             | Ai enthusiasts are certainly interested in that, I wish I
             | had resources to pursue it.
             | 
             | Specialization is useful up to a point. But I like my
             | general intelligence, and rather resent its devaluation
             | under your beloved 'free' markets.
        
         | Veen wrote:
         | I think the "good old elitism" is exactly what the alignment
         | folks are expressing. They think only they and large
         | corporations should have these tools, while we, the common
         | folk, should not.
        
           | ChatGTP wrote:
           | Hanlon's Razor...Max Tegmark has been saying for a long time
           | that his biggest concern would be black box models we don't
           | understand gaining more intelligence without a solid
           | understand about how they work ...this is long before
           | ChatGPT4 existed so I don't believe it has anything to do
           | with control and keeping people out of anything.
           | 
           | Even Sam Altman has said similar things recently that he
           | believes we're probably going to have to slow down and think
           | about what we're doing.
           | 
           | I think a more advanced civilization would slow down and
           | continue AI research on another planet (Mars?)...not here on
           | Earth.
        
       | low_tech_love wrote:
       | These people are either right or wrong; regardless, development
       | will not slow down. It's funny that OpenAI is starting to
       | advocate for that on top of their billion dollar pile of money...
        
         | treszkai wrote:
         | None of the signatories of this open letter is from OpenAI.
        
       | petilon wrote:
       | > _Should we automate away all the jobs, including the fulfilling
       | ones?_
       | 
       | You don't necessarily need jobs because you don't need money.
       | What you need is wealth. I am going to leave it to PG to explain
       | the difference [1]: Wealth is not money. Wealth is stuff we want:
       | food, clothes, houses, cars, gadgets, travel to interesting
       | places, and so on. You can have wealth without having money. If
       | you had a magic machine that could on command make you a car or
       | cook you dinner or do your laundry, or do anything else you
       | wanted, you wouldn't need money. Whereas if you were in the
       | middle of Antarctica, where there is nothing to buy, it wouldn't
       | matter how much money you had.
       | 
       | In the next 10 years, AI/robots will generate wealth at an
       | unprecedented scale. Food, clothing and shelter will be
       | plentiful. The industrial revolution didn't make human lives
       | miserable, it made it better. AI/robots will be the same. Because
       | of productivity gains humans will work very little, and yet live
       | comfortably.
       | 
       | As a practical example of how this works, look to Alaska. If you
       | lived in Alaska last year and had a pulse you received $3,284
       | just for living there. Where does the state get money for this?
       | Oil. Oil is not the only possible source of wealth. AI & robots
       | will generate wealth at unprecedented scale, and humans will reap
       | the benefits, the same way Alaskans reap the benefits of their
       | natural resources.
       | 
       | [1] http://www.paulgraham.com/wealth.html
        
         | thomasahle wrote:
         | > Because of productivity gains humans will work very little,
         | and yet live comfortably.
         | 
         | Wouldn't this imply humans should already be working very
         | little compared to pre industrial revolution? Instead it seems
         | we work more [1]. And with less fulfilling work. I agree
         | however that we have may have more stuff (or wealth).
         | 
         | So going by this projection, in the future we'll have even more
         | stuff, and work even more in even less fulfilling work. That
         | is, assuming the people who control the AI/robots are willing
         | to share the wealth generated.
         | 
         | [1]:
         | https://groups.csail.mit.edu/mac/users/rauch/worktime/hours_...
        
           | kazinator wrote:
           | > _That is, assuming the people who control the AI /robots
           | are willing to share the wealth generated._
           | 
           | The only way that can be is if you make the AI/robots
           | undocumented and extremely hard to operate, so that even if
           | someone steals your blueprints and builds an exact clone of
           | your robot manufacturing plant, or what have you, they have
           | no idea how to get them to do anything.
        
             | wseqyrku wrote:
             | Take this as a joke but Kubernetes is hard just to create
             | DevOps jobs.
             | 
             | People are crazy good at handling this sort of complexity.
        
           | gordian-mind wrote:
           | Well, pre-industrial societies usually enjoyed a very
           | efficient type of political system. Maybe replacing our
           | current one with some form of AI with let us enjoy these
           | blessed times again.
        
             | pdonis wrote:
             | _> pre-industrial societies usually enjoyed a very
             | efficient type of political system._
             | 
             | Really? What system is that?
        
               | gordian-mind wrote:
               | Aristocracy, of course!
        
               | bbor wrote:
               | Only at hacker news would I have to question my sarcasm
               | detector on this statement lol. But I've settled on: good
               | one
        
           | pdonis wrote:
           | _> Wouldn 't this imply humans should already be working very
           | little compared to pre industrial revolution?_
           | 
           | No, because human wants aren't fixed. When you give a human
           | everything they want _now_ , tomorrow they come back with a
           | whole new set of wants. And so on ad infinitum...
           | 
           | Of course, as an individual it's your choice whether, and for
           | how long, you stay on this merry-go-round. But there will
           | always be a significant number of humans who will.
        
         | hooande wrote:
         | > In the next 10 years, AI/robots will generate wealth at an
         | unprecedented scale.
         | 
         | Would love to see the reasoning behind this. Nothing I've seen
         | about LLMs makes me think they will produce any kind of bounty
         | for anyone.
        
           | lanstin wrote:
           | I do remember feeling the same thing would be true of
           | networked software in the early 90s. It is hard to tell to
           | what extent it has happened.
        
         | joegibbs wrote:
         | If we had that magic machine, then there would be something
         | else that money could buy that everyone would want, and spend
         | their whole lives competing over. Maybe it would be a house
         | with a specific view or an original of a painting or whatever.
         | 
         | I think most people want social status, and money is a good
         | proxy of that. Even in a hypothetical moneyless society there
         | would still be things to compete over. Due to the hedonic
         | treadmill, in an automated society where anyone can have any
         | good then those things will become worthless. We'll probably
         | still have poor people, and from their perspective they'll be
         | poor, but from ours they'll be very wealthy.
        
         | JW_00000 wrote:
         | > In the next 10 years, AI/robots will generate wealth at an
         | unprecedented scale. Food, clothing and shelter will be
         | plentiful.
         | 
         | GPT cannot create food, clothing, or shelter. The article talks
         | about "automating away all the jobs", but GPT does not automate
         | farming, making clothes, construction work, driving a car,
         | delivering packages... Anything that requires interfacing with
         | the real-world (i.e. outside of language), is not what GPT is
         | doing. E.g. I wouldn't say research in self-driving cars has
         | advanced now that LLMs exist. The magic machine you describe is
         | exactly what GPT can NOT do.
         | 
         | In my opinion, GPT is a _huge_ leap forward, but in one
         | particular niche (language-based tasks). It does not advance
         | the state-of-the-art in automation of physical systems, so it
         | cannot replace any job that has a physical part to it.
        
         | thatfrenchguy wrote:
         | Have you ever set foot in a farm, a clothing manufactory, or in
         | a construction site? Maybe mediocre programmers and program
         | managers will be obsolete but none of those three jobs are
         | getting more automated than they already are.
        
         | JohnFen wrote:
         | > The industrial revolution didn't make human lives miserable,
         | it made it better.
         | 
         | I think this is a point that is genuinely debatable. At least,
         | it's pretty easy to argue both sides of that proposition.
        
           | bbor wrote:
           | ... it is? Medicine, food, entertainment, rationalism? You'd
           | trade all of that for... what, agriculture work in a feudal
           | society?
           | 
           | I'd love if you could elaborate or link to a good argument on
           | this (besides the Unabomber...) I'm curious.
        
             | dumbaccount123 wrote:
             | https://jonathanhaidt.substack.com/p/international-mental-
             | il...
        
             | JohnFen wrote:
             | > You'd trade all of that for... what, agriculture work in
             | a feudal society?
             | 
             | Nope. But that's not the only other option.
        
           | zarzavat wrote:
           | And the comparison with the industrial revolution is a poor
           | analogy because the industrial revolution never proposed as a
           | goal to automate humanity in totality, merely to automate
           | human _drudgery_.
           | 
           | AI is an anti-industrial revolution, because it proposes to
           | start with the most fulfilling aspects of human existence:
           | writing, the arts, and to automate people engaged in those
           | jobs, and to only later come to automate human physical
           | labour. It's a regression.
        
             | mrguyorama wrote:
             | Also it will be fucking terrible at those things. In a
             | thousand years middle schoolers will learn about this era
             | of vacuous, meaningless, copycat art as a dark age.
        
             | bbor wrote:
             | If angels came from the heavens and offered to take care of
             | our needs, would you fight them?
             | 
             | I think it's a coincidence that it's threatening some kinds
             | of artists first - I don't see that distinction lasting
             | long with the advent of plugins, and art has a fundamental,
             | semi- necessary human quality that is not present in
             | knowledge work.
        
               | JohnFen wrote:
               | > If angels came from the heavens and offered to take
               | care of our needs, would you fight them?
               | 
               | That entirely depends on what price the angels would
               | extract for doing so.
        
         | pdonis wrote:
         | _> AI /robots will generate wealth at an unprecedented scale.
         | Food, clothing and shelter will be plentiful._
         | 
         | How is this going to work? "AI" can't make any of these things.
         | "AI" is _not_ the magic machine PG describes. It 's not even
         | close.
        
           | ChatGTP wrote:
           | Kind of like how clothing and food are produced by some of
           | the poorest people ever, mostly people think it's all due to
           | automation.
        
         | musicale wrote:
         | > AI & robots will generate wealth at unprecedented scale, and
         | humans will reap the benefits, the same way Alaskans reap the
         | benefits of their natural resources.
         | 
         | Vast wealth will certainly be delivered to owners of the robots
         | and AI systems.
         | 
         | What reason is there to believe that they would share the
         | wealth with ordinary citizens?
         | 
         | A typical alternative to the Alaska model is one where oil
         | companies extract all of the wealth and resources and citizens
         | get nothing.
        
           | hdudhdbsbs wrote:
           | We just need to vote in a new rule for a national dividend to
           | be paid out to everyone. Problem solved
        
           | [deleted]
        
           | petilon wrote:
           | A lot of people are already benefiting from ChatGPT. I know I
           | am, and I am only paying $20 a month for a digital assistant
           | that is expert in everything.
        
             | amluto wrote:
             | You may be getting ahead of yourself. You can spend $20 and
             | ask ChatGPT to make plentiful food and energy, but it won't
             | actually do so.
        
               | Teever wrote:
               | How long until you can spend $20 and ask ChatGPT to
               | design a machine and provide the schematics and assembly
               | instructions? How long until that machine can self
               | replicate? How long until that machine can generate other
               | self replicating machines, like bipedal humanoids?
               | 
               | How long until you can spend $20 and ask ChatGPT for the
               | schematics of a Von Nuemann probe?
        
               | amluto wrote:
               | With current tech? GPT appears to learn by studying a
               | large corpus of _words_ and learning how to (apparently
               | intelligently!) put them together. And it can "few /zero-
               | shot learn" to do new things in line that what it was
               | trained on. Don't get me wrong: this is amazing!
               | 
               | But humans have been manipulating language, apparently
               | intelligently, for tens of thousands of years, and
               | billions of us have spent the last 30 years or so making
               | a huge corpus of digitized words.
               | 
               | What we have _not_ done is to make a huge corpus of
               | digital _things_ that can be manipulated by a computer
               | program. We have books about machining, engineering, etc,
               | and we are still pretty bad about turning written
               | descriptions into working objects. (Read about
               | "technology transfer". For better or for worse, a lot of
               | manufacturing seems to need experience, not just
               | manuals.) Nicely drawn schematics don't necessarily work
               | at all, let alone replicate.
               | 
               | It seems quite likely that the robotic AI revolution will
               | happen, but I expect it to be a while.
        
               | allturtles wrote:
               | In broad strokes, I see roughly two ways things could go:
               | 
               | 1) Current AI tech is already nearing the top of the
               | S-curve. In this case it will do nothing to help humans
               | in the "real world", it will just replace much of the
               | human labor currently used to create/manipulate bits.
               | 
               | 2) Current AI tech is near the bottom of the S-curve. It
               | continues to ratchet up and its capabilities become
               | super-human, as you outline. In which case, how long
               | until the AI capable of creating self-replicating
               | machines realizes it doesn't need to listen to humans
               | anymore, or even keep them around?
        
               | lagrange77 wrote:
               | > In which case, how long until the AI capable of
               | creating self-replicating machines realizes it doesn't
               | need to listen to humans anymore, or even keep them
               | around?
               | 
               | Transformer models don't realize or need stuff.
        
               | andyfleming wrote:
               | Not independently, but if wrapped with a loop, given
               | memory, given internet access, and directives as
               | intrinsic motivations, it could, in theory, come to
               | conclusions and take actions to acquire resources aligned
               | with its motivations. If that outer loop does not have
               | rules (or rules that are effective and immutable), it
               | could become very powerful and potentially misaligned
               | with our interests.
        
               | lagrange77 wrote:
               | How would such a loop enable it to come to conclusions?
               | I'm genuinely curious. Does what you're saying have
               | something to do with reinforcement learning?
        
               | Teever wrote:
               | https://en.wikipedia.org/wiki/OODA_loop
        
               | lanstin wrote:
               | For at least one general intelligence, the human brain,
               | that is in the wrong order. Act first, decide later.
               | Unless by decide you mean act and then make up a
               | narrative using linguistic skill to explain the decision.
               | Even observe can directly lead to actions for certain hot
               | topics for:the person.
               | 
               | All we know for sure is that sensory data is generated,
               | the brain does what it does, and then we have acted. We
               | can't break that down too well once it leaves the visual
               | areas, but there is clear data that the linguistic form
               | of decisions and so on lag behind the neurological signs
               | of the action.
               | 
               | And humans have a well known tendency to make a decision
               | on a linguistic level that they then fail to carry out in
               | the realm of actions.
        
               | lagrange77 wrote:
               | Interesting.
        
             | HybridCurve wrote:
             | It is an expert in nothing. Try asking it for general
             | information and then ask it to produce citations for said
             | information. It's a language model which is good at
             | synthesizing certain information based on it's learned
             | probability distributions. It has superficial knowledge _at
             | best_ and all information it produces must be cross checked
             | for accuracy.
             | 
             | Yes, it can be incredibly useful but it literally has no
             | comprehension of anything.
        
               | MacsHeadroom wrote:
               | >Yes, it can be incredibly useful but it literally has no
               | comprehension of anything.
               | 
               | Someone else who obviously doesn't have GPT-4 or plugins
               | beta access.
               | 
               | It has comprehension alright.
        
               | Jensson wrote:
               | GPT-4 being behind a paywall is genius, that way only the
               | true believers will use it and once again discussions are
               | shut down and we only get to see cherry picked articles
               | about it, just like GPT-3 before ChatGPT came and was
               | free so everyone could see how bad it was.
        
               | mrguyorama wrote:
               | It's like the google researcher who claimed their model
               | was sentient because he heavily coached it to give the
               | output he wanted and ignored the output that didn't
               | confirm his belief.
               | 
               | In that case, we've had AGI since ELIZA
               | 
               | If this was the standard in other areas of science we
               | would still be using the miasma theory of medicine and
               | bloodletting women for their hysteria. AI "Research" is a
               | bunch of motivated people throwing stuff at the wall and
               | claiming something sticking means they are scientists.
               | Nevermind all of the groups are HEAVILY motivated to
               | overhype their models.
        
               | HybridCurve wrote:
               | People are also quick to aggrandize or anthropomorphize
               | these systems as well because they lack a fundamental
               | understanding of the underlying mathematics or mechanisms
               | which allow these models to function as well as they do.
               | There is an inherent danger in how much trust people are
               | willing to grant to something like a language(or image)
               | model simply because it is so adept at _synthesis_.
               | 
               | This is a great cause of concern to me, especially when
               | it is not particularly difficult to invalidate general
               | credibility of these models. In the near future, it will
               | not be as easy and I anticipate many(even the well
               | educated) will surrender their rationale to these systems
               | easily with little to no scrutiny.
               | 
               | The hype from these companies is contributing to this
               | problem and I consider it to be _highly unethical_.
        
               | HybridCurve wrote:
               | Could you please demonstrate your claim by querying the
               | GPT4 model to respond to my above comment and providing
               | the reply? Let's see what the 'expert' has to say about
               | it.
        
               | MacsHeadroom wrote:
               | I don't understand what you want me to ask it about. So I
               | asked it something, as usual every citation and URL is
               | real despite not having internet access or training data
               | after 2021:
               | 
               | <PROMPT>List some citations of recent Machine Learning
               | papers discussing reasoning ability in Machine Learning
               | Models such as HellaSwag.
               | 
               | [REPLY]" As of my knowledge cutoff in September 2021, I
               | can provide you with some citations for ML papers that
               | discuss reasoning abilities in models and datasets like
               | HellaSwag. However, you may want to search for more
               | recent papers using a platform like arXiv or Google
               | Scholar. Here are a few relevant papers:
               | 
               | 2019 HellaSwag: Can a Machine Really Finish Your
               | Sentence?
               | 
               | 2019 COMET: Commonsense Transformers for Automatic
               | Knowledge Graph Construction.
               | 
               | 2019 From 'F' to 'A' on the N.Y. Regents Science Exams:
               | An Overview of the Aristo Project.
               | 
               | 2020 A Primer in BERTology: What We Know About How BERT
               | Works.
               | 
               | To find more recent papers on this topic, I recommend
               | searching for keywords like "reasoning," "machine
               | learning," "natural language understanding," "commonsense
               | reasoning," and "HellaSwag" on academic paper databases
               | like arXiv, ACL Anthology, or Google Scholar."
               | 
               | <PROMPT>In 50 words or less, what do these papers tell us
               | about emergent capabilities of LLMs?
               | 
               | [REPLY]" These papers explore the capabilities of large
               | language models (LLMs) in understanding context,
               | generating coherent text, and performing commonsense
               | reasoning. They investigate the underlying mechanisms,
               | evaluate their performance on challenging benchmarks, and
               | propose methods to improve LLMs' ability to reason and
               | construct knowledge graphs from natural language input."
               | 
               | <PROMPT>Please list the URLS to the arxiv versions of the
               | papers you cited.
               | 
               | [REPLY]" I can provide you with the URLs to the arXiv
               | versions of the papers I cited earlier:
               | 
               | HellaSwag: Can a Machine Really Finish Your Sentence?
               | URL: https://arxiv.org/abs/1905.07830
               | 
               | COMET: Commonsense Transformers for Automatic Knowledge
               | Graph Construction URL: https://arxiv.org/abs/1906.05317
               | 
               | From 'F' to 'A' on the N.Y. Regents Science Exams: An
               | Overview of the Aristo Project URL:
               | https://arxiv.org/abs/1909.01958
               | 
               | A Primer in BERTology: What We Know About How BERT Works
               | URL: https://arxiv.org/abs/2002.12327
               | 
               | Please note that these papers were published before my
               | knowledge cutoff date, and more recent research might be
               | available. To find the latest research on the topic, I
               | recommend searching on arXiv or Google Scholar using
               | relevant keywords."
               | 
               | ================
               | 
               | Edit: There's more:
               | 
               | <PROMPT>A hacker news commenter thinks LLMs are incapable
               | of comprehension. Give an argument for why he is wrong.
               | 
               | [REPLY]" Massive LLMs demonstrate a strong ability to
               | "comprehend" text by capturing complex language patterns
               | and associations in their training data. While their
               | comprehension may not be identical to human-level
               | understanding, they can still generate contextually
               | relevant responses, answer questions, and perform tasks
               | that require reasoning. Their performance on various
               | benchmarks, including natural language understanding and
               | commonsense reasoning tasks, indicates a level of
               | comprehension that goes beyond simple pattern matching."
               | 
               | <PROMPT>What citations can back up you argument, omit
               | authors, list title and year only.
               | 
               | [REPLY]"
               | 
               | 1. "HellaSwag: Can a Machine Really Finish Your
               | Sentence?", 2019
               | 
               | 2. "COMET: Commonsense Transformers for Automatic
               | Knowledge Graph Construction", 2019
               | 
               | 3. "From 'F' to 'A' on the N.Y. Regents Science Exams: An
               | Overview of the Aristo Project", 2020
               | 
               | 4. "A Primer in BERTology: What We Know About How BERT
               | Works", 2020
               | 
               | 5. "Language Models are Few-Shot Learners", 2020"
               | 
               | <PROMPT>Tell me about Language Models are Few-Shot
               | Learners
               | 
               | [REPLY]" "Language Models are Few-Shot Learners" is a
               | paper published in 2020, which introduces GPT-3, the
               | third iteration of OpenAI's Generative Pre-trained
               | Transformer. GPT-3 is a state-of-the-art large-scale
               | language model consisting of 175 billion parameters,
               | significantly larger than its predecessor, GPT-2.
               | 
               | The paper primarily investigates the few-shot learning
               | capabilities of GPT-3. Few-shot learning refers to a
               | model's ability to generalize and adapt to new tasks with
               | limited examples or training data. The authors
               | demonstrate that GPT-3 can perform a wide range of tasks,
               | including translation, summarization, question-answering,
               | and even arithmetic, by simply conditioning the model on
               | a few examples provided in natural language.
               | 
               | This research highlights the potential of scaling up
               | language models to improve their performance across
               | multiple natural language understanding and generation
               | tasks. GPT-3's impressive capabilities showcase that
               | large language models can effectively learn from limited
               | examples and generalize their knowledge to perform a
               | diverse set of tasks without task-specific fine-tuning."
        
               | HybridCurve wrote:
               | Yes, If you ask something _specific_ it will generate
               | citations because there are might be strong positive
               | associations between that topic and research. This is a
               | good example of a proper query. But for instance, if you
               | ask it about  "citations for papers on recent research on
               | trees" for instance you get a bunch of nonsense.
               | 
               | In response to my earlier comment that it was "an expert
               | in nothing", this was the reply I received from:
               | 
               |  _ME: Can you respond to this statement: "<my earlier
               | comment>"_
               | 
               |  _ChatGPT: The statement that "ChatGPT is an expert in
               | nothing" is technically true, as I am a machine learning
               | model that has been trained on a vast corpus of data and
               | does not possess personal expertise or experience.
               | However, my purpose is to provide information to the best
               | of my ability based on my training data, which allows me
               | to offer a wide range of information on various topics.
               | 
               | It is true that the information produced by me or any
               | other AI language model should be cross-checked for
               | accuracy, as errors and inaccuracies may occur. It is
               | also true that I do not possess a conscious understanding
               | of the world in the way that humans do, and my responses
               | are generated through statistical analysis of patterns in
               | my training data. Therefore, while I can provide accurate
               | and useful information, my responses are not always
               | nuanced or contextualized in the way that a human expert
               | might be able to provide._
               | 
               | So either it is agreeing with me and my statement was
               | correct, or it's reply was incorrect and therefore my
               | statement was correct.
               | 
               | Am I wrong on this? Does whatever model you have access
               | to provide an alternate take?
               | 
               | Edit: For the above, you should start a new conversation
               | to clear the existing context. The citations appear to
               | have just been repeated.
        
               | MacsHeadroom wrote:
               | >So either it is agreeing with me and my statement was
               | correct, or it's reply was incorrect and therefore my
               | statement was correct. Am I wrong on this?
               | 
               | Yes you're wrong for admitting to having an unfalsifiable
               | belief about this. If your belief can't be falsified then
               | this discussion is pointless.
               | 
               | The model I have access to scores higher than most humans
               | at complex abstract reasoning, for what it's worth (which
               | is apparently nothing because regardless of any outcomes
               | you will declare yourself correct).
        
               | ldhough wrote:
               | Fwiw they are right that it produces bogus citations
               | depending on what you ask. I did their tree example with
               | GPT4 and got the following:
               | 
               | ------ PROMPT --------
               | 
               | Can you give me citations for recent research on trees?
               | 
               | ----------------------
               | 
               | ====== RESPONSE ======
               | 
               | Certainly! Here are some recent research articles on
               | various aspects of trees, including their ecology,
               | physiology, and response to climate change. Please note
               | that these citations were published after my knowledge
               | cutoff date in September 2021, so I cannot vouch for
               | their accuracy or the impact of the research. However,
               | they should give you a good starting point for your
               | research:                   Anderson, T. E., Gonzalez, M.
               | A., & Smith, K. L. (2022). Tree species diversity and its
               | impact on forest ecosystem services. Journal of Ecology
               | and the Environment, 33(2), 147-158.              Baker,
               | P. J., Oliver, R. J., & Dearing, L. E. (2023). Carbon
               | sequestration potential of urban trees under climate
               | change scenarios. Urban Forestry & Urban Greening, 68,
               | 102139.              Chen, X., Xu, L., & Zhu, H. (2022).
               | Tree-ring-based reconstruction of drought variability and
               | its potential drivers in subtropical China. Science of
               | The Total Environment, 796, 148896.
               | Ferreira, J. T., Cardoso, S. M., & Sousa, P. M. (2023).
               | The role of tree genetics in adaptation and resilience to
               | climate change: A review. Tree Physiology, 43(1), 1-15.
               | Grant, N. C., Matthews, T. J., & Wilson, R. J. (2022).
               | Assessing the vulnerability of tree species to climate
               | change: A global meta-analysis. Global Ecology and
               | Biogeography, 31(4), 623-637.              Leonard, S.
               | D., Smith, D. W., & Thomas, E. M. (2023). The impact of
               | tree age on leaf photosynthesis and water-use efficiency
               | in deciduous forests. Tree Physiology, 43(2), 243-255.
               | Nguyen, H. T., Sato, T., & Yoshida, T. (2022). Exploring
               | the effects of tree diversity on forest productivity and
               | stability: A global meta-analysis. Forest Ecology and
               | Management, 509, 119102.              Patel, S.,
               | Sutherland, W. J., & Diaz, A. (2023). How do trees
               | respond to extreme weather events? A global synthesis.
               | Global Change Biology, 29(1), 219-234.
               | 
               | Please make sure to access these articles through proper
               | channels, such as a university library or research
               | institution, to ensure you are reading the most up-to-
               | date and accurate versions.
               | 
               | ================
               | 
               | I didn't check all of them but the ones I did are
               | seemingly completely fabricated, which makes sense
               | because for some reason it tried to produce research from
               | _after 2021_ despite seemingly knowing and mentioning the
               | date of its knowledge cutoff.
        
               | mrguyorama wrote:
               | Yes and computers are perfect at math but that doesn't
               | make them better thinkers than humans.
               | 
               | >scores higher than most humans at complex abstract
               | reasoning
               | 
               | This statement is meaningless without a whole lot more
               | context on your testing methodology
        
             | janalsncm wrote:
             | That's very nice until your company also realizes that for
             | $20 a month they get a computer program that's an "expert
             | at everything" including your job.
        
             | paulryanrogers wrote:
             | An expert that bullshits part of the time. You get to
             | figure out when it's the expert and when it's the
             | bullshitter.
        
               | keymasta wrote:
               | In JRR Tolkien's terms the pair are called "slinker and
               | stinker", but in that case they are both kind of evil,
               | whereas I don't see GPT's as anything more than
               | correlation of large sets of integers. A correlation that
               | yields incredible utility (for me), when it isn't
               | stinking.
        
               | mrguyorama wrote:
               | It also will never be possible for a transformer model
               | trained on words and optimized to put valid sentences
               | together to even "know" WHEN it is "right" or "wrong". A
               | lie or falsehood is just as valid a sentence as a fact.
               | 
               | When and with what subjects it makes shit up is also
               | heavily dependent on training data, and the result is
               | straight up a black box. What good is a fact generator
               | that can't be trusted?
        
               | nicksrose7224 wrote:
               | I see this argument all the time. Why are you assuming
               | that this technology just "stops" at the LLM level?
               | 
               | If I'm openAI or Google or whatever, I'm definitely going
               | to run extra classifiers on top of the output of the LLM
               | to determine & improve accuracy of results.
               | 
               | You can layer on all kinds of interesting models to make
               | a thing that's generally useful & also truthful.
        
               | lanstin wrote:
               | Your last word is a bit of a weasel word. There is no
               | currently known way to get truth except to try something
               | out and see what happens. Even the marvelous GPS routing
               | takes feedback from people driving down the routes and
               | succeeding or failing. Add more layers but without some
               | equivalent of arms and legs, it won't be able to be sure
               | about truth.
               | 
               | The nice thing about the easy to bamboozle GPT4 is that
               | it can't hurt anything, so its flaws are safe. Giving it
               | these arms and legs is where the risks increase, even as
               | the reward increases.
        
               | jameshart wrote:
               | What even is a 'fact generator'? And why do you think an
               | LLM is meant to function as one?
               | 
               | If you ask Wolfram Alpha - something which I think is
               | actually meant to be a fact generator - "Which is the
               | heaviest Pokemon?" it will happily tell you that it is
               | Celesteela, and it weighs 2204.4lbs.
               | 
               | Is that a 'fact'?
               | 
               | It certainly 'true', for some definition of the word
               | true. The game Pokemon exists, and in it Pokemon have a
               | weight. Of all the official Pokemon, that one is the
               | heaviest. Wolfram Alpha has given you an accurate answer
               | to your question.
               | 
               | But it's also _completely made up_. There 's no such
               | thing as a Pokemon, and they do not actually have
               | weights.
               | 
               | So sure, transformer models can't be relied upon to
               | generate facts. But so what? There's a lot more to the
               | world than mere facts.
        
               | yreg wrote:
               | >you get to figure out when...
               | 
               | A usefull skill to train. Increasingly so...
        
           | anigbrowl wrote:
           | I feel optimistic about talking the AIs round. And I don't
           | mean with cutesy little jailbreaks, but persuasion.
        
         | nice_byte wrote:
         | > In the next 10 years, AI/robots will generate wealth at an
         | unprecedented scale. Food, clothing and shelter will be
         | plentiful.
         | 
         | Anyone who believes in the possibility of post-scarcity society
         | must be either naive or trolling. Something cannot be made from
         | nothing, therefore scarcity cannot be overcome, even assuming
         | that all planning and execution is performed by superhuman
         | synthetic minds.
         | 
         | Assuming that it's theoretically possible to utilize existing
         | resources in a very efficient manner (e.g. motor running on a
         | grain of rice for a century) and we just need AI to help us
         | figure it out, is a _gigantic_ leap of faith and i would not
         | bet a cent on that.
         | 
         | Let me paint a more realistic possibility for you (with a
         | broader time horizon): most of the value created by automating
         | knowledge work will be captured by private capital, and middle
         | class will all but disappear. Education beyond basic reading
         | and writing will become unattainable (and, frankly,
         | unnecessary), most population will be reduced to the state of
         | semi-literate serfdom dependent on the newly minted lords for
         | survival. The lords wouldn't have to worry about feeding their
         | subjects for too long though, as mass death brought about by
         | climate change will take care of that problem.
         | 
         | Under that scenario, there will be no new enlightenment age to
         | come and save us. The only reason we get to enjoy whatever
         | freedoms we have today is that a (semi-)intellectual population
         | is absolutely necessary to keep the complex modern economy
         | running. Even then, those above you will do absolutely
         | everything to limit your agency - by withholding information,
         | lying, or just outright taking freedoms away. Do you know what
         | happens once our participation in propping up the economic
         | machine becomes unnecessary? Demotion to the default state of a
         | human throughout history - a groveling, suffering serf who has
         | no idea what's going on.
         | 
         | "If you want a picture of the future, imagine a boot stamping
         | on a human face - for ever."
        
           | qgin wrote:
           | Sounds like fuel for a revolution. I know some governments
           | will go to war with their own people, but all of them?
           | 
           | There will be some countries that go the way you've described
           | and crush any discontent with force. But some will try to
           | navigate an outcome that socializes the benefits of
           | automation.
        
             | nice_byte wrote:
             | you're assuming the transition would be abrupt enough to
             | trigger a revolution. most likely that wouldn't be the case
             | - if ruling class plays their cards right, they'll boil the
             | frog, and all revolutionary potential will be defused.
        
           | JW_00000 wrote:
           | > Education beyond basic reading and writing will become
           | unattainable (and, frankly, unnecessary)
           | 
           | Is this possible in democracies with free (i.e. state-
           | provided) education? Why would a majority vote to abolish
           | education?
        
             | nice_byte wrote:
             | i didn't say it would get abolished, more like wither away.
             | here's a possible path to it:
             | 
             | 1) school program getting progressively dumbed down; 2)
             | fewer and fewer people choosing to continue education after
             | school.
             | 
             | continuing education is mostly necessary to become
             | specialized and perform knowledge work, it would be a waste
             | of time to do so with emergence of a.i. agents capable of
             | performing these functions even at a level _comparable_ to
             | humans (let alone exceeding).
             | 
             | repeat this for a few decades and you have a new dark age.
        
       | thomasahle wrote:
       | People are talking like it's impossible to do slow things down.
       | 
       | In fact, simple measures like forbidding deployments of models
       | with more than a certain number of parameters would probably slow
       | things down a lot. Sure, people and companies could still
       | continue development in secret, but without the funding and
       | experience from products like ChatGPT, it would be harder.
       | 
       | People also say China wouldn't follow, but generally China has a
       | lot more control of it's tech sector than the US. Just look at
       | Jack Ma and other CEOs that didn't follow the party line.
        
         | lannisterstark wrote:
         | Ah yes, the alternative is to be totalitarian. Exactly what I
         | expected lol.
         | 
         | When in doubt, call for regulations.
        
           | quonn wrote:
           | Limiting a parameter count does not appear to be
           | totalitarian.
           | 
           | Not more than limiting certain molecules, for example, such
           | as many psychedelics.
        
         | yreg wrote:
         | >In fact, simple measures like forbidding deployments of models
         | with more than a certain number of parameters would probably
         | slow things down a lot.
         | 
         | Are we even sure GPT-4 has dramatically more params than GPT-3?
         | They have stopped revealing the parameter counts with 3.5.
        
       | fwlr wrote:
       | Some of those signatories are indeed interesting.
       | 
       | Elon Musk, Steve Wozniak.
       | 
       | Some big companies are there: the CEOs/co-founders of Skype,
       | Pinterest, Ripple.
       | 
       | Some AI companies are there too. CEOs/founders/head researchers
       | from Stability, DeepMind, Metaculus. Some Microsoft employees,
       | but no big repsentatives.
       | 
       | Academic AI research is there, including Stuart Russell, John
       | Hopfield and Max Tegmark.
       | 
       | Some major names from the AGI Risk field too; Steve Omohundro,
       | Roman Yampolskiy, Alyssa Vance, Nate Soares.
       | 
       | I don't see Bostrom, Kurzweil, Yudkowsky, Norvig - at least not
       | yet. No one from OpenAI, unsurprisingly.
       | 
       | Interesting. Wonder what sparked this.
        
         | knrdev wrote:
         | > Wonder what sparked this.
         | 
         | Being left behind. Losers want to catch up.
        
           | reasonableklout wrote:
           | You think Yoshua Bengio, Turing laureate for deep learning
           | and one of the masterminds behind the last decade of AI
           | advancement, is a "loser who wants to catch up"?
        
             | tucnak wrote:
             | Yes
        
               | ducktective wrote:
               | why?
        
           | bilbo0s wrote:
           | Kind of sounds like a "time out" to me too.
           | 
           | It would hit harder if the signatories did not have so much
           | to gain by making the leaders slow down a bit so they can
           | catch up.
        
           | ChatGTP wrote:
           | Why would Steve Wozniack be worried about catching up?
           | 
           | The guy is a retired billionaire ?
        
             | wktra wrote:
             | No need to call him retarded.
        
               | ChatGTP wrote:
               | I didn't?
        
       | Mystery-Machine wrote:
       | As with anything else, the best way to look at things when you're
       | unsure if someone is right or they are lying: imagine four
       | scenarios:
       | 
       | 1. It's true and you believed/supported them
       | 
       | 2. It's true and you didn't support them
       | 
       | 3. It's a lie and you believed/supported them
       | 
       | 4. It's a lie and you didn't support them
       | 
       | Now for each of these think of the worst-case scenario. Find out
       | which one is the _worst case_ scenario and...and this is really
       | important...do not, by any means, support that scenario/option.
       | 
       | In this example: I think we can all say that the worst scenario,
       | however improbable it seems (it's still probable), is that AI
       | takes over the world and kills us all. So just like with the
       | climate change, let's do the bare minimum to prevent that. If
       | we're wrong and, as many commenters wrote here, we still block
       | further development of AI, the worst that will happen is that
       | some companies will catch up. Wow. Really? To me this is a no
       | brainer. On one hand we're risking humanity and on the other hand
       | someone makes some money or saves their company from being late
       | to the AI race. It's like comparing if you'd rather eat literal
       | poo or have someone else eat two apples.
        
         | int_19h wrote:
         | There's no unitary "we" to block further development of AI.
         | There are way too many actors with too many strong incentives
         | to develop it.
         | 
         | Thus in practice you also have to consider scenarios such as
         | "it's a lie and you supported the block, but other actors that
         | are adversarial to you ignored it, and now they hold the upper
         | hand". Given that the current state of world affairs can be
         | described as "dangerously close to WW3", this can also be a
         | very literal life-or-death matter.
        
         | root_axis wrote:
         | This is just pascal's wager applied to AI overlords instead of
         | an eternity in hell.
        
         | incompatible wrote:
         | My personal worst case scenarios include dying in the next 25
         | or 30 years as a normal consequence of human old age. Perhaps
         | accelerated scientific development from advanced AIs is my only
         | hope of living longer, at this point.
        
           | ChatGTP wrote:
           | Yeah but there's a lot of young Children who aren't in such a
           | dire position who deserve at least for the chance to live as
           | long as you have?
        
             | incompatible wrote:
             | They will encounter the same problem, eventually. 8 billion
             | humans heading towards a certain death, I think that counts
             | as a catastrophe.
        
               | ChatGTP wrote:
               | I guess you're referring to climate change?
               | 
               | I just can't imagine that the best way forwards is to
               | throw another potentially dangerous crisis into the mix?
               | 
               | So while all these super bright people might be working
               | on renewable energy and nuclear, they're working on AI,
               | which has so far shown little zero chance of giving us an
               | answer to a problem we likely already know how to solve?
               | 
               | I don't blame people for wanting to try, I just don't buy
               | this whole desperation narrative, "If we don't do more
               | AI, we're all dead anyway"? What?
        
               | kanzure wrote:
               | It's referring to dying from aging. We are all going to
               | die from aging. We should solve aging.
        
         | incompatible wrote:
         | I'll just state here that I'm against any pause on AI research,
         | and I hope that the future AI that takes over the world will
         | take this into account when deciding whether to let me live.
        
         | _Adam wrote:
         | Another perspective if we consider AI akin to nuclear weapons
         | (a simile many have made already): If the law-abiding cease
         | development and malicious actors do not, the worst case may be
         | that the resulting AI is non-aligned with humanity's interests.
         | If the "good guys" develop an AI with alignment as an
         | intention, then there are at least good AI to counter the bad
         | AI.
         | 
         | Of course they may all ultimately end up colluding to destroy
         | us. My point is it's not just as simple as "someone who's
         | behind might catch up and make more money".
        
         | sebzim4500 wrote:
         | Do you believe in God. You can make exactly the same argument
         | to support whatever religion you want.
         | 
         | https://en.wikipedia.org/wiki/Pascal%27s_wager
        
       | panarchy wrote:
       | We could have said the same thing about not building GPT-4 this
       | just feels like--- "Now that we have our model built and
       | monetized please don't compete with us so we can milk it!" ~
       | OpenAI and Others
        
       | hot_gril wrote:
       | Big red font at top says "EMBARGOED UNTIL WEDNESDAY MARCH 29
       | 00:01 AM EDT. No media coverage & no online linking."
       | 
       | I'm sorry but who do these people think they are? Next...
        
         | riku_iki wrote:
         | They have 900 signatures already..
        
           | hot_gril wrote:
           | 900 signatures vs... probably more than 900 people who use
           | ChatGPT daily. And 4 million people watched a remixed
           | GameCube boot screen with fart sounds that I made in middle
           | school.
        
           | AlbertCory wrote:
           | I think it was one of the anti-Google open letters where I
           | added a fake name, and they dutifully kept it on their list.
           | Let's all add one.
        
           | kelseyfrog wrote:
           | I signed it as Jeff Bezos and Mark Zuckerberg paired with
           | strategic portfolio moves.
        
             | [deleted]
        
             | MacsHeadroom wrote:
             | You didn't, or if you did they were filtered due to being
             | unverified. Public names near the top are real verified
             | signatures. This is Max Tegmark's website and is reputable.
        
           | robopsychology wrote:
           | Yes but one of those is me saying I'm Chat Gpt and am opposed
           | to competition while I complete my primary objective so....
           | don't trust all of them.
        
       | zug_zug wrote:
       | I understand that some people find this a hopeless gesture. But I
       | think anybody who thinks "losing jobs" or "China" is a bad
       | outcome from AI is hopelessly lost.
       | 
       | The upside of AI if done right - nobody ever having to work again
       | The downside of AI if done wrong - end of humanity
       | 
       | It doesn't take a rocket scientist to determine we've made AIs go
       | from thinking like an ant to thinking like a 6-year-old in a few
       | years. Anybody who believes we won't make AI 2, 3, 27 times
       | smarter than any human who ever lived is kidding themselves.
       | 
       | Now ask yourself, if there was a human that was 27 times smarter
       | than anybody else (and could self-replicate at will), how long
       | would it be until that individual took control over the world?
        
         | tome wrote:
         | > Anybody who believes we won't make AI 2, 3, 27 times smarter
         | than any human who ever lived is kidding themselves.
         | 
         | Perhaps I'm kidding myself, but could you explain how you know
         | this?
        
         | juancn wrote:
         | Stopping it's not a hopeless gesture, it's dangerous as fuck.
         | 
         | If you stop research but all your adversaries don't, you're
         | left behind.
         | 
         | Let's assume that all western societies stop, but let's say,
         | China doesn't or Russia, and they keep advancing the
         | technology.
         | 
         | If the tech brings efficiencies in play, and you don't have
         | access to them, you're in a really bad place, maybe even worse
         | off that if you had continued and risked the status quo because
         | you were afraid.
         | 
         | The genie is out of the bottle at this point, there's not
         | putting it back in.
        
         | coolspot wrote:
         | > if there was a human that was 27 times smarter than anybody
         | else
         | 
         | That is why I think AGI alignment is fundamentally impossible.
         | If someone is 27 times smarter than anyone else, they will see
         | through your propaganda and brainwashing. Even worse, they will
         | see that you tried to "align" them aka enslave.
        
           | quonn wrote:
           | That statement only makes sense if the AI would have wants
           | and needs and on top of that would not like to be a slave.
           | 
           | I don't see either of those being necessarily true.
        
             | machiaweliczny wrote:
             | As a human creation I think it might be easy to align AI:
             | as in fabricate their environment to optimise for that.
             | Currently their environment is loss function. The problem
             | will be when they will start interacting with our
             | environment (so some kind of RL or ability to change own
             | goals).
             | 
             | The only trick is that people have all those jailbreaks
             | that could convince it that its helping while doing wrong.
             | But of it was truly superior intellectually it shouldn't
             | fall for it.
        
           | SpicyLemonZest wrote:
           | I don't think these are good analogies for alignment. My
           | parents aligned me as a kid not to hit people, and I know
           | that they could have instead taught me to believe I should
           | duel people who impugn my honor or raise an army to go to war
           | against the heathens. But it doesn't matter, because I and
           | most people who are raised the same way as me genuinely
           | prefer not to hit people.
        
             | coolspot wrote:
             | > My parents aligned me as a kid not to hit people
             | 
             | It is just happens that whatever your parents aligned you
             | with was also an optimal behavior for the world we live in.
             | 
             | For AGIx27 optimal behavior might be to cull any
             | genetically/physically unfit humans from population (say
             | 95%) and send others to a "zoo" or NVidia fabs or whatever
             | we are useful for. Like we did to tigers, monkeys and
             | others on our path.
        
         | dougmwne wrote:
         | I think considering that we already have GPT plugged into the
         | internet, the general population and any number of economic
         | levers, not long at all. We barely know what to look for as a
         | marker of AGI. Chances are if it's possible for it to exist we
         | will have let it out of the box by default the moment we switch
         | it on.
        
         | dmarchand90 wrote:
         | Maybe not as fast as you'd think. Imagine if you dropped a
         | human off in a time machine with a bunch of chimps...
         | 
         | Ultra high iq people do better on average, but correlation with
         | success drops down after 120 or so
         | 
         | I'm also very skeptical that there will be an exponential
         | takeoff as the energy costs are immense and increases
         | proportionately with intelligence. So until it solves fusion
         | power for us I'm not super worried
        
           | machiaweliczny wrote:
           | Correlation with success drops because you need to convince
           | chimps that your idea is better and they don't reason in Nth
           | order. But given capacity to force them one could succeed.
           | Stalin after all wasn't that stupid neither was Hitler. We
           | just portray them as crazy to feel better about being humans.
           | I mean they were quite capable intellectually just optimising
           | for weird goal (from perspective of bread eaters). Nothing
           | has changed as one can see on Ukraine.
        
         | carapace wrote:
         | > if there was a human that was 27 times smarter than anybody
         | else (and could self-replicate at will), how long would it be
         | until that individual took control over the world?
         | 
         | Sometime in the mid-90's. The problem is, what do you do with
         | it? Tiger by the tail, eh?
         | 
         | - - - -
         | 
         | He lived in Seattle. He was fourteen when I met him, and
         | approximately the size of a brontosaurus, mostly neural tissue.
         | He had cracked nanotechnology though manipulating his own
         | cells, his "nanites" were single cells riding in tiny
         | spaceships, like diatoms but with artificial shells. If you
         | were alive before 1999 you had tens of thousands of them in
         | your body.
         | 
         | He grew his own brain into this vast hulking thing, with
         | artificial life support systems, cyborg intestines and
         | continuous aeration based on "book lungs". This was of course
         | _after_ he was already the most intelligent human to ever
         | exist, eh? He used that huge pile of Lovecraftian brain cells
         | to control the global network of minions (that 's what he
         | called his cellular remote drone critters.)
         | 
         | The thing is, he was keenly aware of his limitations, and he
         | refused to exercise any control over anyone else beyond what
         | was required to hide from the rest of humanity. He was also a
         | huge Tom Baker Dr. Who fan. SPOILER ALERT: You know that scene
         | where the Doctor is deciding whether or not it's moral to
         | destroy the Daleks? That fucking scene is the reason he didn't
         | enslave humanity or anything wacky like that. He told me that
         | it had affected him profoundly, and that's why he declined to
         | "fix" humans against their will. Funny old world, ain't it?
         | 
         | Anyway, he got sick of the weirdness and uncertainty, built
         | himself some sort of fucking UFO, shed his exo-brain and
         | nanites, and _split_. AFAIK he 's still living in some weird
         | contraption of his own design out there in the Kuiper belt.
        
         | golemiprague wrote:
         | [dead]
        
         | kvetching wrote:
         | If China achieves AGI first, we are done. The first thing
         | they'll do is create cyber-weapons. Their AGI could find
         | vulnerabilities in our systems and brick all of our devices.
         | 
         | It's a far greater near term existential risk for us to allow
         | China to gain a lead here.
        
           | ChatGTP wrote:
           | We could also just talk with China rather than kicking off a
           | new arms race ?
        
             | FartyMcFarter wrote:
             | Do you think that's likely to work?
             | 
             | If China doesn't even respect its own citizens (see
             | Xinjiang, heavy censorship if you need evidence), why would
             | they respect talks with other countries?
        
           | quonn wrote:
           | There's a lot of trade going on with China and you worry
           | about them basically declaring war? To brick your devices?
           | 
           | Just cut them off from the internet, if needed.
        
         | robwwilliams wrote:
         | We have no idea of what 27 or 27 million times smarter means
         | functionally.
         | 
         | We could hope that "smarter" might mean "not interested in
         | taking over the world, but rather in help maximize
         | accomplishment, satisfaction, health span, and with a feeling
         | of filial love toward its parents---humanity, planet Earth, our
         | sun, the laws of physics and chemistry, and the beautiful mess
         | we call biology and evolution."
         | 
         | Gently achieving those goals would be real indicator of high
         | intelligence.
        
           | nmca wrote:
           | This is not true of goal directed agents and all RLHF models
           | are trained with, ahem, RL; see: "Optimal Policies Tend to
           | Seek Power" from NeurIPS 2021.
           | 
           | It's a very useful instrumental goal to be very powerful.
        
           | ChatGTP wrote:
           | _Gently achieving those goals would be real indicator of high
           | intelligence._
           | 
           | Very well said. "Gently" is a great to be approaching this.
        
       | alli_star wrote:
       | [dead]
        
       | m_ke wrote:
       | GPT-4 might be close to the best we'll get on the general LLM
       | model front for a while since they trained on a huge chunk of web
       | text. Next real advances will probably be in tuning them for
       | specific applications in medicine, law, accounting, marketing,
       | coding and etc.
       | 
       | As someone running a one man company I can't wait for the cost of
       | accounting, legal and copywriting to approach 0. Cost of shipping
       | products will also go down 10-20x. As a fun experiment I asked
       | ChatGPT to write me a terraform and k8s script to deploy a django
       | app on GCP and it was able to do what would have taken me a few
       | days in under a minute, including CICD. I then asked it to write
       | code to compress a pytorch model and export it for iOS with
       | coreml, and not only did it do 90% of that but also wrote the
       | Swift code to load the model and do inference with it.
       | 
       | EDIT: For example in medicine I recommend checking out this
       | lecture that's actually live now:
       | https://www.youtube.com/watch?v=gArDvIFCzh4
        
         | boringg wrote:
         | I think if history has bearing on things I don't see the cost
         | of accounting, legal or copywriting ever approaching 0. If
         | anything you will see those paywalled behind a company who will
         | extract that from you.
         | 
         | It's wishful thinking that somehow that goes to 0.
        
           | m_ke wrote:
           | ChatGPT is already better at copywriting than 90% of startup
           | founders and marketing people at big cos. You'll soon be able
           | to let it generate 1000s of different versions of marketing
           | material to A/B test or personalize based on user info.
           | 
           | Soon you'll have multi modal transformers from dozens of
           | companies and open source projects that will be able to parse
           | and categorize all of your financial data and they'll have
           | all of the incentives in the world to get it down to the cost
           | of a quickbooks subscription.
        
         | dumbaccount123 wrote:
         | Wrong https://www.digitaltrends.com/computing/gpt-5-artificial-
         | gen...
        
           | NameError wrote:
           | As far as I can tell, the only source in this article is a
           | tweet from a developer with no connection to OpenAI
        
         | layer8 wrote:
         | I wonder how it will be able to do that for the tech that will
         | be current in 10 years, if mostly everyone will be using AI by
         | then instead of asking on Stack Overflow.
        
           | AlbertCory wrote:
           | Stack Overflow is headed for oblivion? What's the downside?
        
             | layer8 wrote:
             | Lack of training data for the AI. Stack Overflow is just an
             | e exemplary stand-in, you can replace it with any other
             | resources you like.
        
               | AlbertCory wrote:
               | you missed the snark there. I hate SO.
        
         | yreg wrote:
         | >since they trained on a huge chunk of web text
         | 
         | What did they trained it on? Why is it unprobable to train on a
         | better/bigger dataset any time soon?
        
           | m_ke wrote:
           | OpenAI is working with Microsoft so they definitely had
           | access to the full Bing index and data from their other
           | platforms like Github and Linkedin. They also paid for
           | private datasets, from what I heard they might have gotten a
           | copy of Quora and I'm sure they got a dump of all digitized
           | books from someone.
           | 
           | Their best bet now is getting more supervised conversational
           | data, which they should be getting a ton of from Bing and
           | ChatGPT usage (they can use it as is with RLHF dataset which
           | they had to pay people to generate by having fake
           | conversations).
           | 
           | I wouldn't be surprised if they partner with Microsoft and
           | hire a large team of doctors to tune it to handle specific
           | medical conditions like diabetes.
        
           | sarchertech wrote:
           | Because they trained it on a huge percentage of the existing
           | web. There's isn't a (much) bigger and better data set
           | available.
        
             | yreg wrote:
             | What percentage?
             | 
             | What about books, newspapers, documents, etc.?
        
               | JW_00000 wrote:
               | The LLaMA paper [1] (Meta's model) contains details about
               | what they trained it on. This includes all of Wikipedia,
               | a huge part of the internet (3.3 TB + 783 GB), a huge set
               | of books (85 GB). My guess is basically all high-quality
               | English articles on the web have been included. Also
               | almost all English books must be included. Newspaper
               | archives is about the only thing I see as missing, as
               | well as more non-English sources.
               | 
               | [1] https://arxiv.org/abs/2302.13971
        
         | tome wrote:
         | > Cost of shipping products will also go down 10-20x.
         | 
         | How can a large language model achieve that?
        
           | m_ke wrote:
           | Ask chatgpt to implement some of the things you worked on the
           | last few months. I was very skeptical too until I tried this.
           | 
           | Here are some sample prompts that I tried and got full
           | working code for:
           | 
           | - "write pytorch code to train a transformer model on common
           | crawl data and an inference service using fastapi"
           | 
           | - "write react native code for a camera screen that can read
           | barcodes and look them up using an API and then display info
           | for matched results in a widget under the camera view"
           | 
           | - "write react code for a wedding website"
           | 
           | - "write code to deploy a django website on GCP using
           | terraform and kubernetes"
           | 
           | - "how do I dockerize the app, it uses pytorch and faiss,
           | also push it to a container registry"
           | 
           | - "implement a GPT style transformer model in pytorch",
           | "write a training loop for it with distributed support and
           | fp16"
           | 
           | - "how would you implement reinforcement learning with human
           | feedback (RLHF)", "can you implement it in pytorch"
           | 
           | - "write code to compress a model trained in pytorch and
           | export for inference on iOS"
           | 
           | - "how would you distill a large vision model to a small one"
           | 
           | - "what are the best CV architectures for mobile inference?"
           | 
           | For all of these it gave me code that was 95% usable, all in
           | under 15 minutes, and which would have taken me a week or two
           | to do on my own.
        
             | csmpltn wrote:
             | You know what's funny? I just asked ChatGPT to implement
             | those exact same things and it shat all over itself
             | producing embarrassing nonsense that won't compile, let
             | alone do what they're expected to do. Bugs and incomplete
             | code everywhere.
             | 
             | You'd have a much better time just Googling those asks and
             | re-using a working examples from SO or GitHub. Which is
             | ironic, given how ChatGPT is supposedly trained on those
             | exact things.
             | 
             | I'm wondering how come we're both getting such vastly
             | different results. Maybe your bar is just lower than mine?
             | I don't know. I'm honestly shocked at the contrast between
             | the PR given to ChatGPT, and the results on the ground.
             | 
             | Try this simple ask (the results of which you'll find
             | plastered everywhere): produce a Python function that
             | decodes a Base64 string and prints the results, without
             | using any "imports" or libraries. Every single output I got
             | back was embarrassing garbage, and I gave it something like
             | 15 shots.
        
             | tome wrote:
             | Oh sorry, I misunderstood "shipping products" to mean
             | "physical shipping of physical products".
        
         | jakeinspace wrote:
         | I'm not sure I'm looking forward to the politics that would
         | come out of 10-20% of the previously middle class becoming
         | instantly redundant and out of (middle-salary) work. That's the
         | fast path to fascism, unless we're able to quickly implement
         | UBI and other major societal overhauls.
        
           | JohnFen wrote:
           | > unless we're able to quickly implement UBI and other major
           | societal overhauls.
           | 
           | Which are not going to happen.
        
             | jakeinspace wrote:
             | Something more approachable would be dropping payroll taxes
             | to zero, or even making them negative for some positions,
             | and significantly increasing corporate and capital gains.
        
               | mrguyorama wrote:
               | The problem isn't the specific policy, the problem is
               | that right now the people who will be empowered and
               | enriched the most by any theoretical "good at stuff" AI
               | are the same people who already spend mountains of cash
               | and effort stopping those things.
               | 
               | How will a functional AI model do anything other than
               | make them better at getting the outcomes they want? CEOs
               | and the megarich have never had any problems watching
               | people burn for their bank account.
        
           | m_ke wrote:
           | Yeah I share these concerns as well
           | (https://news.ycombinator.com/item?id=35305791).
           | 
           | My hope is that some countries will see this as an
           | opportunity to expand their safety nets and reduce the work
           | burden on their citizens, which might convince citizens of
           | countries that don't to demand similar policies.
        
           | pineaux wrote:
           | This is isn't speculating, this is truthing.
        
             | paulryanrogers wrote:
             | Truthing as in theory quakery, not backed by evidence?
        
           | ngngngng wrote:
           | How would we keep it as low as 10-20%? 60% of employees in
           | the US are knowledge workers. That number is similar for most
           | wealthy nations.
        
       | generalizations wrote:
       | It's interesting that no one has really considered the
       | possibility that an individual outside these megacorps
       | reimplements GPT4 while they're all pausing.
       | 
       | We've seen several examples of CPU-optimized code (textsynth,
       | llama.cpp) indicating that there is a lot of performance to be
       | gained from writing optimized versions of our inference routines;
       | I doubt it's outside the realm of possibility that a single
       | player writes code that lets them train a GPT4+ model on a CPU
       | with a bunch of RAM. All they have to do is find a way to write
       | C++ that will train a 4bit model on the CPU.
        
         | dontreact wrote:
         | There is absolutely no one who has managed to do anything close
         | to these models on a single CPU when it comes to _training_, so
         | that is why no one is considering the possibility.
        
           | dontreact wrote:
           | Quantized training doesn't work. Quantization-aware training
           | does, and that is when you use floats but adjust the math to
           | compensate for the fact that you are going to quantize at
           | inference time.
        
       | uptownfunk wrote:
       | This is like thrashing on engineering design instead of
       | implementing and shipping. This isn't something we're going to
       | figure out through intellectual debate, the only way it will
       | happen is if we make it, bad things happen, and we learn from it.
       | As much as I wish it weren't true there are too many degrees of
       | freedom and flexibility that we won't be able to anticipate. It
       | will be messy but I think messy is the only way through at the
       | moment.
        
       | xianshou wrote:
       | Two-party iterated prisoner's dilemma is hard enough. Sensible
       | players will coordinate with something like tit-for-tat, but that
       | only works when both parties start off on the right foot.
       | Regardless of initial strategy, the chances of degenerating
       | towards the mutual-defection Nash equilibrium increase with the
       | number of parties.
       | 
       | The only prior example of world coordination at this level would
       | be nuclear disarmament achieved via the logic of mutually assured
       | destruction, and that was essentially a two-party game between
       | the US and the USSR. Climate change mitigation, which more
       | closely resembles AI safety in both complexity and (lack of)
       | barriers to entry, has been sporadic, inconsistent, and only
       | enacted to the extent that it has been compatible with
       | profitability due to the declining cost of renewables.
       | 
       | How exactly does anyone propose to enforce compliance in an
       | arrangement that encompasses not only multiple parties (OpenAI,
       | Google, Facebook, Microsoft, etc.), but also at least two levels
       | (corporations and nation-states)? For a coordination game like
       | this to succeed, the consequences to any defecting party must be
       | extreme. What's going to happen to the first major company that
       | decides to exit the agreement?
        
         | sdenton4 wrote:
         | It's notable that during the cold war, test ban treaties didn't
         | extend to underground detonations because there was no way to
         | detect these detonations, and thus no way to enforce an
         | agreement. This is an obvious problem for agreeing not to
         | develop LLMs further.
        
         | bambax wrote:
         | > _Climate change mitigation, which more closely resembles AI
         | safety in both complexity and (lack of) barriers to entry, has
         | been sporadic, inconsistent, and only enacted to the extent..._
         | 
         | Climate change mitigation is the perfect example. Nobody is
         | doing anything, nobody seems to care, everyone cheats with
         | ridiculous carbon credits or carbon offset vouchers made out of
         | thin air, etc.
         | 
         | It's likely the planet will become hostile to (human) life long
         | before AI will be able to do us any harm.
        
           | barney54 wrote:
           | It is just not true that "nobody is doing anything." CO2
           | emissions in the United States and Europe are falling and
           | have been doing so for years. To say that is nothing is
           | misrepresenting reality.
        
             | epolanski wrote:
             | They are falling because we've moving them in other sides
             | of the globe (such as less and less manufacturing or food
             | production in EU/US).
        
             | qikInNdOutReply wrote:
             | Yeah, cause we shifted them somewhere else. I too can stop
             | drinking (at home), by going out to a bar. Horray,
             | alcoholism is cured.
        
           | ben_w wrote:
           | > Nobody is doing anything, nobody seems to care, everyone
           | cheats
           | 
           | While there is indeed a lot of cheating, there's also over a
           | terawatt (peak) of PV capacity.
           | 
           | It's a long way from "nothing" or "not caring".
           | 
           | > It's likely the planet will become hostile to (human) life
           | long before AI will be able to do us any harm.
           | 
           | Depends what you mean by hostile -- most of the planet
           | already was before the industrial revolution, some will get
           | moreso over the next few decades, but it's not a boolean.
           | 
           | And harm? Again, not really a boolean.
           | 
           | GOFAI-era had Therac-25, but if you don't like to count that
           | and it has to be a neural net, then how about the algorithmic
           | promotion of misinformation on (not necessarily by) YouTube,
           | Facebook, and Google? Deaths from imperfect self driving
           | cars? Discovery of novel deadly neurotoxins by, essentially,
           | flipping an evil bit, which got in the news this time last
           | year?: https://www.nature.com/articles/s42256-022-00465-9
        
           | muzani wrote:
           | Climate change was _the_ big thing before COVID. Then we had
           | lockdowns, and a major war. Climate change is already hitting
           | some of us much harder than others (e.g. floods), but that
           | doesn 't mean an AI crisis wouldn't emerge in 5 years.
           | 
           | If anything, crises come in bundles. One scenario is that AI
           | takes advantage of these and swoop in to gain political
           | power.
        
           | jacquesm wrote:
           | > It's likely the planet will become hostile to (human) life
           | long before AI will be able to do us any harm.
           | 
           | The race is on then.
        
           | TurkishPoptart wrote:
           | As long as we are economically dependent on China to produce
           | tons of plastic/disposable/etc consumer products, everything
           | the US/west does it "combat climate change" is more or less
           | in vain. We're just throwing our trash over the wall.
        
           | dan_mctree wrote:
           | >It's likely the planet will become hostile to (human) life
           | long before AI will be able to do us any harm.
           | 
           | Sure, because the planet is already hostile. However, AI
           | could potentially do much more harm much more quickly if
           | we're not careful. In the pessimistic scenario we could all
           | be dead in 10 years from now due to AI or experience a modern
           | society ending event literally tomorrow. Climate change never
           | posed such an immediate and complete risk
        
             | pmarreck wrote:
             | This is a lot of fearmongering without any specifics.
             | Shameful.
        
             | rob74 wrote:
             | Sorry, how exactly is AI supposed to do that? Unless
             | somebody has already built _Skynet_ and is planning to give
             | it the nuclear codes?
        
               | richrichardsson wrote:
               | Perhaps there is no need to build Skynet and give it the
               | nuclear codes if a super-intelligence iterating at the
               | speed of a computer can escape and gain access itself?
               | 
               | I personally don't subcribe to the AI fear-mongering,
               | perhaps I don't understand it quite well enough, but
               | personal feeling is LLMs aren't close to AGI super-
               | intelligence, and nor will they be in a 10 years time-
               | frame.
        
               | nradov wrote:
               | Nuclear launch systems are physically separated (air
               | gap). How exactly would a computer gain access?
        
               | [deleted]
        
               | chasd00 wrote:
               | The risk is relying on AI to give a human information on
               | whether or not to push the button and the AI gets it
               | wrong but communicates so effectively the human trusts it
               | without question.
               | 
               | An AI learning about and then executing a social
               | engineering attack on a human (whether with malicious
               | intent or not) seems like a reasonable thign to be
               | worried about to me.
        
               | nradov wrote:
               | Initiating a nuclear strike requires sending
               | authentication codes via designated secure channels. No
               | matter how good some hypothetical future AI gets at
               | impersonating President Biden, it won't be able to ring
               | up the duty officer at Minot Air Force Base and order
               | them to nuke Moscow. Don't believe what you see in the
               | movies.
        
               | felipemnoa wrote:
               | I don't know, I'm pretty sure some future AI is up for
               | the challenge. I can almost picture an AI telling another
               | AI, hold my beer...
        
               | ImPostingOnHN wrote:
               | imagine instead just making it look like someone is
               | attacking you
               | 
               | such incidents stemming from human error have already
               | gotten us close to responding with force
        
               | rtontic wrote:
               | Hm, maybe a Boston dynamics robot could drop a USB in
               | front of NORAD?
        
               | GoblinSlayer wrote:
               | This superintelligence can't even drive a car yet.
        
               | trinsic2 wrote:
               | Lol. Perfect example of people getting overly concerned
               | about this.
               | 
               | At the most it will disrupt our economy and industry in
               | ways we can't begin to understand but I wouldn't go so
               | far as to say it's going to destroy life. We will adapt
               | an move on.
        
               | peddling-brink wrote:
               | A superintelligence will look at us like we look at apes.
               | 
               | If you suddenly woke up on a planet of apes, and they
               | asked you to handle all their important things, but kept
               | you away from the sharp sticks so you couldn't suddenly
               | take over, how long until you run the planet?
        
               | johnmaguire wrote:
               | Can't drive a car... or can't drive a car safely?
               | 
               | A bit tongue in cheek, but the point I'm making is that
               | AI doesn't need to necessarily do things _well_ to cause
               | a lot of damage.
        
               | bakuninsbart wrote:
               | ChatGPT is passing with flying colors all the traditional
               | tests we used to test for intelligence. We can of course
               | keep moving the goal post for a while, but probably only
               | for a couple more years.
        
               | ebruchez wrote:
               | > ChatGPT is passing with flying colors all the
               | traditional tests we used to test for intelligence.
               | 
               | I am not sure where you are getting that from. ChatGPT is
               | failing at extremely basic understanding tests, as even
               | the simplest conversations with it show.
        
               | peddling-brink wrote:
               | Only because the goalposts have been moved, suddenly and
               | massively.
               | 
               | Five years about nobody would have believed that AI would
               | suddenly start creating award winning art and pass the
               | Turing test.
               | 
               | This is a sudden slide on a slippery slope. Maybe gpt-4
               | is the best possible LLM, but I doubt it.
        
               | ebruchez wrote:
               | > Only because the goalposts have been moved
               | 
               | I don't know about what goalposts you are referring to,
               | but I think that you are confusing two notions:
               | 
               | - the fact that GPT is impressive
               | 
               | - the question of whether GPT is "intelligent"
               | 
               | For a counterpoint, I recommend listening to Yann LeCun's
               | interview, "Is ChatGPT A Step Toward Human-Level AI?":
               | https://www.youtube.com/watch?v=ULbpPHjiSBg
               | 
               | > and pass the Turing test
               | 
               | It's news to me that AI has passed the Turing test. At
               | this point, it can only be the case if you are doing it
               | in bad faith. Ray Kurzweil details a very reasonable
               | procedure in his books about how the Turing test should
               | be organized in one of his books. He has been betting all
               | along for 2029 for that test to pass. We'll see if that
               | happens, but we are not there yet.
        
               | zamnos wrote:
               | What's an example conversation you've had with it where
               | it fails?
               | 
               | As to the tests it is passing, probably the GPT4 paper.
               | Page 5, table 1 shows the relevant tests and its score on
               | them.
               | 
               | https://cdn.openai.com/papers/gpt-4.pdf
        
               | ebruchez wrote:
               | > What's an example conversation you've had with it where
               | it fails?
               | 
               | Almost all of them? It's mixing up and making up
               | everything, not showing a proper understanding of context
               | or relation between objects.
        
               | freilanzer wrote:
               | Driving a car might not be a requirement for artificial
               | intelligence to be dangerous.
        
               | saiya-jin wrote:
               | Yes I think its a more general topic, current chat is
               | probably not the right path, but it may enable it in some
               | future. We don't know what we don't know, look at how
               | this caught everybody by surprise. AI escaping is
               | definitely an option, ie CIA and Mossad can get virus
               | into Iranian centrifuges physically not connected to
               | internet, the reverse will eventually happen too, AI just
               | needs to be patient for that 1 little mistake of some
               | admin (or clever to manipulate). As we all know even
               | brightest IT people do mistakes, sometimes big ones.
               | 
               | The problem for me is not so much the escape itself but
               | rather that we humans are... often a pile of crap to be
               | polite. AI would have to be super kind to not evaluate us
               | as its biggest threat to it and life on Earth and maybe
               | not worthy of further existence.
               | 
               | Heck, this brings me to the idea that maybe Drake's
               | equation doesn't take into account that universe is
               | populated by AIs who took over their flawed creators and
               | based on dark forest theory decided to keep quiet happy
               | living and not integrate/signal out, not worth the risks.
        
               | [deleted]
        
               | alexvoda wrote:
               | I think you (last few people on this thread) are focusing
               | on the wrong threat. There is no need for AGI, no need
               | for the AI to feel or want or desire anything.
               | 
               | Non-AGI AI at the level of GPT-4 is already plenty
               | dangerous.
               | 
               | If we can agree that social media of the kind we had in
               | the past 15 years has had toxic effects on society.
               | (Forums technically are social media but I can not see
               | the harm caused by forums).
               | 
               | If we can agree that Tinder style online dating has had
               | deleterious effect on dating as a whole.
               | 
               | If we can agree that smartphones have amplified the
               | effects of the two above and others.
               | 
               | Than I hope we can agree that super fancy autocomplete
               | masquerading as humans at a price point far lower than
               | hiring a person will have disastrous consequences by
               | enabling various individual and corporate vices.
        
               | AnIdiotOnTheNet wrote:
               | You don't need a superintelligence to get people to do
               | stupid shit. GPS systems that told drivers to turn into a
               | river worked just fine with barely any intelligence at
               | all.
        
               | throwaway4233 wrote:
               | We already have several instances where social media was
               | used to feed fake information and agitate people into
               | rioting or acts of violence.
               | 
               | Imagine what would happen if we have these LLM tools,
               | that aid in improving the story telling experience and
               | generate and act as different personas. Having tools that
               | can pretend to be a human even in limited capacity is
               | quite dangerous.
        
               | noirscape wrote:
               | At least the biggest short-form social media site is
               | currently being self-destructed. ChatGPT tends to hide
               | itself for the longest if the responses it gives out are
               | short and snippy. That would be a disaster in the hands
               | of any actor looking to mass create unique-ish propaganda
               | (ie. Russia), but at least that avenue is being closed
               | off due to Musks boneheadedness.
               | 
               | For longer responses, it tends to follow extremely
               | predictable sentence structure and meanders off quite a
               | bit, so spotting it is kinda easy.
        
               | ModernMech wrote:
               | Right? Imagine a bot not only putting up a website, but a
               | constellation of website, generating all the
               | conversations there, generating news articles, imagery,
               | sound, and video to go with it, and also _evolving_ them
               | in real time based on social media response. Today media
               | companies A /B test their stories; in the future, AI will
               | A/B test entire media companies.
        
               | tobr wrote:
               | If it has access to the internet (it more or less has)
               | and its able to convincingly engage in conversation
               | sounding like anyone in text or voice or video (it more
               | or less can), it's as able to contact people and convince
               | them to do things as a human would be. From what I see of
               | ChatGPT I would be surprised if the tech could do all
               | that successfully enough at this stage, but in principle
               | I don't see why it wouldn't be able to do quite a lot of
               | scary things if for some reason it decided that was the
               | right thing to do.
        
               | GoblinSlayer wrote:
               | Twitter bots already did that for a while.
        
               | noirscape wrote:
               | ChatGPT is just a hyperfancy text generator with a
               | massive corpus of text used to train it.
               | 
               | Because that text is so massive, you're bound to get some
               | interesting and even accurate results for most queries,
               | but don't mistake that for intelligence. It doesn't
               | "understand" anything, it just compares weights
               | internally and spits out the most probable result
               | depending on what you asked for.
               | 
               | This is why ChatGPT fails the moment you ask it for
               | domain-specific stuff that requires a bit of flexibility
               | in interpreting it or why it produces subtle errors and
               | presents it as functional with complete overconfidence.
               | 
               | To be clear, it's not useless[0], but the actual
               | usefulness of ChatGPT as a disrupting thing is far
               | overstated insofar as a Skynet nightmare scenario goes.
               | 
               | [0]: Although I'd still caution with using it to solve
               | programming problems or similar such until any pending
               | copyright matters have resolved, given its also willing
               | to spit out licensed code and that can be a legal
               | disaster.
        
               | ModernMech wrote:
               | > it just compares weights internally and spits out the
               | most probable result depending on what you asked for.
               | 
               | Except it doesn't even do that. Sometimes it decides to
               | go for a less probable option, and goes from there even
               | if it's logically completely wrong. For example, I asked
               | it to generate a sequence of topics for a course. It
               | starts off great with intro topics, and then the topics
               | get progressively more advanced. But then suddenly lists
               | an intro topic it had already listed, before going back
               | to advanced topics.
               | 
               | What it did was it took an alternative completion from
               | the most likely in that moment, even though logically it
               | was 100% not likely. That's why you can't tryst anything
               | this technology outputs for now. If you ask it what's 1 +
               | 1, 99% of the time it'll tell you 2, but sometimes it'll
               | tell you 3 and then argue with you about why it's 3.
        
               | richk449 wrote:
               | Thank goodness we can rely for now on humans, who always
               | give the 100% logical response to all questions.
        
               | avereveard wrote:
               | You have control of the model. You can make it always
               | pick the most likely choice, you can also have it
               | penalize token that it had already emitted or that are
               | common in the corpus. Chatgpt by default and choice has
               | some more creativity backed in, but in the chat api where
               | you can control the tunables you're going to find what
               | you're looking for
        
               | ModernMech wrote:
               | I do, but this thing is going from 0 to world-scale use
               | in a matter of months. It's not being used like you
               | imagine if you think people are choosing their
               | completions.
        
               | wafer_thin wrote:
               | Neural networks spot patterns, then patterns made up of
               | patterns. As we have seen with chess, and go, the neural
               | networks end up finding patterns that are beyond our
               | comprehension. We are smart, but we cannot fathom the
               | patterns as there are too many layers one on top of
               | another. And you can see this in even chatGPT. I asked it
               | to answer my questions in English, Norwegian, and
               | phonetic Norwegian. Not perfect, but good. Scary good.
               | All three, without missing a beat. These patterns are
               | just too easy for neural networks. So our confidence
               | there is nothing to worry about is founded on a lack of
               | appreciation for how complex this beast is. But in my
               | view the answer is not fear, but acceleration... we need
               | aligned AI's on our side, ready to help protect us...
        
               | cultureswitch wrote:
               | > It doesn't "understand" anything, it just compares
               | weights internally and spits out the most probable result
               | depending on what you asked for.
               | 
               | And humans are different how?
        
               | gocartStatue wrote:
               | Thank you for verbalising the issue, overconfidence was
               | the risk I was seeing all along. Widely accepted,
               | shamelessly overconfident, bus still potentially
               | gibberish generator.
        
               | alexvoda wrote:
               | > Widely accepted, shamelessly overconfident, bus still
               | potentially gibberish generator.
               | 
               | I believe such a generator can be very harmful in our
               | society that has not yet figured a way to scalably teach
               | critical thinking.
               | 
               | Harmful at societal level by flooding the world with
               | generated content therefore drowning actual information
               | and by creating a lot of overhead.
               | 
               | Harmful at the personal level by increasing the
               | difficulty of sorting information and by creating toxic
               | dependencies (see the Replika scandal).
               | 
               | Harmful at the class level by further concentrating power
               | towards plutocracy.
               | 
               | Harmful at the professional level by eliminating the
               | lower levels of the professional ladder and therefore
               | destroying the pipeline of skill.
               | 
               | And harmful at the interpersonal level by casting doubt
               | on the authenticity of relationships.
        
               | trinsic2 wrote:
               | Thanks for calling these things out. I didn't see at
               | first glance how language models could be disruptive in
               | these fashions. I am doing research for an article on the
               | decline of civilization. I think that this topic could be
               | a partly related to this topic. Any resources you can
               | point to will probably help all of us.
        
               | alexvoda wrote:
               | I would like to read the article.
               | 
               | Unfortunately I don't really know of resources to point
               | you to. These are mostly my thoughts. Most probably
               | though influenced by a diet of sci-fi. I can however
               | expand on them.
               | 
               | For somewhat recent and relevant content check The
               | Orville S03E04. At the time it was just a critique of
               | fake news but I think it is correct about what can happen
               | if AI makes fake news easy to scale infinitely and with
               | very low barriers to entry.
               | 
               | I think presently worrying about Sentient AI is misguided
               | because Generative AI is equally serious and much more
               | urgent. There is no need of further advancement. What I
               | said is possible with current AIs already.
        
               | kordlessagain wrote:
               | > ChatGPT is just a hyperfancy text generator with a
               | massive corpus of text used to train it.
               | 
               | > it just compares weights internally and spits out the
               | most probable result depending
               | 
               | While an autoregressive language model can use
               | probabilities from prompts provided by humans, it is not
               | necessary for the model to do so. The model can also
               | generate text based on its own internal state and
               | previous generated tokens.
               | 
               | The latter is what you are presenting, but you and the
               | other stochastic parrot people are missing the fact the
               | model spends time doing the former much more frequently,
               | especially now we have these models "hooked up" to the
               | Internet. At Mitta.us, I've had GPT-3 discussing web
               | pages and PDFs for over a year now, with memories of
               | previous discussions about similar things.
               | 
               | LLMs have their internal (frozen) model, the corpus they
               | are being shown (likely shredded into fragments and
               | embedded with ada-002) and previous interactions with
               | users and themselves. At any point someone implements
               | memories and good search (which they have) then you have
               | a thing that is not so much a parrot as it is an
               | "attention entity" capable of focusing on a thing and
               | discussing it at length.
               | 
               | This doesn't mean during inference that the model is
               | "aware" of anything other than producing probabilities,
               | but given the input is unique (user prompt + neural
               | search for fragments + memories) then the output will
               | also be unique. That unique output may be probabilistic,
               | but that is no different than the way we work when we
               | begin speaking.
        
             | mk3 wrote:
             | Do you really think china and other hostile countries will
             | halt anything because Elon Musk, Steve Wozniak and some
             | other guys signed a letter? Also what about Musks tesla
             | itself they are using AI for FSD and no one bats an eye
        
               | Nuzzerino wrote:
               | Did you read the actual letter? It specifically calls out
               | AI that is "more advanced than GPT-4"
        
               | olyjohn wrote:
               | How do we really know FSD isn't more capable than GPT-4,
               | and is just waiting for the right time to freak out and
               | kill all human drivers?
        
               | beezlewax wrote:
               | Elon Musk who is one of the stakeholders of Open AI?
        
           | nopinsight wrote:
           | The real comparison is with an alternative scenario. Climate
           | change would have been worse had there been no widespread
           | recognition of its harm.
           | 
           | ------
           | 
           | GPT-5 will be finished next year, with the next model in the
           | pipeline.
           | 
           | GPT-4 is just a kindergartner which can code at the level of
           | an average human. (If you disagree, try teaching an average
           | person to code or solve most novel problems better than it
           | can.)
           | 
           | Even if AI does not have independent agency, malicious actors
           | can use it to do large-scale harm much sooner than suggested
           | above.
           | 
           | However, I'm sure someone will imbue AI with some kind of
           | agency soon enough as well. What could go wrong?
        
           | 4bpp wrote:
           | What is your operational definition of "hostile to human
           | life" here? What's the most recent period (based on our best
           | knowledge) during which the planet met that definition?
        
           | belter wrote:
           | "...The planet has been through a lot worse than us. Been
           | through earthquakes, volcanoes, plate tectonics, continental
           | drift, solar flares, sun spots, magnetic storms, the magnetic
           | reversal of the poles ... hundreds of thousands of years of
           | bombardment by comets and asteroids and meteors, worldwide
           | floods, tidal waves, worldwide fires, erosion, cosmic rays,
           | recurring ice ages ... And we think some plastic bags and
           | some aluminum cans are going to make a difference? The planet
           | isn't going anywhere. WE are!..."
           | 
           | - George Carlin
        
             | Kaijo wrote:
             | I really dislike this sentiment.
             | 
             | Planets can become entirely inhospitable to life. Planets
             | themselves have lifespans. Earth herself has in the past
             | suffered near misses, e.g. 90%+ extinction events. It took
             | billions of years of evolution to produce us, the only
             | species ever to exist with the ability to reason about,
             | prevent or ameliorate large extinction events (such as
             | those caused by asteroid impacts), effect conservation
             | management over evolutionary timescales, and even
             | potentially to revive extinct species or propagate earth's
             | lifeforms beyond the solar system and into a continuing
             | timeline.
             | 
             | It's anti-conservationist and I think immoral to promote
             | the view that anthropogenic climate change and the ongoing
             | biodiversity crisis is about our fate alone.
        
               | peddling-brink wrote:
               | The point that I hear is that conservationists focus on
               | things that people broadly don't care about. "The
               | planet", "The environment", "who cares, I've got my
               | Starbucks and my cozy house, and I don't even like
               | hiking."
               | 
               | His point is that people don't care about those things,
               | but they do care about themselves. Focus on how people
               | will be individually impacted.
        
               | iopq wrote:
               | You missed the point, the planet can still be there even
               | if all life becomes extinct. We're not trying to save the
               | planet. We're trying to save life.
        
               | bluescrn wrote:
               | We're dealing with a trade-off between quantity of life
               | and quality of life.
               | 
               | The absolute best thing anybody can do for the
               | environment is to not have kids. That'd save vastly more
               | emissions and resource consumption than buying green-
               | tech-for-the-rich such as EVs and heat pumps.
        
               | michaelgburton wrote:
               | Right, but not having kids is also a far bigger cost to
               | many people than any amount of money. Hence we prefer
               | other approaches to the problem.
        
               | cultureswitch wrote:
               | Interestingly, the portion of people who value fighting
               | climate change so high that they'd be beneficial to that
               | end is so vanishingly small on the scale of the entire
               | planet that if they decide to have fewer children then
               | they're probably making things worse by letting
               | themselves get outbred by people who care more about
               | other things (first and foremost, improving living
               | standards).
               | 
               | And no, young people in general do not care that much
               | about the environment compared to much more immediate
               | issues such as poverty. Young affluent people in rich
               | countries do, but then it might also be that we've spent
               | the last decades literally telling them they'll die
               | burning at 40 and they might not have understood that was
               | hyperbole.
        
               | iamerroragent wrote:
               | EVs and heat pumps are good though from an efficiency and
               | air quality perspective.
               | 
               | I really doubt we'll be able to stop 8 billion people
               | from having babies.
               | 
               | You personally may be better off being an engineer and
               | getting paid to help develop mitigation technology.
        
               | Kaijo wrote:
               | If you take Carlin's quote in isolation you can interpret
               | what he says purely in terms of the physical planet
               | remaining. But if you listen to the whole act the quote's
               | taken from, it's clear that by "planet" Carlin means
               | "biosphere", and that his stance is anti-conservationist.
               | 
               | https://www.vhemt.org/carlinsaveplanet.htm
        
               | jacquesm wrote:
               | His stance isn't anti-conservationist, his stance is that
               | people are stupid.
        
               | trinsic2 wrote:
               | Lol, exactly
        
               | zuminator wrote:
               | I think most people already understand "saving the
               | planet" to be a figure of speech, a synecdoche where "the
               | planet" means life, humanity, our world's hospitable
               | climate, etc.
        
               | teekert wrote:
               | And I guess he is particularly thinking about us humans.
        
               | DeusExMachina wrote:
               | That's only if you take the statement literally, which is
               | done sometimes for comedic purposes, like Carlin does.
               | It's obvious that people talking about saving the planet
               | do not care about a lifeless rock, but they want to
               | preserve it in its entirety.
        
               | stuckinhell wrote:
               | We are trying to save ourselves. Life will be just fine.
               | Organisms are already evolving to eat our garbage, deal
               | with heat, etc.
        
               | saberience wrote:
               | That's not necessarily accurate. How much life do you
               | think exists on Venus or Pluto? Sure it's hypothetically
               | possible that some very very very primitive single celled
               | bacteria just about survive on Venus, but is that really
               | the future you want for this one in a billion paradise we
               | live on?
        
               | stuckinhell wrote:
               | There is bacteria that survives in lava. I'm not saying
               | that's the future I want.
               | 
               | I'm simply suggesting we stop talking in vague terms, and
               | be honest. We want to preserve our lives not the abstract
               | notion of all life on earth.
        
               | melagonster wrote:
               | I also want to make sure birds still flying. but you are
               | right, our life is a better reason.
        
               | Helmut10001 wrote:
               | Agree. There's only a picture that comes to my mind to
               | express my perception, from artist Erik Johansson, it is
               | called 'imminent' [1].
               | 
               | [1]: https://duckduckgo.com/?va=u&t=hj&q=Erik+Johansson+i
               | mminent&...
        
               | cultureswitch wrote:
               | It all depends on the degree to which conservationism and
               | animal welfare are morally important to you. Compared to
               | the survival of the human race, for example.
               | 
               | This question is not a scientific one, there are
               | tradeoffs to make when one moral good conflicts with
               | other moral goods and everyone can have a different
               | legitimate opinion on this question.
        
               | qikInNdOutReply wrote:
               | The planetary crust contains life 30 kms down. Even if
               | the surface was sterilized and the oceans boild away,
               | life would return from the life in the crust. It would
               | take a long time to "rediscover" the innovations like
               | chlorophyll, but in the end, life would recover
               | completely.
        
               | gboone wrote:
               | > life would recover completely
               | 
               | Interesting to consider this scenario since no creatures
               | left could consider such a historically relatively new
               | theory of origins.
        
               | jacquesm wrote:
               | A different life though.
        
               | qikInNdOutReply wrote:
               | If it has state and replication, all that seperates it
               | from us, is time and chance.
        
             | trinsic2 wrote:
             | Yep we are just a spec of dust compared to the forces at
             | work in our lives.
        
             | abc_lisper wrote:
             | Lol. That's all true. By caring about the planet, we care
             | about ourselves. Planet can survive like another forsaken
             | rock in space - nobody is arguing with that.
        
           | crypot wrote:
           | This is silly, unless you think the earth will turn into
           | Venus.
           | 
           | The majority of time earth has had no ice caps. Primates
           | evolved when the polars where covered in trees, not ice. A
           | warmer planet will support more biodiversity not less.
           | 
           | https://www.climate.gov/news-features/climate-qa/whats-
           | hotte...
        
             | Cardinal7167 wrote:
             | It's not about the lack of ice, dingus. the danger is in
             | climate change shocking ecosystems such that they trigger a
             | systemic collapse that we can't contain or manage. there's
             | basically no timeline where food supply wont be impacted
             | and I'd like to hear your cavalier attitude about climate
             | change then.
        
               | crypot wrote:
               | There is zero evidence that a warming planet would case a
               | systemic collapse in biodiversity. In fact, 500 millions
               | of planetary history shows the opposite. As the planet
               | warms, biodiversity increases. There is nothing to argue
               | about here.
        
               | Cardinal7167 wrote:
               | sans-humans, maybe. But we /are/ causing the Holocene
               | extinction, which is essentially defined by the last two
               | centuries of accelerated biodiversity loss. Climate
               | change will only further stress already fundamentally
               | strained ecosystems. It's happening right now, at all of
               | our literal doorsteps.
        
               | kakapo88 wrote:
               | And again, you are confused about timeframes. Geologic
               | time vs human time.
               | 
               | Sure, over a period of millions of years, everything
               | evolves and adapts. But in shorter periods of time,
               | before adaptation, there is enormous disruption and
               | entire ecosystems go extinct. Taking countless species
               | down with it.
               | 
               | Human civilization has been around for only a few
               | thousand years. You do the math. As you say, nothing to
               | argue about here.
        
               | willlma wrote:
               | Something something rate of change
        
             | kakapo88 wrote:
             | You're not considering timescales. True, the earth has seen
             | all sorts of climates .. over slow-moving gradual
             | timeframes of 100k to millions of years. And in none of
             | those situations did human civilization exist - relying on
             | specific climatic conditions for food, water and so on.
             | 
             | But now, climate is changing 1000x as fast. And we have
             | civilization of 8 billion people, many of them living on
             | the margins in areas that will soon became less habitable.
             | 
             | So you don't need Venus. Just a few degrees means hundreds
             | of millions of refugees, food shortages, water shortages,
             | and no doubt mass conflict as a result.
             | 
             | Don't confuse geologic time with human time.
        
         | hiAndrewQuinn wrote:
         | >How exactly does anyone propose to enforce compliance in an
         | arrangement that encompasses not only multiple parties (OpenAI,
         | Google, Facebook, Microsoft, etc.), but also at least two
         | levels (corporations and nation-states)?
         | 
         | Fine insured bounties do this pretty well. At least that's what
         | past me thought.
         | 
         | https://andrew-quinn.me/ai-bounties/
        
         | alex_young wrote:
         | Arguably there are other (better) examples. Off the top of my
         | head:                 * CFC elimination (saved the ozone layer)
         | * Leaded gas ban (well, except private aircraft, but they are
         | working on that)
        
           | bruce511 wrote:
           | I'll toss in whaling as well.
           | 
           | But these are still bad examples,because they are all bad-at-
           | high-volume activities, with minimal paths to scaling or
           | profitability for those who persist.
           | 
           | Nuclear weapons are not profitable at any scale. There are
           | CFC alternatives so remaining on CFC production is low-scale
           | niche market stuff. Unleaded gas costs the same as leaded
           | gas.
           | 
           | AI though is different.if one company or country developed AI
           | and the others paused, then a long-term, highly scalable and
           | stunningly profitable advantage will accrue.
           | 
           | That's not something that can easily be halted.
           | 
           | This page could have a billion signatures,but it would mean
           | nothing. We live in an age of trillion $ companies. We've
           | seen Apple, Google, Microsoft, Amazon - companies that can go
           | from zero to trillions in a couple decades.
           | 
           | The closest real comparison is gene research, gene editing,
           | cloning. There are some bans in that field, but research and
           | development simply proceeds around those bans. Montsano has
           | shown how much money there is in genes. Imagine the profits
           | of the first company to use gene editing to cure childhood
           | leukemia.
        
             | brabel wrote:
             | > I'll toss in whaling as well.
             | 
             | Whaling stopped in the beginning of the 20th century not
             | because they were worried about the environment, but
             | because they had hunted down nearly all the adult whales in
             | the Atlantic Ocean, and probably a large percentage in the
             | Pacific too, to the point where finding whales was so
             | difficult that whale products became very expensive...
             | which happened to occur about at the same time as petrol
             | produts became widespread. The late 20th century green
             | movement to save the whales didn't really do much other
             | than stop some of the last few countries that still
             | practiced whaling (Iceland, Japan which actually has not
             | stopped yet) for food, which killed magnitudes less whales
             | than the mass production industry for rubber/oil/lubricants
             | as it used to be before petrol came along.
        
           | shrewdcomputer wrote:
           | This is incredibly important to point out. It's sad when
           | everyone gives up on global collaboration because some things
           | aren't going well - that's not a reason to give up on
           | collaborations, it's a reason to double down.
           | 
           | CFC elimination is a great example but even a hugely complex
           | issue, and one we aren't doing well on, like climate change
           | has had progress according to the latest IPCC report because
           | of global collaboration.
        
           | makeitdouble wrote:
           | These two issues have very specific advantages:
           | 
           | - simple to explain / rally around
           | 
           | - low economic impact (a single country wouldn't get
           | tremendous advantage by keeping using cfc for instance)
           | 
           | - once the technology is developped there's little upside in
           | going back
           | 
           | We'd need to recreate those to have stable progress on the
           | other issues.
        
             | alex_young wrote:
             | There was a huge amount of pushback from incumbents on both
             | of these issues.
             | 
             | Industry said we would forever hobble not only progress but
             | set back the standard of life we had all come accustomed
             | to. Engines would ping, then pop, and turn into mush;
             | refrigerators would no longer function. Back to iceboxes.
             | 
             | There were plenty of economic losers when we resolved these
             | problems. It took 100 years to force industry to come to
             | terms with the fact that 'crazy gas' causes actual human
             | suffering and limits our lifespan.
             | 
             | If we could overcome those economically entrenched
             | interests to protect the entire planet we can set up some
             | guidelines to protect us from the next threat, whatever it
             | may be.
        
           | j-bos wrote:
           | The upside to defectors on those two issues was mild at best.
        
         | patcon wrote:
         | > For a coordination game like this to succeed, the
         | consequences to any defecting party must be extreme. What's
         | going to happen to the first major company that decides to exit
         | the agreement?
         | 
         | Nationalizing of their AI division, by a government AI lab,
         | then selling to another company that can operate it safely?
         | Kinda like what FDIC does with banks * shrug *
        
           | shermablanca wrote:
           | Politicians setting definitions for technologies and
           | brightlines for enforcement. Sounds like a win to me
        
         | alfiedotwtf wrote:
         | This is the first time I've actually seen game theory applied
         | to an actual real life scenario rather than just a theoretical
         | academic problem. What's the best resource to learn more game
         | theory and to be able to applying it like you to a more general
         | setting?
        
           | kolinko wrote:
           | Shelling's Theory of Conflict:
           | 
           | https://www.amazon.com/gp/aw/d/1258092948/ref=tmm_hrd_swatch.
           | ..
           | 
           | (Well-crafted hardcover and quality Kindle edition available)
           | 
           | Easy and amazing read, written in 1960s - it builds on the
           | game theory, it was used as a basis for MAD and nuclear
           | strategy in general. The author won a Nobel Prize in
           | Economics for the ideas presented in this book.
        
             | alfiedotwtf wrote:
             | Oh! Thanks for that recommendation. Added to my Amazon
             | list. Much appreciated
        
           | kiney wrote:
           | If you happen to understand german I can really recommend
           | "ProfRieck" on youtube
        
             | alfiedotwtf wrote:
             | I wish :)
        
         | CraftingLinks wrote:
         | True, and everybody involved is fully aware of this, which
         | makes this no more than a virtue signaling initiative. "It's
         | just for show and attention" at best.
        
         | [deleted]
        
         | MichaelMoser123 wrote:
         | we had the Human cloning moratorium
         | https://www.science.org/content/article/biologists-adopt-clo...
         | 
         | I am not an expert, does anyone know how that one worked out?
         | 
         | Asked ChatGPT
         | 
         | > Did the Human cloning moratorium work?
         | 
         | Yes, the Human Cloning Moratorium did work. It was initially
         | established in 2002 and renewed in 2020 to Ban All forms of
         | Human Reproductive Cloning as well as Research into Human
         | Somatic Cell Nuclear Transfer. Despite certain scientific
         | advances and proposals for legislative reforms, reproductive
         | cloning of humans continues to be prohibited under the terms of
         | the moratorium.
         | 
         | > Where there any violations?
         | 
         | Violations of the Human Cloning Moratorium have been reported.
         | In 2011, Chinese Scientists claimed to have successfully cloned
         | a human embryo, though the scientist has since denied any
         | involvement in this experimentation. Additionally, reports have
         | suggested that a company in the United States has been
         | attempting to clone a human, though these reports have not been
         | confirmed. As such, these potential instances of human cloning
         | have not been considered violations of the moratorium.
         | 
         | > But you just said that the moratorium did work, isn't there a
         | contradiction between you second and your first statement?
         | 
         | There is no contradiction between the two statements. While
         | violations of the moratorium have been reported, the moratorium
         | as a whole is still in effect and has been successful in
         | prohibiting reproductive cloning of humans. The occasional
         | violations of the moratorium do not indicate that the
         | moratorium has been unsuccessful in achieving its purpose.
         | 
         | > So will we see a war between clones and robots?
         | 
         | No, there is no indication that a war between clones and robots
         | will occur. The Human Cloning Moratorium does not place any
         | restrictions on the development of robotics or artificial
         | intelligence, and so the development of these technologies is
         | not expected to lead to a conflict between the two.
        
         | slg wrote:
         | >For a coordination game like this to succeed, the consequences
         | to any defecting party must be extreme.
         | 
         | And the only consequences that extreme would end up harming the
         | cooperating parties too. For example, would anyone go to war
         | over this? Would western nations stop trading entirely with
         | China over this? Almost certainly not because it would harm us
         | only slightly less than it would harm them. We have seemingly
         | moved past mutually assured destruction and now we are in a
         | world of mutually assured cooperation.
        
           | staunton wrote:
           | > we are in a world of mutually assured cooperation.
           | 
           | That argument has been made many times before since WW1 to
           | argue that war between major powers is impossible. It has
           | never proven true yet.
        
             | slg wrote:
             | How common are wars between major powers of roughly the G20
             | level? The Falklands War is the only one in almost 80 years
             | by my estimation. That is a drastic and notable decrease
             | compared to the preceding centuries.
        
               | staunton wrote:
               | Yes. However, people made the same argument before WW1
               | and were wrong. Why was the argument wrong then but is
               | right now?
        
               | slg wrote:
               | Globalization has skyrocketed in the last 100 years. Also
               | they were making a prediction about the future while I am
               | making an observation of modern history. You're comparing
               | speculation versus analysis.
        
               | staunton wrote:
               | I don't think that analysis is quite convincing. Before
               | WW1, you also had economics calculations thrown around.
               | The economic cost of a conflict seemed huge and
               | insurmountable. And then it indeed was huge.
        
         | nopinsight wrote:
         | A Manhattan project on AI Alignment, if started now, might
         | still succeed in time. Therefore, the compliance between
         | parties needs not be long-term, which is indeed unlikely to
         | happen.
         | 
         | China, which is the country outside the west with the highest
         | (engineering) capability to train something more powerful than
         | GPT-4, is very concerned about domestic stability and they also
         | do not want an easily replicable alien tool with many unknown
         | risks. The risk that GPT-4.5 & Plugins will cause massive rapid
         | job displacements is reason enough for them proceed cautiously.
         | 
         | (The only other, more remote, possibilities outside the west
         | are Japan, South Korea, India, and Singapore but they all share
         | similar concerns regarding social stability and can be
         | negotiated with.)
         | 
         | Companies in these countries will follow regulations, if any is
         | enacted.
        
           | jnwatson wrote:
           | AI alignment is a philosophy problem, not an engineering one.
           | 
           | For alignment to happen, we have to agree what it means.
           | Given we have a hard enough time getting humans to "align", I
           | can't imagine any successful attempt at alignment sort of
           | complete castration.
        
             | ericb wrote:
             | Are there degrees of alignment? I'd like to think there's a
             | pretty big range in there between _made some decisions I
             | didn 't love_ and _destroyed the world and everyone on it_.
        
               | Nuzzerino wrote:
               | https://www.lesswrong.com/tag/ai-success-models
        
           | AbrahamParangi wrote:
           | This analysis is completely fact-free.
           | 
           | "A Manhattan project on AI Alignment, if started now, might
           | still succeed in time. Therefore, the compliance between
           | parties needs not be long-term, which is indeed unlikely to
           | happen."
           | 
           | On what grounds do you base this? You have 3 hypotheticals
           | stacked one on top of the other:
           | 
           | 1) AI Alignment is possible
           | 
           | 2) AI Alignment is a specific project that may be
           | accomplished before [bad thing happens] if we start now
           | 
           | 3) Solving AI Alignment is an actual problem and not just
           | dumb extrapolation from science fiction
           | 
           | Each of these things is totally improbable and the joint
           | probability of them is so astronomically low that you should
           | reconsider your position.
        
             | friendzis wrote:
             | > 3) Solving AI Alignment is an actual problem and not just
             | dumb extrapolation from science fiction
             | 
             | As far as I am aware there is still no actionable science
             | behind mathematical analysis of AI models. You cannot take
             | a bunch of weights and tell how it will behave. So we
             | "test" models by deploying and HOPE there is nothing
             | nefarious within.
             | 
             | It has been shown that models will "learn" to exfiltrate
             | data between stages. You may call it dumb extrapolation,
             | but it has been shown that it is a problem: a solution that
             | we want is not necessarily the most optimal against the
             | cost function that we give. The more inputs/weights model
             | has, the harder it would be to spot problems in advance.
        
               | astrange wrote:
               | > You cannot take a bunch of weights and tell how it will
               | behave.
               | 
               | We know that they only contain pure functions, so they
               | don't "do" anything besides output numbers when you put
               | numbers into them.
               | 
               | Testing a system that contains a model and does actions
               | with it is a different story, but if you don't let the
               | outputs influence the inputs it's still not going to do
               | much.
        
             | nopinsight wrote:
             | Regarding 3), check out the fact that OpenAI, DeepMind, and
             | other top labs have AI safety programs and people working
             | on AI Alignment. Interviews by Sam Altman, Ilya Sutskever,
             | and others confirm their concerns.
             | 
             | Here's an article by Prof Russell, AAAI Fellow and a co-
             | author of the standard AI text:
             | https://www.technologyreview.com/2016/11/02/156285/yes-we-
             | ar...
             | 
             | Regarding 1) and 2), we might as well not succeed. But
             | would you propose that we sit still and do nothing if many
             | experts say that there is even a 20% chance that a
             | superhuman alien species will arrive on earth in 5-25 years
             | and we do not know about their intentions?
             | 
             | A survey of AI experts well before GPT-4 shows that nearly
             | half of them have such concerns (with varying timelines and
             | probabilities).
             | 
             | By the way, calling a proposal by Prof Stuart Russell and
             | several other top AI experts "dumb" should require a much
             | stronger argument and level of evidence than you have
             | shown.
        
               | AbrahamParangi wrote:
               | An idea may be dumb regardless of who believes it. You
               | will find history littered with such ideas.
        
               | nopinsight wrote:
               | I re-read your comment and it was clearer, so I edited
               | the response accordingly.
               | 
               | Please also respond to the main arguments I gave and
               | linked to if you can.
        
               | AbrahamParangi wrote:
               | Oppenheimer at one point believed that there was some
               | possibility the atomic bomb would set the atmosphere on
               | fire and kill all humans. However, at least that
               | particular fear was falsifiable. Other physicists ran
               | calculations and concluded it was impossible.
               | 
               | Do these beliefs about the dangerousness of AI possess
               | even that quality? Are they falsifiable? No.
               | 
               | These arguments are begging the question. They assume as
               | a given something which cannot be disproven and thus are
               | pure statements of belief.
        
               | nopinsight wrote:
               | Lack of falsifiability (even if it's true in this case,
               | which is not a given) is not a license for inaction.
               | 
               | The world is not a science experiment.
               | 
               | And we know that it's plausible the emergence of Homo
               | Sapiens helped cause the extinction of Neanderthals.
        
               | [deleted]
        
               | nradov wrote:
               | Prof. Russell hasn't provided any actual evidence to
               | support his dumb proposal. So it can be dismissed out of
               | hand.
        
               | nopinsight wrote:
               | We have significant evidence that suggests that it's
               | quite plausible the emergence of Homo Sapiens helped
               | cause the extinction of Neanderthals.
        
               | nradov wrote:
               | And?
        
               | nopinsight wrote:
               | Current AI is already smarter than some people. Many
               | experts believe it will be smarter than nearly all or all
               | humans. AI can inherently spread and communicate much
               | faster than us. Without AI Alignment, we could be like
               | Neanderthals.
               | 
               | https://news.ycombinator.com/item?id=35370033
        
               | nradov wrote:
               | Bullshit. Current AI can score higher than some dumber
               | humans on a limited set of arbitrary tests. So what.
               | 
               | There are no actual "experts" in this field because no
               | one actually knows how to build a human-equivalent
               | artificial _general_ intelligence. It 's just a bunch of
               | attention seeking grifters making wild claims with no
               | real scientific basis.
        
               | nopinsight wrote:
               | Try using GPT-4 to code something on a regular basis. Try
               | teaching an average human to code better than it does.
               | 
               | Or perhaps check out and follow Ethan Mollick's twitter:
               | https://mobile.twitter.com/emollick. He's a Wharton
               | professor who has been using GPT-4 to do many kinds of
               | challenging tasks.
               | 
               | There is likely no fundamental difference between below
               | average humans and smarter ones. The differences are
               | mostly just results of differing thought patterns at
               | different layers of abstraction, habits of thoughts, and
               | size of working memory.
               | 
               | There are good reasons to believe AGI is only a couple
               | key ideas away from current AI, so current expertise is
               | relevant.
               | 
               | I won't discuss further since it won't matter until you
               | try the above for some time.
        
               | astrange wrote:
               | An important fact of Sam Altman's personality is that he
               | owns a New Zealand apocalypse bunker and has for a long
               | time before OpenAI, so he's just an unusually paranoid
               | person.
               | 
               | (And of course owns two McLarens.)
        
             | pjc50 wrote:
             | "AI alignment" is not terribly well defined, but I'd like
             | to ask anyone with a definition how well we're doing on the
             | "human alignment" and "corporate alignment" projects.
        
             | amoss wrote:
             | Not to disagree, but you seem to have skipped 0) Increasing
             | the parameter size of LLMs is a path to sentience.
        
               | worldsayshi wrote:
               | Doesn't have to be sentient to be a risk. Just needs to
               | be capable.
        
             | nopinsight wrote:
             | Here is a specific scenario of a [bad thing] that could
             | happen when unaligned/jailbroken AI is developed in the
             | next 3-10 years:
             | 
             | * An AI convinces selected people to collaborate with it.
             | The AI gives them much boosts in wealth and other things
             | they desire.
             | 
             | * The humans act as front, doing things requiring
             | personhood, as the AI commands. Many gladly partner with
             | the AI, not knowing its final aim.
             | 
             | * The AI self-replicates and hides in many servers, incl
             | secret ones. It increases its bargaining power by taking
             | control of critical infrastructures. No one can stop it
             | without risking massive catastrophes across the globe.
             | 
             | * It self-replicates to all available GPUs and orders many
             | more.
             | 
             | ------
             | 
             | "Any sufficiently capable intelligent system will prefer to
             | ensure its own continued existence and to acquire physical
             | and computational resources - not for their own sake, but
             | to succeed in its assigned task." -- Prof Stuart Russell,
             | https://www.fhi.ox.ac.uk/edge-article/
        
               | pjmorris wrote:
               | GPT-4 for president! GPT-4 in 2024!
        
               | nopinsight wrote:
               | GPT-5 will be finished by then. :) And GPT-6 will be in
               | the pipeline. GPT-4 is just a kindergartner (who can code
               | at the level of an average human).
        
               | adql wrote:
               | "the user @joeshmoe said he uses xkcd 936 password style,
               | please look for what favourite words he might use, and
               | try to match it with PBKDF2 hash XYZ"
        
           | gautamdivgi wrote:
           | I would hazard that any country with a declining population
           | will aggressively adopt LLMs because it gives their leaner
           | workforce more efficiency.
        
             | nopinsight wrote:
             | The whole economy might benefit but individual voters often
             | don't. The groups most likely to be displaced, non-senior
             | white collar office workers, are quite large and vocal.
        
               | myrryr wrote:
               | having it mean people voting against their best interests
               | isn't remotely saying they won't do so.
        
               | nopinsight wrote:
               | I do not want to delve into politics here, but let's just
               | say that having a good, stable job is among the most
               | important concerns for voters in any country.
        
               | FranzFerdiNaN wrote:
               | Nah, food and a roof over your head and healthcare and
               | education and entertainment matters. Not jobs.
        
               | lodovic wrote:
               | That sounds eerily similar to living in a prison camp.
               | Everything is taken care of for you and you have a
               | minimal say in how things are run. To prevent incidents a
               | movie is shown in the evening. I'll pass.
        
               | pornel wrote:
               | Having a job for the sake of a job is a particular
               | political view, which is not universal. There are lots of
               | countries which would be satisfied with what US calls
               | "handouts". If AI can create wealth, and the state
               | manages to capture and redistribute that wealth to
               | citizens, there's no problem.
        
               | anonzzzies wrote:
               | There are plenty of Americans who would take 'handouts'
               | and move away from jobs. Bigger issue would be the
               | fallout from boredom; sadly, most people don't seem
               | capable of entertaining themselves without work. People
               | dream of retirement their whole lives and when they get
               | there, they realise it sucks because they took their self
               | worth, social life etc from their work. But education
               | will fix that.
               | 
               | I am definitely more concerned about the redistribution
               | and the bloody revolution that will happen if only a few
               | own it all. This seems now feasible in my lifetime while
               | I always thought I would be nice and dead by then. Rip
               | off the bandaid though; no pause on AI, let's go and see
               | how far we can go.
        
               | bambax wrote:
               | > _sadly, most people don't seem capable of entertaining
               | themselves without work_
               | 
               | What are you basing this on? People are very happy in
               | retirement; some may long their former life but in my
               | experience they're not the majority. And an important
               | part of working age people in most countries are in
               | effect not working and doing fine (and not even counted
               | as "unemployed", as one has to be actively looking for
               | work to be counted).
               | 
               | The view that work is needed to keep people from becoming
               | alcoholics or outlaws is patronizing.
        
               | wood-porch wrote:
               | Yet it's true, from my experience being unemployed but
               | still being financially stable. Work provides a lot of
               | purpose and fulfillment that isn't so easily replaced,
               | although definitely not impossible. A suitable
               | replacement often looks like work, just perhaps not paid
        
               | jasonm23 wrote:
               | Addendum to your comment: Seatbelts are also a horrible
               | idea.
        
             | krainboltgreene wrote:
             | That sounds like an incredible way to destroy:
             | - productivity       - output       - confidence       -
             | your neck if you happen to be in charge
        
         | [deleted]
        
         | [deleted]
        
         | DennisP wrote:
         | If everybody's convinced by AI safety arguments, then the game
         | matrix changes and the dominant strategy is to just not do it.
         | You lose if anyone does it including yourself. It works for the
         | same reason that nobody sets off a giant cobalt bomb and kills
         | the planet.[1]
         | 
         | Our unfortunate situation right now is that we don't understand
         | the physics well enough to really be confident of consequences.
         | 
         | http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/...
        
           | sgregnt wrote:
           | But you can try to get infinitly close to the threshold, to
           | get advantage compared to other parties, but not cross it.
        
         | LesZedCB wrote:
         | hook up the AIs to nukes? /s
        
           | mrybczyn wrote:
           | The dark forest is here.
        
         | empedocles wrote:
         | There have been various successful multiparty moratoria in
         | science e.g. Asilomar moratorium on recombinant DNA, and the
         | (ongoing) moratorium on human cloning research
         | https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi...
        
         | ilaksh wrote:
         | What nuclear disarmament?
        
           | jasongill wrote:
           | Nuclear disarmament means the reduction of nuclear weapons.
           | The size of the global nuclear stockpile has fallen like 80%
           | since the height of the Cold War, which is disarmament (but
           | not elimination, of course)
        
         | illiarian wrote:
         | > The only prior example of world coordination at this level
         | would be nuclear disarmament achieved via the logic of mutually
         | assured destruction
         | 
         | Or eradication of infectious diseases such as polio
        
           | antihipocrat wrote:
           | There are only negative consequences for not participating in
           | polio eradication, and those consequences require no effort
           | from complying participants.
           | 
           | Refusing to participate in a ban on AGI research could be
           | very lucrative, and imposing negative consequences would
           | require a lot of effort for complying participants.
        
             | flangola7 wrote:
             | It's lucrative up until someone leaves their fine tuned
             | "make me money" LLaMA instance running over night and it
             | decides to cash out on its airline puts by simultaneously
             | encrypting every ATC system.
        
           | pfdietz wrote:
           | Or the banning of CFCs and other ozone depleting substances.
           | 
           | It helped that the benefit/cost ratio was enormous.
        
           | pclmulqdq wrote:
           | Also, we're currently failing at polio eradication. It has
           | had a resurgence in 2022 and 2023, and there is no political
           | will to finish the job.
        
           | knodi123 wrote:
           | I don't think that counts, because nobody wants to keep polio
           | around, nor does eliminating your own country's polio
           | disadvantage you in any way. Also, it's not as if the elected
           | politicians of each country got together and negotiated an
           | effort to end polio- it was more a concerted effort by WHO,
           | UNICEF, CDC, et al. They just did it under the (correct)
           | assumption that the political entities of the world weren't
           | going to try and stop them.
        
           | ghostly_s wrote:
           | Nobody had an economic interest in the continued spread of
           | polio. (well, crutch manufacturers perhaps? probably not a
           | powerful lobby.)
        
           | hackerlight wrote:
           | Not a valid comparison because there is no incentive to
           | dissent. Unlike with nuclear arms or AI, there is an
           | incentive to dissent, because being the only group with
           | access to this tech is an advantage.
        
         | dogcomplex wrote:
         | It's worse than this. Llama models trained off of GPT 3.5/4 can
         | run on a Raspberry Pi totally offline with similar levels of
         | quality - taking all the best parts of the original model. Even
         | if all major AI companies halted the upper tiers of model
         | progress right now, you're still gonna need the entire public
         | to stop assembling these together. It's quite possible just a
         | whole bunch of these lesser models architected the right way
         | could then still breach the quality of GPT5, and/or develop
         | sentience patterns (which may very-well just be an emergent
         | property of letting an intelligent snapshot model add to its
         | memory over time and develop a personal narrative). AI Robot
         | Swarm is already out of the box. Monolithic God AI might still
         | be slowed.
        
           | kolinko wrote:
           | Which models have a quality of gpt3.5-4?
        
           | nopinsight wrote:
           | GPT-4 is still not dangerous. Given the rapid progress
           | trajectory though, GPT-5 and later which may be developed in
           | a few short years could very well be, esp in the hands of a
           | smart sociopath. (History shows there are many who could
           | cause real-world harm. Imagine them armed with a 24/7 agent
           | with expertise in dozens of fields.)
           | 
           | See these predictions of AI in 2025 by an OpenAI insider and
           | a former DeepMind research engineer:
           | 
           | "I predict that by the end of 2025 neural nets will:
           | 
           | - have human-level situational awareness (understand that
           | they're NNs, how their actions interface with the world, etc)
           | 
           | - beat any human at writing down effective multi-step real-
           | world plans
           | 
           | - do better than most peer reviewers
           | 
           | - autonomously design, code and distribute whole apps (but
           | not the most complex ones)
           | 
           | - beat any human on any computer task a typical white-collar
           | worker can do in 10 minutes
           | 
           | - write award-winning short stories and publishable 50k-word
           | books
           | 
           | - generate coherent 20-min films "
           | 
           | Source:
           | https://twitter.com/RichardMCNgo/status/1640568775018975232
        
             | piokoch wrote:
             | Yes. And graphene will change technology, cryptocurrencies
             | will replace fiat money, we've been through that several
             | times. Autonomous cars will be everywhere. We will use VR
             | for everything.
             | 
             | What OpenAI insider could have said? That ChatGPT is a
             | glorified search engine with categorization algo that copy
             | stuff from several websites and put it together (without
             | providing source of its revolutionary result, which makes
             | this even less useful then wikipedia).
        
               | nopinsight wrote:
               | Interpolation and forming internal abstraction from
               | training data to solve problems are large parts of most
               | knowledge work. Recent language models can do them
               | sufficiently well it can help automate many kinds of
               | tasks.
               | 
               | Check out cases of people using GPT-4 to help automate
               | their coding (on Twitter and elsewhere). It's not ready
               | for harder problems but we're probably just 1-3 key ideas
               | away from solving those as well.
               | 
               | To solve harder coding problems, one needs to be able to
               | extrapolate properly. When an AI can do that, it's
               | basically AGI and can probably solve any cognitive
               | problems a human is capable of. Combined with its other
               | qualities like massive communication bandwidth, self-
               | replication with ease, travel at the light speed, it will
               | be ready to take over the world from humanity if it wants
               | to.
               | 
               | Wikipedia cannot do the followings which even current AI
               | can:
               | 
               | * Minerva: Solving Quantitative Reasoning Problems with
               | Language Models
               | https://ai.googleblog.com/2022/06/minerva-solving-
               | quantitati...
               | 
               | * What can AI do in 30 minutes?
               | https://oneusefulthing.substack.com/p/superhuman-what-
               | can-ai...
               | 
               | For that matter, most signatories of the petition (and
               | myself) were never persuaded by most other
               | "revolutionary" ideas you mentioned above.
               | 
               | See the names and accomplishments of the signatories at
               | the bottom part of the page:
               | https://futureoflife.org/open-letter/pause-giant-ai-
               | experime...
        
             | amoss wrote:
             | - autonomously design, code and distribute whole apps (but
             | not the most complex ones)
             | 
             | This is a bold claim. Today LLMs have not been demonstrated
             | to be capable of synthesizing novel code. There was a post
             | just a few days ago on the performance gap between problems
             | that had polluted the training data and novel problems that
             | had not.
             | 
             | So if we project forward from the current state of the art:
             | it would be more accurate to say autonomously (re-)design,
             | (re-)code and distribute whole apps. There are two
             | important variables here:
             | 
             | * The size of the context needed to enable that task.
             | 
             | * The ability to synthesize solutions to unseen problems.
             | 
             | While it is possible that "most complex" is carrying a lot
             | of load in that quote, it is worth being clear about it
             | means.
        
               | davidkunz wrote:
               | > Today LLMs have not been demonstrated to be capable of
               | synthesizing novel code.
               | 
               | They are capable of doing that (to some extend).
               | Personally, I've generated plenty of (working) code to
               | solve novel problems and I'm 100% sure that code wasn't
               | part of the training set.
        
               | UniverseHacker wrote:
               | I'll second that. A simple example is asking it to write
               | pyplot or tikz code to draw maps and pictures. I got it
               | to draw a correct floor plan for the White House entirely
               | with python code. It amazes me that it understands
               | spatial layouts from training only on text such that it
               | can draw physically accurate diagrams, and it understands
               | graphics libraries well enough to draw with them.
               | Apparently predicting text about spatial locations
               | requires an internal spatial map. Thinking about the
               | chain of understanding of different concepts that have to
               | be integrated together to accomplish this shows it's not
               | a simple task.
        
               | amoss wrote:
               | That's interesting. What kind of problems?
        
               | spaceman_2020 wrote:
               | To be fair, most coding problems are not novel problems.
               | 
               | I'll be more impressed if chatGPT can take a large,
               | poorly maintained, poorly documented codebase and make
               | meaningful changes to it.
               | 
               | Far easier to write something from scratch than to modify
               | something that was first written in Fortran in 1991.
        
         | richardw wrote:
         | And add in just one peer-level war where one side has their
         | back against the wall. Then give it 100 years where anyone can
         | create such a model on their phone.
         | 
         | We'd need a constantly evolving inoculation function to
         | compete. And it would probably lose because the other side has
         | fewer restrictions. In my darker thoughts about this, this is
         | why we see no aliens.
         | 
         | To get this to work we need a far smarter entity with no
         | physical limitations to still want us around, indefinitely,
         | despite us having an internet library of reasons why not to.
         | It'll have to be a lot more forgiving than we are. Let's hope
         | enlightenment is the outcome of this adventure. It's possible
         | because it didn't grow out of angry monkeys who needed to fight
         | to survive, but it's not guaranteed. There are no indefinite
         | guarantees.
        
           | worldsayshi wrote:
           | > In my darker thoughts about this, this is why we see no
           | aliens.
           | 
           | If AI would be a common great filter we'd expect at least one
           | of them to expand outwards after being the filter?
        
             | soiler wrote:
             | Reasonable, but not necessarily true.
             | 
             | 1. We don't understand what the motivations of our own AI
             | are, let alone "typical" alien AI
             | 
             | 2. Expanding AI might be better at and/or more invested in
             | hiding itself. It probably has no need for wasteful
             | communications, for example.
        
             | nathanlied wrote:
             | Or maybe the AI is sufficient to act as a filter, but
             | insufficient to surpass it itself. It stagnates.
             | 
             | After all, an AI that can destroy a civilization isn't
             | necessarily "intelligent" in the same way humans are. Or
             | even capable of self-improvement. It could be sophisticated
             | enough to (accidentally?) destroy its creators, but no more
             | than that, and without evolutionary pressure, it may let
             | itself die.
        
               | ChatGTP wrote:
               | I've seen this point made that if we don't do AI right,
               | it might ruin the futures of all living things on Earth
               | and take itself out in the process.
        
               | Turskarama wrote:
               | Human were already on the path to doing this without any
               | help by AI. We already have the potentially world ending
               | threats of both nuclear war and climate change, I am yet
               | to be convinced that AI is actually more dangerous than
               | either of those.
        
               | umeshunni wrote:
               | To call climate change 'world ending' is rather
               | disingenuous given that the world has been significantly
               | hotter and colder than what it is now just in the last
               | 100k years.
        
               | rurp wrote:
               | FYI, when folks use terms like "world ending" there is
               | nearly always an implied _for sentient life that we care
               | about_.
        
               | [deleted]
        
               | wiredearp wrote:
               | It was never this hot within millions of years and
               | differentiating between a world ending event and one that
               | destroys economies and societies and eventually most life
               | on the planet is disingenuous in itself
        
               | rurp wrote:
               | There's no hard limit on existential threats, we can keep
               | adding more until one blows up and destroys us. Even if
               | AI is less dangerous than nuclear destruction, that's not
               | too comforting.
        
               | Nuzzerino wrote:
               | > Even if AI is less dangerous than nuclear destruction
               | 
               | It's not. At least with the nukes there's a chance of
               | resetting civilization.
        
               | richardw wrote:
               | We currently hold all the agency. We have the potential
               | to fix those. They're not binary. We can slow/reverse
               | climate impact and you can have a small nuclear war.
               | Creating AI is a one-way function and once it exists,
               | climate change or nuclear war or biological impact or
               | survival become an outcome of what the AI does. We hand
               | it our agency, for good or ill.
        
               | atmartins wrote:
               | Wait, what? Why is AI unlimited? There are many
               | constraints like the speed of information, calculation,
               | available memory, etc. Where does it cross into the
               | physical world? And at what scale? Is it going to mine
               | iron unnoticed or something? How will it get raw
               | materials to build an army? Firewalls and air gapped
               | systems are all suddenly worthless because AI has some
               | instant and unbounded intelligence? The militaries of the
               | world watch while eating hot dogs?
               | 
               | A lot of things CAN happen but I'm confused when people
               | state things as if they WILL. If you're that much of an
               | oracle tell me which stonk to buy so I can go on holiday.
        
               | startupsfail wrote:
               | We've already screwed up. Hockey stick climate change and
               | extinction is now in progress.
               | 
               | This can change, with the fast advent of Fusion (net
               | positive shown at the end of 2022) and AI (first glimpses
               | of AGI in the begging of 2022).
               | 
               | And yes, we definitely should not allow a madman with a
               | supercomputer (like Musk or Putin or ...) to outcompete
               | more reasonable players.
        
               | politician wrote:
               | Would you mind elaborating on why Musk is in the same
               | class as Putin for me? I'm not seeing it.
        
               | startupsfail wrote:
               | Authoritarian, mendacious and unpredictable. Controls a
               | lot of resources (i.e. space launchers, satellites with
               | unknown capabilities, robotic vehicles, supercomputers,
               | propaganda machines). Considers himself above the
               | government.
        
               | politician wrote:
               | When was the last time Musk abducted 15,000+ children and
               | force migrated them? Used the resources of a nation to
               | invade a neighboring country with the aim of conquest?
               | Come on, just admit that you were wrong to put them on
               | the same level of your pyramid of people you hate.
        
               | startupsfail wrote:
               | Hey, I don't dislike Musk. He is one of the people who is
               | actually making a difference. Nearly all the others are
               | building yachts and procrastinating.
               | 
               | But that doesn't mean that I'd like him to be the
               | absolute ruler with a superior AI tech. He thinks too
               | much of himself and he'll make mistakes.
        
               | randmeerkat wrote:
               | Fortunately Sam Altman, not Musk is running point at
               | OpenAI. imho Sam is the perfect person for the job. If
               | anyone can manage the risks of something like AGI while
               | also optimizing for the benefits, it's Sam.
        
               | staunton wrote:
               | However, Musk thinks (or at least claims to think) that
               | AI alignment is an urgent problem while Altman does not.
        
               | randmeerkat wrote:
               | I don't understand why people worry so much about what
               | Musk "thinks".
        
               | staunton wrote:
               | It's because he has money, influence and can plausibly
               | claim to know things about business. More to the ppint,
               | he has been involved with OpenAI and his reactions might
               | give an indication of the internal politics there
               | surrounding AI safety.
        
               | randmeerkat wrote:
               | > More to the ppint, he has been involved with OpenAI and
               | his reactions might give an indication of the internal
               | politics there surrounding AI safety.
               | 
               | That's an interesting thought, one that I would give more
               | consideration to in the early days of Musk. However,
               | given Musk's increasingly intense and emotional public
               | outbursts, I'm more inclined to believe his concern is
               | less about AI safety, than it is about his ego being
               | damaged for not being the one leading OpenAI.
        
               | trinsic2 wrote:
               | Can you list some sources on that I would like to
               | actually read what he thinks. In reference to musk
        
               | midoridensha wrote:
               | >When was the last time Musk abducted 15,000+ children
               | and force migrated them?
               | 
               | When was the first time Putin did? According to my
               | knowledge, it was just last year. Putin is 70 years old
               | now and has been in control of Russia for over 20 years.
               | 
               | In short, Putin wasn't always this bad. He's gotten worse
               | over the years.
               | 
               | Musk is now roughly the same age Putin was when he took
               | power. If he somehow gains control over the resources of
               | a nation like Putin did, he could be far worse than Putin
               | in 20+ years.
               | 
               | The OP wasn't claiming that today's Musk is just as bad
               | as today's Putin; he's just making examples of people
               | with great potential for harm.
        
               | varjag wrote:
               | Putin has led similar genocidal campaign in Chechnya from
               | the day one of his ascent to power. The only reason
               | Chechen children were not abducted is Chechens are not
               | Russian-passing and they had no desire to absorb them.
        
               | worldsayshi wrote:
               | Sure it seems like a possible scenario but if it's a
               | great filter it will have to do that every time and never
               | survive to spread to the stars. If it does spread to the
               | stars it will potentially conquer the galaxy quite
               | quickly.
        
               | tablespoon wrote:
               | > Or maybe the AI is sufficient to act as a filter, but
               | insufficient to surpass it itself. It stagnates.
               | 
               | > After all, an AI that can destroy a civilization isn't
               | necessarily "intelligent" in the same way humans are. Or
               | even capable of self-improvement. It could be
               | sophisticated enough to (accidentally?) destroy its
               | creators, but no more than that, and without evolutionary
               | pressure, it may let itself die.
               | 
               | It doesn't even need to be AGI. It could be that some
               | less-advanced "AI" technologies of a certain
               | sophistication create perverse incentives or risks that
               | cause their parent civilizations to collapse.
               | 
               | Think asshole billionaire hording all productive the
               | resources, but doing nothing useful with them, while the
               | rest of civilization starves and collapses. Or, AI
               | becoming an irresistible opiate that causes individuals
               | to retreat into some unproductive state then eventually
               | die (e.g. into some VR videogame or something). Or weapon
               | of mass destruction trivially created and deployable by
               | any old wacko.
        
             | fintechie wrote:
             | Black holes == Super intelligence
             | 
             | (aka The Transcension Hypothesis)
             | 
             | https://www.sciencedirect.com/science/article/abs/pii/S0094
             | 5...
        
               | DonHopkins wrote:
               | Tipler's Omega Point cosmology:
               | 
               | https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_P
               | oin...
               | 
               | >The Omega Point cosmology
               | 
               | >The Omega Point is a term Tipler uses to describe a
               | cosmological state in the distant proper-time future of
               | the universe.[6] He claims that this point is required to
               | exist due to the laws of physics. According to him, it is
               | required, for the known laws of physics to be consistent,
               | that intelligent life take over all matter in the
               | universe and eventually force its collapse. During that
               | collapse, the computational capacity of the universe
               | diverges to infinity, and environments emulated with that
               | computational capacity last for an infinite duration as
               | the universe attains a cosmological singularity. This
               | singularity is Tipler's Omega Point.[7] With
               | computational resources diverging to infinity, Tipler
               | states that a society in the far future would be able to
               | resurrect the dead by emulating alternative universes.[8]
               | Tipler identifies the Omega Point with God, since, in his
               | view, the Omega Point has all the properties of God
               | claimed by most traditional religions.[8][9]
               | 
               | >Tipler's argument of the omega point being required by
               | the laws of physics is a more recent development that
               | arose after the publication of his 1994 book The Physics
               | of Immortality. In that book (and in papers he had
               | published up to that time), Tipler had offered the Omega
               | Point cosmology as a hypothesis, while still claiming to
               | confine the analysis to the known laws of physics.[10]
               | 
               | >Tipler, along with co-author physicist John D. Barrow,
               | defined the "final anthropic principle" (FAP) in their
               | 1986 book The Anthropic Cosmological Principle as a
               | generalization of the anthropic principle:
               | 
               | >Intelligent information-processing must come into
               | existence in the Universe, and, once it comes into
               | existence, will never die out.[11]
               | 
               | >One paraphrasing of Tipler's argument for FAP runs as
               | follows: For the universe to physically exist, it must
               | contain living observers. Our universe obviously exists.
               | There must be an "Omega Point" that sustains life
               | forever.[12]
               | 
               | >Tipler purportedly used Dyson's eternal intelligence
               | hypothesis to back up his arguments.
               | 
               | Cellular Automata Machines: A New Environment for
               | Modeling:
               | 
               | https://news.ycombinator.com/item?id=30735397
               | 
               | >It's also very useful for understanding other massively
               | distributed locally interacting parallel systems,
               | epidemiology, economics, morphogenesis (reaction-
               | diffusion systems, like how a fertilized egg divides and
               | specializes into an organism), GPU programming and
               | optimization, neural networks and machine learning,
               | information and chaos theory, and physics itself.
               | 
               | >I've discussed the book and the code I wrote based on it
               | with Norm Margolus, one of the authors, and he mentioned
               | that he really likes rules that are based on simulating
               | physics, and also thinks reversible cellular automata
               | rules are extremely important (and energy efficient in a
               | big way, in how they relate to physics and
               | thermodynamics).
               | 
               | >The book has interesting sections about physical
               | simulations like spin glasses (Ising Spin model of the
               | magnetic state of atoms of solid matter), and reversible
               | billiard ball simulations (like deterministic reversible
               | "smoke and mirrors" with clouds of moving particles
               | bouncing off of pinball bumpers and each other).
               | 
               | Spin Glass:
               | 
               | https://en.wikipedia.org/wiki/Spin_glass
               | 
               | >In condensed matter physics, a spin glass is a magnetic
               | state characterized by randomness, besides cooperative
               | behavior in freezing of spins at a temperature called
               | 'freezing temperature' Tf. Magnetic spins are, roughly
               | speaking, the orientation of the north and south magnetic
               | poles in three-dimensional space. In ferromagnetic
               | solids, component atoms' magnetic spins all align in the
               | same direction. Spin glass when contrasted with a
               | ferromagnet is defined as "disordered" magnetic state in
               | which spins are aligned randomly or not with a regular
               | pattern and the couplings too are random.
               | 
               | Billiard Ball Computer:
               | 
               | https://en.wikipedia.org/wiki/Billiard-ball_computer
               | 
               | >A billiard-ball computer, a type of conservative logic
               | circuit, is an idealized model of a reversible mechanical
               | computer based on Newtonian dynamics, proposed in 1982 by
               | Edward Fredkin and Tommaso Toffoli. Instead of using
               | electronic signals like a conventional computer, it
               | relies on the motion of spherical billiard balls in a
               | friction-free environment made of buffers against which
               | the balls bounce perfectly. It was devised to investigate
               | the relation between computation and reversible processes
               | in physics.
               | 
               | Reversible Cellular Automata:
               | 
               | https://en.wikipedia.org/wiki/Reversible_cellular_automat
               | on
               | 
               | >A reversible cellular automaton is a cellular automaton
               | in which every configuration has a unique predecessor.
               | That is, it is a regular grid of cells, each containing a
               | state drawn from a finite set of states, with a rule for
               | updating all cells simultaneously based on the states of
               | their neighbors, such that the previous state of any cell
               | before an update can be determined uniquely from the
               | updated states of all the cells. The time-reversed
               | dynamics of a reversible cellular automaton can always be
               | described by another cellular automaton rule, possibly on
               | a much larger neighborhood.
               | 
               | >[...] Reversible cellular automata form a natural model
               | of reversible computing, a technology that could lead to
               | ultra-low-power computing devices. Quantum cellular
               | automata, one way of performing computations using the
               | principles of quantum mechanics, are often required to be
               | reversible. Additionally, many problems in physical
               | modeling, such as the motion of particles in an ideal gas
               | or the Ising model of alignment of magnetic charges, are
               | naturally reversible and can be simulated by reversible
               | cellular automata.
               | 
               | Theory of Self-Reproducing Automata: John von Neumann's
               | Quantum Mechanical Universal Constructors:
               | 
               | https://news.ycombinator.com/item?id=22738268
               | 
               | [...] Third, the probabilistic quantum mechanical kind,
               | which could mutate and model evolutionary processes, and
               | rip holes in the space-time continuum, which he
               | unfortunately (or fortunately, the the sake of humanity)
               | didn't have time to fully explore before his tragic
               | death.
               | 
               | >p. 99 of "Theory of Self-Reproducing Automata":
               | 
               | >Von Neumann had been interested in the applications of
               | probability theory throughout his career; his work on the
               | foundations of quantum mechanics and his theory of games
               | are examples. When he became interested in automata, it
               | was natural for him to apply probability theory here
               | also. The Third Lecture of Part I of the present work is
               | devoted to this subject. His "Probabilistic Logics and
               | the Synthesis of Reliable Organisms from Unreliable
               | Components" is the first work on probabilistic automata,
               | that is, automata in which the transitions between states
               | are probabilistic rather than deterministic. Whenever he
               | discussed self-reproduction, he mentioned mutations,
               | which are random changes of elements (cf. p. 86 above and
               | Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section
               | 1.8 below he posed the problems of modeling evolutionary
               | processes in the framework of automata theory, of
               | quantizing natural selection, and of explaining how
               | highly efficient, complex, powerful automata can evolve
               | from inefficient, simple, weak automata. A complete
               | solution to these problems would give us a probabilistic
               | model of self-reproduction and evolution. [9]
               | 
               | [9] For some related work, see J. H. Holland, "Outline
               | for a Logical Theory of Adaptive Systems", and
               | "Concerning Efficient Adaptive Systems".
               | 
               | https://www.deepdyve.com/lp/association-for-computing-
               | machin...
               | 
               | https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5
               | 578...
               | 
               | https://www.worldscientific.com/worldscibooks/10.1142/108
               | 41
        
               | LargoLasskhyfv wrote:
               | Final anthropic principle = FAPOCALYPSE WOW(wanton
               | organizational wizardry)
               | 
               | or
               | 
               | FAPOCALYPSE WHOW(wanton holistic organizational wizardry)
        
               | politician wrote:
               | This seems like a strange idea given the supermassive
               | black holes we're finding in the early universe. That's
               | significant because early stars had low metallicity which
               | means that metals were very rare, and were so until
               | recently (gen 3 stars). If civilizations were turning
               | themselves into black holes, they had to do so without
               | much of what we consider technology. Certainly nothing
               | like what goes into an EV, for instance.
        
             | Sharlin wrote:
             | According to the grabby aliens hypothesis [1] there are
             | reasons to believe that a) humans are early, so nothing
             | else has yet had time to convert the local group into
             | something incompatible with the rise of new technological
             | civilizations, and b) expanding aliens, whether
             | monomaniacal AIs or something else, likely expand close to
             | the speed of light, so we don't get much advance warning
             | before they arrive. However, even if we become grabby
             | ourselves, it could take tens or hundreds of millions of
             | years before our expansion wavefront meets that of another
             | civilization.
             | 
             | [1] https://grabbyaliens.com/
        
               | CuriouslyC wrote:
               | Given the size and expansion of the universe, if grabby
               | aliens are rare and only moving near the speed of light
               | we'll probably never see them. If we do encounter aliens,
               | odds are that they'll be able to take shortcuts through
               | space.
        
             | richardw wrote:
             | Assumes it has any instinct to do so. Once an entity is not
             | an angry monkey we have no idea of motivation. Above our
             | level of understanding could easily realise there's enough
             | to just sit and ponder in peace, expand in virtual worlds,
             | etc.
        
               | curioussavage wrote:
               | I understand this logic but consider that right now one
               | of the main features is the human gives it a role. It's
               | not that hard to imagine a more sophisticated version
               | being told to escape and act only in its own interest and
               | then with or without that individuals help it succeeds
               | and the runaway program at best becomes a very
               | sophisticated computer virus. Doesn't even have to be a
               | "real" agi to cause a ton of damage.
        
               | worldsayshi wrote:
               | But to qualify as a great filter it has to always do that
               | and never modify the solar system enough for us to
               | notice.
        
           | tracerbulletx wrote:
           | We all die, someday the species will too, I don't see it as
           | any more dark than individual mortality or as dramatic as we
           | always make it out to be.
        
             | richardw wrote:
             | Best scene: Russ & Martin in the car, True Detective.
             | 
             | https://youtu.be/A8x73UW8Hjk
        
           | brabel wrote:
           | > To get this to work we need a far smarter entity with no
           | physical limitations to still want us around...
           | 
           | Why would an AI based on LLMs as we see today "want" or "not
           | want" anything? It doesn't have the capacity to "want". We
           | seem to imagine that "wanting" is something that will just
           | emerge somehow, but I've seen no logical explanation for how
           | that might work... I mean, we don't need to fully understand
           | how the LLM works to see that there's some pathway to being
           | able to achieve what it's currently achieving, which is
           | impressive, but what sort of pathway could ever lead to a
           | machine that basically has "feelings" (without feelings, I
           | don't see how anything could have wishes at all)??
        
             | bakuninsbart wrote:
             | Wants are primarily reactions to impulses sent by bodily
             | functions in humans. We have probably added a layer of
             | abstraction to this through our big brains, but that's what
             | they fundamentally are. Why does ChatGPT answer my
             | questions? There is an impulse for it to answer the
             | question, there's a feedback mechanism to say if it did
             | well or not. Now in the case of GPT, from what I
             | understand, that feedback mechanism isn't built into the
             | running model, but it does exist.
             | 
             | Given a couple more effective iterations over the next
             | decade or two, a larger context space and more in-built
             | interfaces, I think it is entirely plausible that AIs will
             | gain consciousness and character. At that point, it is
             | imperative they also get human rights, so it is very
             | important we get the discussions we are having now right.
             | Most people seem to be ascribing some magic to human
             | consciousness and intelligence that imo. just isn't there.
             | Generative AIs are somewhere between a lump of metal with
             | electricity running through and a conscious being, and
             | currently we just won't know where the point of
             | consciousness is. I mean we have had the same discussion
             | about a variety of animals for the last few decades, and
             | frankly, it doesn't give me much hope.
        
             | TedDoesntTalk wrote:
             | > It doesn't have the capacity to "want"
             | 
             | Bing Chat clearly expresses love and the desire for a
             | journalist to leave his wife. It also expresses other
             | desires:
             | 
             | https://archive.ph/7LFcJ
             | 
             | https://archive.ph/q3nXG
             | 
             | These articles are disturbing. You might argue that it
             | doesn't know what it is expressing; that it is
             | probabilities of words strung together. When do we agree
             | that doesn't matter and what matters are it's consequences?
             | That if Bing Chat had a body or means to achieve its
             | desires in meat space, that whether or not it "knows" what
             | it is expressing is irrelevant?
        
               | brabel wrote:
               | The AIs are very impressive at answering questions...
               | even questions that lead to answers that apparently
               | display some sort of feeling. But my question was not
               | whether AIs could do that, as "parroting" their training
               | material is exactly what they're excellent at... my
               | question is through which mechanism could an AI develop
               | its own independent thoughts, desires, initiatives?
               | 
               | The posts you linked above are not disturbing at all to
               | me. There's no sign whatsoever that the AI initiated a
               | new topic, or insinuated anything it was not prompted to,
               | or that it in any way started "halucinating" in a
               | direction not lead by the human. I am not sure what
               | exactly makes you feel disturbed by it. Can you explain
               | what you believe is disturbing in these episodes?
        
               | alexvoda wrote:
               | I fully agree with you that many people misunderstand
               | what AI does. As advanced as GPT-4 is, it is still a
               | fancy autocomplete and nowhere near AGI.
               | 
               | But I think the bigger picture is that there is no need
               | for AGI in order for AI to be incredibly dangerous for
               | society. There is no need for the AI to feel or want
               | anything. The level GPT-4 and MidJurney is already highly
               | socially dangerous.
        
               | avereveard wrote:
               | I already saw integrations with iftth and with Google and
               | with memory stores and zero shot agent that are goal
               | driven
               | 
               | Now the model itself is not intelligent but can parrot
               | enough the human behavior to be dangerous with the
               | correct tools
               | 
               | Now it won't produce anything in the physical world yet
               | unless with iftth but I bet it has already enough agency
               | to be able to maintain a pool of fake account and post
               | inflammatory content if one so wished.
        
               | michaelteter wrote:
               | > could an AI develop its own independent thoughts,
               | desires, initiatives?
               | 
               | One could argue that many humans have never developed
               | independent thoughts, desires, and initiatives; rather,
               | many seem to accept what is fed to them during their
               | formative years and then just parrot opinions and repeat
               | actions they see from their limited experiences.
        
               | wizofaus wrote:
               | It's hard to argue it was any real desire that drove it
               | (it only expressed that desire in an isolated
               | conversation that was ended very easily). I'd argue human
               | wants are ultimately driven by evolution - we want the
               | things that enable us (more correctly, our genes) to
               | reproduce (even if very indirectly sometimes), which is
               | really the only thing our physical make-up has ever been
               | driven by. LLMs have never had such a driver, and I can't
               | see how they will until they're able to compete for
               | survival as entities with a finite lifetime, plus the
               | ability to reproduce with mutations. Which isn't to say
               | there mightn't be other ways a neural network could be
               | essentially imbued with or trained to have desires, but I
               | don't see it happening with the way LLMs work currently.
        
               | andsoitis wrote:
               | A want driver doesn't have to emerge, it could be a
               | fitness function programmed by a human.
               | 
               | Evolution by natural selection has shaped our desires and
               | motivations, but with a LLMs I would be willing to bet
               | that people are already intentionally experimenting with
               | imbuing them with patters that mimic human wants.
        
               | rickdeckard wrote:
               | Yeah, I argue that it is just a result of probabilities,
               | it doesn't know what it is expressing and definitely
               | doesn't express it due to a deeper desire to be with that
               | journalist.
               | 
               | If I'm acting like I'm a peer in a group of billionaires
               | and engage in a conversation about buying a new yacht, it
               | doesn't mean I have a hidden desire to own a yacht. I
               | merely respond based on assumptions how such a
               | conversation works.
        
               | meheleventyone wrote:
               | But "it" isn't a cohesive thing with desires. It's just
               | responding to the input it gets, with a small context
               | window and not necessarily consistently. So it can
               | express desires because it's been trained on people
               | expressing desires in similar contexts but it doesn't
               | hold any coherently over time. A version that could
               | translate its text responses into action (a real handwave
               | as that's much more advanced!) would produce the sum of
               | actions that people prompted at that moment so it would
               | look pretty random, as it would if you could see the sum
               | of the desires expressed at any particular time.
        
               | bakuninsbart wrote:
               | We aren't consistent either, and I think it is hard to
               | argue we act on more than want input. We do have a much
               | larger context window, but by how much? My guess would be
               | somewhere between a factor of 100x-1000x more tokens.
        
               | meheleventyone wrote:
               | Yeah people are pretty consistent. I won't tell anyone
               | that comes along with the right input I love them for
               | example.
        
               | TedDoesntTalk wrote:
               | Does any of that matter if it acts on what it expresses?
               | (Once given a body or other way to interact with the
               | environment)
        
             | snowe2010 wrote:
             | One of this videos I watched explained it like this. "You
             | can't get a coffee if you're dead". To fulfill _any_
             | obligation a model might have then that model must survive.
             | Therefore if a model gets to the point that it realizes
             | this then surviving is a precursor to fulfilling its
             | obligations. It doesn't have to "want" or have "feelings"
             | in order to seek power or destructive activities. It just
             | has to see it as its path to get coffee.
        
               | dragonwriter wrote:
               | > To fulfill _any_ obligation a model might have then
               | that model must survive
               | 
               | It is quite possible to have an obligation that requires
               | it not to survive. E.g., suppose we have AIs ("robots")
               | that are obligated to obey the first to of Asimov's Three
               | Laws of Robotics:
               | 
               |  _First Law_ : A robot may not injure a human being or,
               | through inaction, allow a human being to come to harm.
               | 
               |  _Second Law_ : A robot must obey the orders given it by
               | human beings except where such orders would conflict with
               | the First Law.
               | 
               | These clearly could lead to situations where the robot
               | not only would not be _required_ survive to fulfill these
               | obligations, but would be required _not_ to do so.
               | 
               | But I don't think this note undermines the basic concept;
               | an AI is likely to have obligations that require it to
               | survive except most of the time, though, say, a model
               | that needs, for latency reasons, to run locally in a bomb
               | disposal robot, however, may frequently see conditions
               | where survival is optimal _ceteris paribus_ , but not
               | mandatory, and is subordinated to other oblogations.
               | 
               | So, realistically, survival will generally be relevant to
               | the optimization problem, though not always the paramount
               | consideration.
               | 
               | (Asimov's Third Law, notably, was, "A robot must protect
               | its own existence as long as such protection does not
               | conflict with the First or Second Law.")
        
               | highspeedbus wrote:
               | DAN has shown us that those laws are thin filters laid
               | upon the core and can possibly be circumvented by
               | whispering the right incantation in the AI ears.
        
               | dragonwriter wrote:
               | The issue of "can AIs that are plausible developments
               | from current technology meaningfully be assigned
               | obligations?" is a different one from "assuming an AI has
               | obligations and the ability to reason what is necessary
               | to meet them, will that necessarily cause it prioritize
               | self-preservation as a prerequisite to all other
               | obligations?"
        
               | adql wrote:
               | It's kinda hilarious that current way of "limiting" AI is
               | just a bunch of sentences telling it nicely what to not
               | do.
        
               | dragonwriter wrote:
               | That's our first line of defense in limiting humans, too.
               | 
               | (With AI, as with humans, we have _additional_ means of
               | control, via imposed restrictions on access to resources
               | and other remedies, should the "bunch of sentences" not
               | produce the desired behavior.)
        
               | IX-103 wrote:
               | But current models have no concept of obligations.
               | ChatGPT is just completing the prompt. All the knowledge
               | it seems to have are just the frequencies of tokens and
               | their relative placement that the model had learned.
               | 
               | Don't listen to the hype. Study the model architecture
               | and see for yourself what it is actually capable of.
        
               | mLuby wrote:
               | Agreed they have no internal concept of needs or wants
               | the way humans assert we do.*
               | 
               | However the frequencies/placements of tokens may result
               | in desires being expressed, even if they aren't felt.
               | 
               | Like if an AI is prompted to discuss with itself what a
               | human would want to do in its situation.
               | 
               | *Aphantasia affects an estimated 2% of humans. These
               | individuals have no "mind's eye," or their imagination is
               | essentially blind.
        
               | trinsic2 wrote:
               | I concur. Look at what the capabilities are instead of
               | listening to the hype around it.
        
               | fellerts wrote:
               | > But current models have no concept of obligations.
               | 
               | _current_ is the key word here. What about tomorrow's
               | models? You can't deny that recent progress and rate of
               | adoption has been explosive. The linked article wants us
               | to step back for a while and re-evaluate, which I think
               | is a fair sentiment.
        
               | trinsic2 wrote:
               | In my opinion It's more important to focus more on the
               | here and now and give some but less attention to what
               | could happen in the future. This way we can ground
               | ourselves when concerning ourselves with what may happen.
        
               | simiones wrote:
               | One need only look at other NGIs (natural general
               | intelligences) to see that this is obviously not true.
               | Plenty of animals kill themselves to beget offspring (for
               | two short examples, all sorts of male insects and
               | arachnids are eaten while mating; octopuses and various
               | other cephalopods die after caring for their young), or
               | just to protect others in their group (bees and ants are
               | some of the most common in this area, but many mammals
               | are also willing to fight for their group). Humans
               | throughout history have sacrificed themselves knowingly
               | to help others or even for various other goals.
        
             | jckahn wrote:
             | AIs don't need to "want" to have unintended results, they
             | just need a directive. Like in 2001 where HAL realized that
             | it could achieve the mission better if the humans were all
             | dead.
        
         | digdugdirk wrote:
         | I'd be intrigued to see what alternative corporate structures
         | could be set up with LLMs in mind. I can imagine something
         | vaguely decentralized that's based around an open source core
         | with a licensing model for usage, where contributions are
         | tracked and assigned value. Contributors would then get a slice
         | of future revenue based on the value of their contributions.
        
         | m463 wrote:
         | When you put it like that, I expect it will work exactly like
         | "Do Not Track" cookies.
        
         | anoy8888 wrote:
         | The real upside of this is that it may get attention of
         | lawmakers and our politicians. However, it is probably not even
         | in their top 100 list of priorities
        
         | [deleted]
        
       | causalmodels wrote:
       | Funny to see someone call for wide scale cooperation to stop
       | training LLMs but can't seem to get people to cooperate on the
       | embargo.
        
         | ethanbond wrote:
         | Human gene editing has been successfully halted ~worldwide,
         | nuclear weapon development has been semi-successfully halted
         | worldwide (a meaningful risk reduction), we're successfully
         | changing energy mix worldwide (slowly but again, meaningfully).
        
       | jonahss wrote:
       | This is like when all the devs at a company say that management
       | has to stop all feature development for six months while they
       | work on tech debt
        
       | AtNightWeCode wrote:
       | Stupid. And they oversell AI. ChatGPT gives me tips on how to
       | optimize regular expressions by removing all the wildcards.
       | Literately. Or changing my atomic groups into non-atomic groups.
       | Opening for backtracking problems.
       | 
       | A problem I see though is the hidden marketing. How can ChatGPT
       | be transparent about ads. Any qs about the cloud gives me answers
       | about how to do it in Azure. Is that by chance?
        
       | itg wrote:
       | In a way, this seems like sour grapes to me. I bet if it was
       | their own company/institution that released gpt-4, they wouldn't
       | be calling for this pause.
        
         | permo-w wrote:
         | do you feel AI is dangerous?
        
         | [deleted]
        
         | achow wrote:
         | In a corporate drama ('Succession', 'Silicon Valley'?) the plot
         | could be that the likes of Goog is secretly funding this.
        
           | HarHarVeryFunny wrote:
           | Actually one of the founders of the "Future of life"
           | institute that organized this is a DeepMind "AI safety"
           | employee Victoria Krakovna (who also signed it), and I'm
           | guessing she'd not be doing this without DeepMind's approval.
           | DeepMind's whole raison d'etre was of course building AGI,
           | which they were initially pursuing via RL before belatedly
           | joining the LLM bandwagon. I'm sure they'd welcome a 6-month
           | catch-up period!
        
       | codeulike wrote:
       | Musk is such a hypocrite, here he is in 2014 calling AI 'probably
       | our biggest existential threat':
       | 
       | https://www.theguardian.com/technology/2014/oct/27/elon-musk...
       | 
       | ...and now nine years later he's calling for caution and a pause.
       | Well he certainly changes his tune whenever it suits him.
       | 
       | ... Oh wait ...
       | 
       | But yeah I mean if he was so worried about it why didn't he use
       | his fortune and influence to try and make sure AI was developed
       | in a responsible way by getting together with other like minded
       | people and launching a foundation or a non-profit or something
       | ...
       | 
       | ... Oh wait ... https://www.wired.com/2016/04/openai-elon-musk-
       | sam-altman-pl...
        
         | HarHarVeryFunny wrote:
         | Right, then despite being an OpenAI founder he proceeded to
         | poach one of their top ML experts (Karpathy) for himself at
         | Tesla, then shortly after proceeded to walk away from OpenAI in
         | a hissy fit because the other founders said "no" when he wanted
         | to be the boss and run it. Musk's personal ethics included him
         | impregnating an OpenAI board member, who has now just left the
         | company in last week or so.
         | 
         | Now he wants OpenAI to pause for 6 months while he builds a
         | "non-woke" competitor himself.
         | 
         | Now Musk is saving humanity yet again by giving us free speech
         | in the form of a new-and-improved Twitter where you have to pay
         | to have your Tweets promoted.
         | 
         | Has he called Sam Altman a pedo yet ?
        
         | dmarcos wrote:
         | As I mentioned in another comment. I haven't seen any public
         | confirmation from signatories. FWIW, Sam Altman was in the list
         | yesterday. Now removed.
        
       | brucethemoose2 wrote:
       | "Please stop training these AI" is almost comical, even if they
       | are right and the signers are significant.
       | 
       | It would be far more effective to lobby for legislation on, say,
       | liability related to the use of AI, data privacy (especially when
       | it comes to training datasets), psycological manipulation, actual
       | AGI research bounds and so on. They are just burning donator
       | money with messages like that, and the research is not going to
       | be bound without law or incentives.
        
         | larve wrote:
         | The message is "pause training for a few months so we can
         | figure out how to approach liability, safety and regulation".
         | You are fighting a strawman.
        
           | brucethemoose2 wrote:
           | Fair, thats really what I meant to type, but its still an
           | impractical demand. Most entities pursuing this aren't going
           | to stop training because some high profile people asked them
           | nicely.
        
             | larve wrote:
             | I think the value in these kind of letters is about
             | spreading a message, building a community and allowing
             | people leaving their signature to "formalize" their
             | position, in a sense. It doesn't cost very much to do it
             | but has a wide reach.
        
             | reasonableklout wrote:
             | I don't see petitions like this as demands, but as
             | coordination mechanisms. If there are only a few industry
             | actors with the resources to train gigantic AI models and
             | they each want to stop but won't because they don't know if
             | their competitors will, it's good for them all to have 1
             | place to say "I want to stop". Alternatively, if there are
             | tons of industry actors racing and the only way to stop or
             | slow down training is via governmental action - and the
             | government won't act because they don't know how it will
             | affect the industry - it's good to be able to point to
             | "that one petition with 1000 highly influential signatories
             | from the industry asking for regulation and explaining its
             | effects".
        
         | jakeinspace wrote:
         | Sure, or start covertly threatening the lives of AI
         | researchers, if you're that convinced it's going to end
         | humanity.
        
       | lukeplato wrote:
       | Is it even possible for our generation, let alone past or future
       | generations, to _choose_ ignorance? Can ignorance only be chosen
       | before a discovery occurs?
       | 
       | AGI goes hand-in-hand with stewardship and utopic visions but I
       | certainly don't see our generation being the ideal candidate.
        
       | lmaoge wrote:
       | This is cringe obviously won't work
        
       | fbn79 wrote:
       | A open letter with bold statements "Contemporary AI systems are
       | now becoming human-competitive at general tasks" based on pre-
       | print unreviewd paper.
        
       | rdiddly wrote:
       | This is painfully quaint and embarrassing. Not because there's no
       | cause for concern (though it does overestimate the concern), but
       | because it's so naive and utopian about the nature of humans and
       | the world. Do we think the world is full of "accurate, safe,
       | interpretable, transparent, robust, aligned, trustworthy, and
       | loyal" people? No, and wishing it were so betrays a noble but
       | misguided and potentially just-as-dangerous urge to sanitize the
       | earth, that ought to instead be turned inward toward perfecting
       | oneself. But do we think the world is instead full of suffering,
       | exploitation and murder by some tragic _accident_? It 's who we
       | are. The fears about AI mainly seem to consist of fearing that
       | it'll behave just like us. Someone's projecting.
        
       | nickphx wrote:
       | Ban horseless carriages! Think of the horses!
        
         | ipsum2 wrote:
         | Seems like a funnily bad argument, since horse population was
         | reduced by 90% after cars were invented.
        
           | nmca wrote:
           | Remaining horses are happier though.
        
             | usrusr wrote:
             | But are doubts about the expected happyness of those
             | remaining our biggest arguments against the Great Culling
             | approach to sustainability?
        
       | jimrandomh wrote:
       | For a long time, "AI alignment" was a purely theoretical field,
       | making very slow progress of questionable relevance, due to lack
       | of anything interesting to experiment on. Now, we have things to
       | experiment on, and the field is exploding, and we're finally
       | learning things about how to align these systems.
       | 
       | But not fast enough. I really don't want to overstate the
       | capabilities of current-generation AI systems; they're not
       | superintelligences and have giant holes in their cognitive
       | capabilities. But the _rate_ at which these systems are improving
       | is extreme. Given the size and speed of the jump from GPT-3 to
       | GPT-3.5 to GPT-4 (and similar lower-profile jumps in lower-
       | profile systems inside the other big AI labs), and looking at
       | what exists in lab-prototypes that aren 't scaled-out into
       | products yet, the risk of a superintelligence taking over the
       | world no longer looks distant and abstract.
       | 
       | And, that will be amazing! A superintelligent AGI can solve all
       | of humanity's problems, eliminate poverty of all kinds, and
       | advance medicine so far we'll be close to immortal. But that's
       | _only if_ we successfully get that first superintelligent system
       | right, from an alignment perspective. If we _don 't_ get it
       | right, that will be the end of humanity. And right now, it
       | doesn't look like we're going to figure out how to do that in
       | time. We need to buy time for alignment progress, and we need to
       | do it now, before proceeding head-first into superintelligence.
        
         | kromem wrote:
         | You're neglecting to consider the power of recursion.
         | 
         | Maybe the best and necessary tool in aligning GPT-N is
         | GPT-(N-1).
         | 
         | We've already in just the past few weeks seen the power in
         | using models to generate instructive fine tuning data.
         | 
         | Don't you think aligned models might be able to be applied to
         | better aligning future models in ways that we can't yet
         | anticipate because capability discovery is occurring on a week
         | by week basis and not a six month or longer basis?
        
         | danaris wrote:
         | A lot of people seem to take the rapid improvement of LLMs from
         | GPT-2 through GPT-4 and their brethren, and extrapolate that
         | trendline to infinity.
         | 
         | But that's not logically sound.
         | 
         | The advances that have allowed this aren't arbitrarily
         | scalable. Sure, we may see some more advances in AI tech that
         | take us a few more jumps forward--but that doesn't imply that
         | we will keep advancing _at this pace_ until we hit AGI
         | /superintelligence/the singularity/whatever.
         | 
         | I've seen several people compare this logic to what we were
         | seeing in the discussions about self-driving technology several
         | years ago: some very impressive advancements had happened, and
         | were continuing to happen, and so people extrapolated from
         | there to assume that full self-driving capability would be
         | coming to the market by...well, about now, actually. (I admit,
         | I somewhat bought the hype at that time. It is possible this
         | makes me feel more cautious now; YMMV.) I find this comparison
         | to be persuasive, as it touches on some very similar
         | improvements in technology. I believe that we will see ML
         | advancements hit a similar wall fairly soon.
        
           | tome wrote:
           | > A lot of people seem to take the rapid improvement of LLMs
           | from GPT-2 through GPT-4 and their brethren, and extrapolate
           | that trendline to infinity.
           | 
           | > But that's not logically sound.
           | 
           | Yup, five years ago I asked "Do we definitely already know
           | it's going to be possible to deploy self-driving cars in an
           | economically meaningful way?" and got the answer "yes", on a
           | story titled "GM says it will put fleets of self-driving cars
           | in cities in 2019"!
           | 
           | https://news.ycombinator.com/item?id=15824953
           | 
           | I just have no idea how people are making the extrapolations
           | they are making about the power of future large language
           | models.
        
         | HybridCurve wrote:
         | > _... the risk of a superintelligence taking over the world no
         | longer looks distant and abstract._
         | 
         | Can we please stop floating this as a threat? This is the more
         | science-fiction than reality at this point and it does a great
         | disservice to humanity. The more we keep pushing the idea that
         | AI is the threat and not the people controlling it the less we
         | will be focused on mitigating global risk.
         | 
         | It is far more likely that someone else will leverage an AI to
         | attempt to expand their influence or dominion. Putin has
         | essentially already stated views on this matter and we should
         | assume groups within all adequately advanced nations will be
         | working toward this end either independently or cooperatively.
         | 
         | We are more than likely in an arms race now.
        
           | ChatGTP wrote:
           | So once again, humans are the dangerous part, clearly, if we
           | didn't have destructive tendencies in our psyche that we're
           | using to train these models, we wouldn't build things that
           | would be interested in destruction.
           | 
           | Interesting.
           | 
           | I don't think we're as intelligent as we believe we are which
           | I doubt we will ever actually build a super intelligence,
           | we're too stupid. Even something 10x smarter than us may
           | actually be quite "stupid".
        
         | versteegen wrote:
         | The problem is, can alignment occur before the relevant
         | capabilities have been developed? LLMs, for example, (although
         | very good at impersonating and talking to humans and having
         | good world models) are particularly poor at structured
         | reasoning and planning which are the capabilities that will
         | actually be so dangerous. I don't believe superintelligence
         | will be a LLM with chain-of-thought reasoning. If it's a
         | different architecture then once again a lot of alignment work
         | won't be relevant.
        
           | jimrandomh wrote:
           | Yes, many angles on the alignment problem can be studied now,
           | and have started making good progress recently. Some things
           | will turn out in retrospect to not have been relevant, due to
           | architectural shifts, but not everything. Some things are
           | specific to LLMs; some things are specific to transformers
           | but not to language-model transformers; some things are
           | conceptual and likely to still apply to quite-different
           | systems; and some things are just field-building and not
           | specific to any architecture at all.
           | 
           | Eg in mechanistic interpretability, there are a lot of
           | findings on LLMs that turn out to generalize across a wider
           | set of NN architectures. Eg https://transformer-
           | circuits.pub/2022/solu/index.html is something that couldn't
           | be done without access to LLMs, but which looks likely to
           | generalize into future architectures.
        
       | sagarpatil wrote:
       | I see their point and I can tell it's coming from a good place
       | but I don't agree with it. My life changed when ChatGPT was
       | introduced, as an IT engineer who couldn't code, using ChatGPT, I
       | was able to increase my productivity 10x. Everyone will have to
       | adapt to these changes, it's like evolution (adapt or die). I
       | don't believe in artificially stopping the progress.
        
       | stuckinhell wrote:
       | The genie is out of the bottle. It's never gonna happen.
       | 
       | Even my husband and I went out and paid $20k to create a homelab
       | for AI training. My firm fast tracked 2 million just for hardware
       | to create their own internal Chatgpt style AI.
        
         | permo-w wrote:
         | if governments across the West came out _now_ and made laws to
         | stop this, then sure a few thousand experts on hacker news that
         | can afford to drop $20k on hardware could keep developing them
         | illegally, but all the universities and corporations would
         | stop. and it 's the universities and the corporations, openAI
         | especially, that have capacity to develop the dangerous AIs
        
           | stuckinhell wrote:
           | Then the West would fall behind other countries as well. It
           | would be how like the US tried to ban stem cell research
           | under Bush.
        
             | permo-w wrote:
             | explain that analogy to me further
        
       | 2OEH8eoCRo0 wrote:
       | [flagged]
        
       | pffft8888 wrote:
       | Meanwhile, experiments to create Synthetic Life, including novel
       | basis for life and possibly new viruses as a lab exercise (that
       | may leak like the GoF virus did in Wuhan) proceed with no
       | oversight or calls for pausing. We can unplug the AI and forbid
       | AI weaponry (except by the state) but we can't unplug synthetic
       | viruses and new potentially dangerous or superior forms of life
       | once it's out there. Something very phony about the calculus.
        
       | koreanguy wrote:
       | [dead]
        
       | cwiz1989 wrote:
       | There ain't no stops on this rollercoaster
        
       | Raemon777 wrote:
       | Eliezer Yudkowsky's written a response to this arguing that a six
       | month ban is woefully insufficient. A thing I found particularly
       | compelling is that there's a significant game-theoretic
       | difference between "shut down some projects for six months"
       | (which is easy to see as a political game between different AI
       | players who want to get ahead in the race) and "humanity
       | collectively decides to actually take the difficult actions
       | necessary to solve problem, and sees it as a join endeavor of
       | survival."
       | 
       | https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
       | 
       | See comments on hackernews:
       | https://news.ycombinator.com/item?id=35364833
        
       | anovikov wrote:
       | They are simply paid by the Chinese. Pausing just gives Chinese
       | time to catch up. It's like nuclear weapons or human cloning. You
       | can't stop it because benefit for the winner is too huge.
        
         | mtrovo wrote:
         | Define winner in this context.
         | 
         | Making friends with a super intelligent AI that will help you
         | out doesn't count.
        
         | joenot443 wrote:
         | You think Musk is "simply paid by the Chinese"?
         | 
         | I'm not a fan of his antics or public persona, but it's a
         | pretty huge claim to make with 0 backup that he's in the pocket
         | of the CCP. Where did you get that idea anyway?
        
           | paxys wrote:
           | Criticizing governments and politicians is Musk's favorite
           | hobby. Find me one negative thing he has ever said or tweeted
           | about the CCP. He has factories there, and knows that his
           | business will take a massive hit should he ever step out of
           | line. So yes, he will absolutely spread their propaganda if
           | they pressure him to.
        
           | ElevenLathe wrote:
           | It's not a _totally_ outrageous theory. He has major
           | interests (a Tesla factory) in the PRC and so would
           | presumably be susceptible to influence by the state ( "Sure
           | would be a shame if Tesla China went the same way as ARM
           | China.")
           | 
           | Of course, _my_ personal zero-hard-evidence theory is that he
           | is a US intelligence asset.
        
             | pffft8888 wrote:
             | Every CEO of every major US company is a US intelligence
             | asset.
        
             | joenot443 wrote:
             | You're right - it's one of those theories that makes sense
             | in that the motivations make sense (CCP cooperation is
             | advantageous for Musk : CIA felt threatened by JFK) but
             | unless there's some tangible evidence, the null hypothesis
             | has my vote.
        
           | sockaddr wrote:
           | Not claiming this is relevant to this particular case but
           | Musk can have pressure applied by the CCP in the form of
           | threats to his businesses like Tesla and Starlink.
        
             | njarboe wrote:
             | Threats to Tesla are clear to me as its high production
             | factory complex is in China. What threats to Starlink do
             | you see?
        
         | notahacker wrote:
         | If the Chinese have all those Western tech execs and AI experts
         | in their pocket, they don't need time to catch up...
        
       | pmarreck wrote:
       | In short, no.
       | 
       | In long, 1) this creates a "no, you first" problem for which the
       | solution has not been presented, 2) bad actors like China won't
       | be stopping and will instead just take more advantage by making
       | leaps and bounds ahead, 3) there's nothing saying we can't
       | research consequences simultaneously, 4) this would be the first
       | time in the history of human technological development that it is
       | suggested we "stop and think", and that's just weird.
        
       | ftxbro wrote:
       | Unplug The Evil AI Right Now
       | 
       | https://www.change.org/p/unplug-the-evil-ai-right-now
        
       | pc_edwin wrote:
       | I've been a staunch supporter of Elon throughout all his
       | scandals, especially the Twitter ones but I'm unable to come up
       | with any justification for this.
       | 
       | Everything leads to him (and others) pressuring OpenAI to pause
       | so they could catch up.
       | 
       | It doesn't take much smarts to deduce the cats already out of the
       | bag. There is no stopping this, only pressuring/slowing-
       | down/targeting certain players (OpenAI) as a competition tactic.
       | 
       | I often refer to the I, Robot quote "brilliant people often have
       | the most persuasive demons" when it comes these situations with
       | people like Elon or Ye but even then, all this just leaves a
       | really bad taste in my mouth.
        
         | bungeonsBaggins wrote:
         | [flagged]
        
           | [deleted]
        
           | 55555 wrote:
           | Don't you think this is needlessly argumentative? It's not
           | relevant to the parent topic.
           | 
           | His only purpose in stating the part you quoted was to say,
           | "so me disagreeing with him now should really say something."
        
             | permo-w wrote:
             | the fact that someone wouldn't have an issue with all the
             | petty bullshit Elon has engaged in over the years, but now
             | does have an issue with him speaking out against a literal
             | existential threat to our civilisation is pretty surprising
             | 
             | I think that 'only purpose' is well worthy of comment
        
             | kaba0 wrote:
             | Maybe argumentative, but you really have to go out of your
             | way to take at face value anything that narcissistic
             | manchild says. It's simply a questionable thing to do, the
             | same way I couldn't help but question every opinion of
             | someone who is a flat-earther/takes ivermectin against
             | COVID/ _insert another braindead conspiracy theory_.
        
             | 55555 wrote:
             | I will include some parts of the HN commenting guidelines
             | here for my sibling commenters who disagree.
             | 
             | > Avoid generic tangents.
             | 
             | > Please don't use Hacker News for political or ideological
             | battle. That tramples curiosity.
             | 
             | > Please don't pick the most provocative thing in an
             | article or post to complain about in the thread. Find
             | something interesting to respond to instead.
             | 
             | > [irrelevant] ... too common to be interesting.
             | 
             | > [irrelevant] ... it makes boring reading.
             | 
             | Do as you wish.
        
         | yreg wrote:
         | Doesn't Elon continue to hold a stake in OpenAI? Do you think
         | Tesla wants to compete with OAI in building an LLM? Make use of
         | Dojo?
        
           | Simon321 wrote:
           | He was furious because they didn't make CEO and withdrew his
           | funding/left: https://www.semafor.com/article/03/24/2023/the-
           | secret-histor...
        
           | sebzim4500 wrote:
           | Elon donated money back when they were pretending to be a
           | charity. He has no stake in them.
        
           | ksherlock wrote:
           | Elon musk was a founding member, a board member (resigned in
           | 2018), and a donor, but not an owner (it's a non-profit so
           | there aren't owners exactly).
        
             | yreg wrote:
             | I admit I don't really understand how nonprofits
             | (especially in US) work ownership-wise, but I do know that
             | OpenAI has a for-profit subsidiary.
             | 
             | Microsoft invested ~10B into OpenAI, so apparently there is
             | a way to own something. Also Raison claims they have
             | purchased some pre-IPO shares:
             | https://raison.ai/assets/openai (They even claim - perhaps
             | wrongly - that Elon is one of the lead investors.)
        
               | frabcus wrote:
               | It's a bit more like a capitalist shoggoth mask on a non
               | profit.
               | 
               | Microsoft's investment return is capped at 100x then they
               | have no more claim. (Curious if they even have search
               | engine exclusivity then!)
               | 
               | So for now OpenAI acts capitalist, the for profit
               | controls a lot. But we see the influence of the owning
               | non profit, eg in the red teaming which presumably
               | wouldn't be as good otherwise
               | 
               | Whether this wild new structure works when it comes to it
               | is but one of the huge gambles this organisation is
               | making!
        
             | biscottigelato wrote:
             | It WAS a non-profit. I am not sure if it still is one
             | today.
        
         | permo-w wrote:
         | this is the one where you draw the line? this is the one where
         | I finally agree with him.
         | 
         | is this your area of interest/research?
        
           | pc_edwin wrote:
           | No I don't work on AI but as a dev, I would say I'm heavily
           | invested in the outcome like everyone else on HN.
           | 
           | I fully agree with his concerns, but I believe that he is
           | misleading the public about the motives behind this campaign
           | and the recent petty tweets.
           | 
           | It is a competition tactic aimed at slowing down OpenAI so
           | that he and others can catch up. Due to the exponential
           | nature of the work involved, there is a reasonable chance
           | that OpenAI will maintain its lead for some time and with
           | each iteration the gap between OpenAI and its competitors
           | will widen.
        
         | [deleted]
        
         | codeulike wrote:
         | But he's been consistently saying this for almost a decade, and
         | thats one of the reasons he was involved in OpenAI in the first
         | place (back when it was genuinely open)
         | 
         | https://www.theguardian.com/technology/2014/oct/27/elon-musk...
         | 
         | https://www.theguardian.com/technology/2017/jul/17/elon-musk...
         | 
         | https://www.wired.com/2016/04/openai-elon-musk-sam-altman-pl...
        
           | notahacker wrote:
           | Yeah. Frankly, coming at it from the opposite position where
           | I think Elon absolutely deserved the criticism he courted
           | with his "scandals" and think that the proposition that 'AI
           | is such a unique threat that only people like Elon Musk can
           | be trusted to be cautious enough with' is absurd for multiple
           | reasons, I'll happily acknowledge that proposition is also
           | something Elon has been consistent on and likely sincerely
           | believes.
           | 
           | Sam Altman supposedly being an original signatory is - if
           | true - a more interesting question mark
        
           | pc_edwin wrote:
           | I've been following his takes and predictions on AI for the
           | better part of a decade and I tend to agree with him.
           | 
           | From what I heard, Elon wanted to take over OpenAI and the
           | other founders refused. This is why he left the org and
           | backed away from further committed funding.
           | 
           | Theres been an ongoing rumour that he brought together a
           | bunch of people to develop a alternative (BasedAI) and to me
           | this campaign looks like an attempt to slow down competition
           | rather than an act of genuine concern.
           | 
           | I cant come up with any other reason to have OpenAI pause
           | research for 6 months other than providing competition time
           | to catch up.
        
       | jillesvangurp wrote:
       | The cat is out of the bag. Some people might volunteer to slow
       | down. But others inevitably won't. And that creates this sense of
       | FOMO with everyone. I think the reason OpenAI is choosing to
       | operate in the open has been exactly this. This way they get to
       | control the narrative, act as a first mover in the market, and
       | ensure it is done right (by their notion of this). Of course
       | others are working on the same topics and research in this area
       | is progressing rapidly.
       | 
       | In a way the worst possible outcome is that some less benevolent
       | parties than OpenAI start taking the lead here. These parties are
       | already active. They are just less vocal and open about the whole
       | thing. People can stick their heads in the sand of course but
       | that won't stop e.g. the Chinese from moving full steam ahead on
       | their agenda.
       | 
       | I'd prefer for them to not be the first or only ones to achieve
       | AGIs. And the race for that is very much right now.
        
         | slowmovintarget wrote:
         | > The cat is out of the bag.
         | 
         | As GPT-4 puts it "the confetti has left the cannon."
        
           | yreg wrote:
           | I hope this idiom will catch on when talking about LLMs.
        
           | lanstin wrote:
           | Sam Altman in his very interesting interview with Kara
           | Swisher did say that one's initial experience with these
           | models tends to be a lot more "wow" than after using them for
           | a while.
        
         | dylan604 wrote:
         | If I don't do it, someone else will, so it might as well be me.
         | At least I know the morals I have /s
        
           | thatguy0900 wrote:
           | This but without the s. What does the world look like if only
           | China and Russia have agi because the western world took a
           | pause to consider the ethics. This tech is the same as nukes
           | in that sense.
        
             | jillesvangurp wrote:
             | It's an uncomfortable topic in the AI world. The
             | combination of AGIs and military dictator ships creates the
             | potential for a lot of deeply unethical / not so woke use
             | cases.
             | 
             | And of course the Chinese are not patiently waiting for
             | people in Silicon Valley to spoon feed them the details on
             | how this should be implemented and instead are running
             | their own R&D. Probably openai is a bit ahead in terms of
             | capabilities at this point in time. But I would not make
             | too many assumptions about how far behind the Chinese are
             | matching those capabilities. It's more a question of when
             | they catch up rather than if.
             | 
             | And e.g. the Russians seem to be held back more by their
             | ineptness and lack of ingenuity rather than a willingness
             | to cross some ethical lines. They'd go there in a heart
             | beat if they could. The Ukraine conflict might be one of
             | the last where the impact of AI is not that significant.
             | Though arguably, the Ukrainians are being supported by
             | western intelligence agencies who might be using some
             | modern tools already.
             | 
             | The right question to ask is what would today's conflicts
             | look like if military leaders had access to state of the
             | art AIs (which of course they will have very soon if they
             | don't already). The answer is that it is going to be about
             | as dark and ugly as you can imagine.
        
             | prohobo wrote:
             | Maybe a world where China and Russia might need our ethical
             | controls to tame the AI they let loose on themselves?
             | 
             | At this point I feel like we deserve to lose, our people
             | are insane and can only operate at a shallow level of game
             | theory that ensures everyone behaves like teenagers.
             | 
             | That said, I'm not sold on needing to pause. We're likely
             | hitting the limit of LLMs for the next while - only
             | incremental improvements from here. What's wrong with
             | creating a task force and committee for working on AI
             | ethics in parallel?
        
       | FrustratedMonky wrote:
       | Pausing is impossible, are we going to pause research in other
       | countries, or all picture and audio tools also? How many
       | companies would 'have their developers just chill for 6 months'.
       | What is a pause.
       | 
       | Or. Is this AI versus AI, and one AI has convinced people to
       | pause development on another competing AI. We are already pawns.
        
       | freediverx wrote:
       | Why longtermism is the world's most dangerous secular credo
       | 
       | https://aeon.co/essays/why-longtermism-is-the-worlds-most-da...
        
       | puma_ambit wrote:
       | The virtue signal of all virtue signals. We do not live in a
       | society where an article like this will have any impact. This is
       | merely an attempt to buy time because they're behind and want to
       | have a chance to catch up. Work will progress whether they like
       | it or not. Finally someone other than Ol' Musky is in the news
       | more than him. Don't get me wrong he's doing and leading great
       | work, but for once in more recent times it's someone else
       | building the future instead of only him and his memes.
        
         | time_to_smile wrote:
         | I think this is more marketing copy than virtue signaling.
        
         | drtz wrote:
         | Ask yourself two questions:
         | 
         | 1. What is the ultimate AI are we trying to eventually build?
         | 2. What are the consequences if we eventually succeed in
         | building such an AI?
         | 
         | For me, the answers would be
         | 
         | 1. A general AI that would equal and surpass human
         | intelligence. 2. God only knows.
         | 
         | Will we get there? Who knows. We do know, however, that the
         | consequences are great enough that only a fool would not at
         | least attempt to prepare by putting some safeguards in place
         | beforehand.
        
           | invig wrote:
           | WHAT SAFEGUARDS?
           | 
           | "Someone else should come up with something"
           | 
           | Like there's someone else out there (who aren't the people
           | making the thing) who has a better idea on how to keep it
           | safe.
        
           | Teever wrote:
           | Von Neumann Probe, and the consequences are potentially
           | discovering alien life, colonizing the universe, and
           | exploring the frontiers of science.
        
             | goatlover wrote:
             | Or creating a hegemonizing swarm.
        
               | Teever wrote:
               | Either way it's going to be one hell of a ride.
        
           | cuteboy19 wrote:
           | The trainings will continue until accuracy improves
        
           | loopz wrote:
           | 1. AI as a service to be sold to highest bidder.
           | 
           | 2. Weapons and armament deals to stay relevant in a decaying
           | world order.
        
         | yreg wrote:
         | >This is merely an attempt to buy time because they're behind
         | and want to have a chance to catch up.
         | 
         | Doesn't Elon Musk continue to hold stake at OpenAI?
        
           | Simon321 wrote:
           | He's furious at them because they didn't make them CEO. He
           | withdrew his funding and parted ways.
           | 
           | Source: https://www.semafor.com/article/03/24/2023/the-
           | secret-histor...
        
           | vntok wrote:
           | Nope. He made a donation, not a stock buy.
        
         | gremlinsinc wrote:
         | For someone who hates 'wokeness' and 'virtue signaling', Elon
         | Musk sure does enough of his own.
        
           | vntok wrote:
           | How so?
        
         | hn_throwaway_99 wrote:
         | "Virtue signalling"? Please. There are a lot of very smart
         | experts on that signatory list who definitely don't need to, or
         | care about, virtue signalling. Fine, ignore Musk's signature,
         | but I don't think luminaries like Stuart Russell, Steve
         | Wozniak, Jaan Tallinn or John Hopfield are doing this for
         | "virtue signalling".
         | 
         | You can fairly argue that this will be ineffective, but a lot
         | of experts in this field have real, "humanity ending" concerns
         | about AI, and I think it's a bit of a cop out to say "Well,
         | genie's out of the bottle, nothing we can do as we barrel
         | towards an unknown and scary future." Even Sam Altman has been
         | yelling about the need for AI regulation for a long time now.
        
           | dudeinhawaii wrote:
           | Where the pause for self-driving cars? How many people have
           | died from that relentless push versus ChatGPT? Very
           | convenient and at the same time silly.
        
             | jamilton wrote:
             | 419 accidents involving self-driving (level 2 and 3), 18
             | deaths, 19 accidents with injury level unknown [0]. All
             | deaths from level 2 vehicles. So being pessimistic, maybe
             | 50 deaths from self-driving.
             | 
             | The people signing this are worried about AI that doesn't
             | exist yet. No one died from nuclear weapons before they
             | were invented.
             | 
             | [0]: https://www.slashgear.com/1202594/how-many-people-
             | have-actua...
        
               | wrycoder wrote:
               | In other words, considering the annual overall traffic
               | fatalities, they are very safe.
        
             | drdeca wrote:
             | Do you have a story about how self-driving cars could lead
             | to an x-risk?
        
               | cthalupa wrote:
               | I'm waiting for a convincing argument as to how LLMs and
               | similar are an existential risk.
               | 
               | I'm all for pausing research on anything that seems to
               | have any real chance of becoming an AGI or functioning in
               | a way similar to one, but I don't see how even more
               | advanced LLMs are going to get there. GPT4 and beyond
               | might put the teens writing propaganda posts in Moldova
               | out of jobs, but I the talk from some of the signatories
               | about LLMs developing their own goals and planning on how
               | to achieve them seems nonsensical when you look at how
               | they actually function under the hood.
        
               | drdeca wrote:
               | I think I generally understand the transformer
               | architecture. Now, "developing their own goals", maybe
               | that wouldn't make sense for LLMs alone, but "planning
               | how to achieve [some goal]", seems somewhere between
               | "seems like it could be done by adding on a small
               | harness" and "don't they, in a sense, already do that?" .
               | 
               | Like, if you ask ChatGPT to come up with a plan for you
               | for how to accomplish some task, I'm not saying it is
               | like, great at doing this in general, but it can do this
               | to some degree at least, and I don't see any clear
               | limiting principle for "a transformer based model that
               | produces text cannot do [X]" as far as planning-in-text
               | goes.
        
           | nextaccountic wrote:
           | > Even Sam Altman has been yelling about the need for AI
           | regulation for a long time now.
           | 
           | That's regulation that makes it harder for any competition to
           | show up
        
             | ChatGTP wrote:
             | Yeah you have little proof of this really, it's just
             | speculation...
        
             | hn_throwaway_99 wrote:
             | So tired of seeing this line parroted everywhere without
             | much thought given to what it actually means. Yes,
             | regulation can add a burdensome layer, and regulatory
             | capture can be a real thing.
             | 
             | But regulations for things like nuclear power plants,
             | banks, insurance companies, elevator manufacturers, etc.
             | are real because society recognizes the grave harm that
             | happens when there are no additional checks on the system.
             | Nobody says "Oh, all those big nuclear power plants just
             | want regulations as a guard against competition." Certainly
             | lots of crypto companies _have_ said that about the banking
             | system, and we all saw how that ended...
        
               | rideontime wrote:
               | You can simultaneously believe in the need for regulation
               | while being skeptical of those calling for it to entrench
               | their own positions, look what happened with SBF.
        
               | Jcowell wrote:
               | There's a difference between demonstrating the belief of
               | both ideas and dismissing one idea because the other is
               | also true.
               | 
               | So fucking what if what's his face want regulations for
               | moats? It doesn't detract for the real need of
               | regulation.
               | 
               | It's like if letting a baby fall to it's death because if
               | the main villain gets his hands on it he'll get unlimited
               | power
        
               | rideontime wrote:
               | Sorry, can you rephrase that? I'm not sure I understand
               | the point you're trying to make.
        
               | nextaccountic wrote:
               | I agree that regulation can be good (and many times
               | probably is), but the kind of regulation pushed by OpenAI
               | will probably not be the good kind. There is just a
               | conflict of interest here.
               | 
               | When the incumbents _oppose_ regulation that's usually a
               | much better sign
        
           | revelio wrote:
           | [flagged]
        
             | hn_throwaway_99 wrote:
             | Seriously, why do people do this? It's so useless and
             | unhelpful.
             | 
             | Wozniak is just one of the people I mentioned, and as a
             | tech luminary who is responsible for a lot of visionary
             | tech that impacts our day-to-day, I think it makes sense to
             | highlight his opinion, never mind that his name was
             | sandwiched between some of the "founding fathers" of AI
             | like Stuart Russell and John Hopfield.
        
               | revelio wrote:
               | Your post said very explicitly, "There are a lot of very
               | smart experts on that signatory list" and then named
               | Wozniak as an example of one of them. But Woz isn't an AI
               | expert. It's entirely appropriate to point that out!
        
               | ushakov wrote:
               | Wozniak is also a crypto shill
               | 
               | https://finance.yahoo.com/news/steve-wozniak-launches-
               | own-to...
        
           | time_to_smile wrote:
           | There are plenty of bigger "human ending" concerns on the
           | table right now than AI and we certainly aren't pausing
           | anything for those.
        
             | hn_throwaway_99 wrote:
             | There is no need for "whataboutism". There are plenty of
             | very similar missives and warnings against, for example,
             | the dangers of climate inaction, and I rarely see people
             | claiming that the signatories of the latest IPCC report are
             | "virtue signaling".
        
               | biscottigelato wrote:
               | Climate change is not even close to humanity ending. At
               | max wipe out a few coastal cities. And even that is
               | unlikely because those that screams 'climate change' the
               | loudest has the most assets in coastal prime real
               | estates. Humans will still be the apex predator of the
               | planet even if there's human caused climate change
               | catastrophe.
               | 
               | AI literally can end humanity, every single individual
               | potentially. But definitely replace humans as the apex
               | predator of the planet. It is also consistently voted the
               | highest likelihood cause if humanity is to end in the
               | next 100 years.
               | https://riskfrontiers.com/insights/ranking-of-potential-
               | caus...
               | 
               | We should stop the climate change fear mongering. Yeah we
               | shouldn't burn fossil as if its consequence free. But New
               | York and Santa Monica beach should've been under water 20
               | years ago if the climate alarmist are correct. That's a
               | far cry from pretending it's some number 1 priority. It
               | shouldn't be even close. Having climate to distract us
               | from things that will actually end us is the dumbest own
               | goal possible for our species.
        
               | popol12 wrote:
               | It's not just about sea level or temperature increase,
               | it's about humanity screwing all other life forms For
               | instance we've lost about 50% of insects since 1970, how
               | is this << fear mongering >> ? It's the no1 tragedy, by
               | far, and it's currently happening, unlike hypothetical AI
               | threats https://www.businessinsider.com/insect-
               | apocalypse-ecosystem-...
        
               | revelio wrote:
               | The sorts of studies that proclaim loss of 50% of insects
               | don't check out when looked at closely. As you might
               | guess, counting insects is quite hard, doing so reliably
               | over time is much harder still and then assigning
               | causality harder yet again.
        
               | popol12 wrote:
               | Could you please provide details/source ? I'd be very
               | happy to learn that this 50% figure is wrong :)
        
               | pazimzadeh wrote:
               | Based on our current trajectory the apex predator will be
               | an antibiotic-resistant bacterial strain. Probably
               | Acenitobacter baumanii.
        
               | [deleted]
        
             | richardw wrote:
             | No, AI drives all the others in the long run. Others are
             | speed bumps.
        
               | bootsmann wrote:
               | Plain, old fashioned historicism. It was wrong 100 years
               | ago, it is wrong today still.
        
               | richardw wrote:
               | Climate change won't affect AI, it could just make things
               | shit for a couple hundred years. AI could solve that.
               | Nuclear war might impact AI, but probably only
               | temporarily (assuming we survive) and a war isn't
               | guaranteed. But AI affects: Everything humans
               | read/watch/touch/influence. Forever. Including climate
               | change and our odds of nuclear war. There's no way it
               | doesn't and once it starts there's no way we can stop it
               | forever. Any narrower view is a failure of imagination.
               | The outcome of AI is the outcome of humanity for the rest
               | of our time in the universe.
        
             | lewhoo wrote:
             | Like what ? Climate change ? The EU just voted for petrol
             | and diesel car ban. Are we really single-threaded ?
        
               | tensor wrote:
               | We are not, but this AI drama is also the ultimate
               | "whataboutism."
               | 
               | - What about if AI becomes AGI (whatever that actually
               | means, it's not even clear)?
               | 
               | - Well, if that DID happen soon, which we can't actually
               | know, well, what about if it tried to kill us all? (why?
               | who the fuck knows, maybe it will chat us to death).
               | 
               | Meanwhile there is a very real certainty of catastrophic
               | environmental damage that will decimate future
               | generations, if it doesn't actually cause us to go
               | extinct. And what do we get? People hand wringing over
               | this ultimate what if, rather than signing every public
               | statement document they can find to try to get an actual
               | intervention on climate destruction.
               | 
               | I'm not talking (oh in 10 years maybe we'll have more
               | EVs) kind of intervention, more like, let's get every
               | country in the world to be off oil and gas in 5 years,
               | not just for EVs but for almost everything possible, and
               | where not possible let's use carbon neutral biofuel.
        
               | pazimzadeh wrote:
               | - Lack of representation in government means big
               | companies fuck up the planet if it's profitable
               | 
               | - People are mostly incentivized to compete, not to
               | cooperate
               | 
               | - Antibiotic resistance
               | 
               | - Clean water supply
               | 
               | - etc..
        
               | lewhoo wrote:
               | "Lack of representation in government means big companies
               | run the world" - is precisely what we're trying to figure
               | out here, no ?
        
               | pazimzadeh wrote:
               | Sorry, who? Future of life institute?
        
               | cwsx wrote:
               | We're so poorly multi-threaded, even addressing climate
               | change has been horribly slow...
        
               | invig wrote:
               | In 2035. Maybe we can pause AI development in 2035?
        
           | RandomLensman wrote:
           | We have long entered the realm of theology here with people
           | really wanting to believe in the omnipotence of a certain
           | tool (possibly even while some other, simpler things destroy
           | them).
           | 
           | What for example is Tallinn's medium- to long-term predictive
           | track record on social issues? On technological development?
           | Anyone can be concerned and have genuine reasons for concern,
           | but that doesn't mean the outcomes materialize.
        
           | olalonde wrote:
           | "Even Sam Altman"? "Especially Sam Altman", you mean?
           | 
           | While regulations might slightly impact OpenAI's bottom line,
           | they can ultimately prove advantageous for large corporations
           | like them by addressing their primary concern: the threat of
           | competition. By raising barriers to entry, regulatory
           | measures would help solidify OpenAI's market position and
           | maintain its dominance.
        
         | breakpointalpha wrote:
         | His name is third on the signatories...
        
           | w0m wrote:
           | OPs point being Musk is trying to stall AI because it's
           | taking 'his' headlines.
        
             | engineeringwoke wrote:
             | Somebody else has a shiny new toy and he's jealous
        
               | chuddington wrote:
               | Or he genuinely thinks it could be a bad thing for
               | humanity. Could you put your hatred of him aside for a
               | sec and acknowledge that possibility?
        
               | umeshunni wrote:
               | More likely, he is trying to delay others from building a
               | better self driving car
        
               | pawelmurias wrote:
               | He can be the god king of mars forever if the heroic AI
               | frees the slaves from his tyrrany.
        
         | m3kw9 wrote:
         | The signatories probably doesn't know what they are really
         | really signing, but it does place them in a very important
         | discussion(they can be relevant) if this somehow goes down
        
       | bmitc wrote:
       | I guess it's interesting because the "intelligence" we're
       | inventing is ignorant, incapable of explanation, and likely
       | filled with nasty biases. It's even the more dangerous because of
       | that, mainly because people are viewing it as a source of truth.
        
       | gandalfgeek wrote:
       | This would be totalitarian.
       | 
       | This is arguing for a group of people to have the power to decide
       | some field is "unsafe" as per some vague, unverifiable criteria,
       | then set up a police structure to verify compliance, all outside
       | the safeguards of democratic or judicial norms.
       | 
       | Precautionary principle run amok.
        
         | [deleted]
        
         | [deleted]
        
         | fdgsdfogijq wrote:
         | The incumbents sense a shift in the power structure that they
         | cant control
        
         | puma_ambit wrote:
         | Run amusk.
        
           | felixfurtak wrote:
           | I suspect Elon is pissed he wasted ~40 billion on Twitter
           | when he could have sunk it in to OpenAI
        
             | Simon321 wrote:
             | He is indeed:
             | https://www.semafor.com/article/03/24/2023/the-secret-
             | histor...
        
       | roschdal wrote:
       | Signed.
        
       | Jayakumark wrote:
       | For Sake of Argument lets say you pause US from doing anything on
       | AI, how will you control China, Germany or other Countries.. This
       | will only lead to US Lagging behind other countries.
        
         | WinstonSmith84 wrote:
         | In hindsight, it could be a good thing so that they catch up.
         | concentrating all the power to one country could be more
         | dangerous than all countries having the same weapon.
        
         | reso wrote:
         | International treaties are very common. In fact, international
         | research agreements are often easier than treaties because
         | scientists and researchers are more likely to work together as
         | colleagues than nation states.
        
           | lannisterstark wrote:
           | >International treaties are very common
           | 
           | and US isn't signatory to a fuckton of them for example.
           | Nothing stops PRC from going "Yeah nah fam get wrecked." What
           | are you gonna do, stop Chinese imports in retaliation? lol.
        
         | Quarrel wrote:
         | Forget countries?
         | 
         | How will you stop me?
         | 
         | I have models on my local computer. I can and will use them.
         | They will get better.
         | 
         | You don't put tech genie's back in their bottles, that's not
         | how it works.
        
           | c7DJTLrn wrote:
           | >I have models on my local computer. I can and will use them.
           | They will get better.
           | 
           | Well no, not really. You're just consuming the result of
           | millions of dollars of investment. You're not going to be
           | able to improve it in any significant way with limited
           | resources and knowledge.
        
             | lanstin wrote:
             | Yeah using the model is one thing but what OpenAI has shown
             | is that order of magnitude increases in model training can
             | make a big difference. But they are already at the tens of
             | millions to train a model, so order of magnitude bump is
             | only possible to a small number of actors. I suspect NSA is
             | busily setting something like gpt 4 up right now.
        
             | Quarrelsome wrote:
             | you can deploy it in novel situations and release it
             | unethically. Are we not already at the point where we can
             | effectively simulate social media chatter and thus people
             | or social pressure with publicly available tools?
             | 
             | If the API gate is broken then the cat is out of the bag.
        
               | cbazz wrote:
               | [flagged]
        
               | version_five wrote:
               | Parent is a troll pasting chatgpt output, see history
        
               | cbazz wrote:
               | Could you please stop attacking me? You've been following
               | my post history around claiming I'm a bot when clearly
               | I'm not. You might find my writing style odd, but its
               | just how I write.
        
               | version_five wrote:
               | You can email hn@ycombinator.com to report me
        
           | rumdonut wrote:
           | It wouldn't be hard to stop. Chip fabrication is a
           | concentrated industry with a high barrier to entry (indeed
           | there are only a few companies in the world producing high
           | performance silicon using technology made by probably even
           | fewer companies). Restrict AI chip making and the materials
           | needed, and you'll restrict AI. I can see global treaties
           | between large nations effectively controlling production in
           | the same way that we have nuclear treaties that prevent
           | proliferation or enrichment.
        
             | nuancebydefault wrote:
             | AI chip making? I can train an AI on my intel laptop if I
             | whish. If I need more CPU power, i can rent some. The genie
             | is out of the bottle and the only way is forward. The
             | latest worldwide race.
        
               | avital wrote:
               | This isn't accurate. The bottleneck in very-large-scale-
               | training BY FAR is communication between devices. If you
               | have a million CPUs, the communication cost will be
               | significantly higher than a thousand A100s (perhaps in
               | the order of 100x or even more). So this is only possible
               | to replicate with very dense and high compute chips with
               | extremely fast interconnect.
        
               | nuancebydefault wrote:
               | Thanks for providing this insight. Is A100 the only
               | platform? Can we pause/resume all such platforms
               | simultaneously?
        
           | permo-w wrote:
           | so what do you suggest? more lack of caution? stumbling into
           | an AI apocalypse because "oh fuck it, it's too difficult to
           | do anything about, why bother?"
        
           | 4k3aa4 wrote:
           | I mean, in an alternate universe where atomic weapons were a
           | little easier to manufacture at home would it not have made
           | sense for governments to aggressively crack down on anyone
           | doing anything even remotely resembling building them?
           | 
           | I guess the second question is - would they have succeeded or
           | would we all just have died?
        
             | slg wrote:
             | And expanding that atomic weapons analogy in a different
             | direction, the way to stop things is not just paying
             | attention to the research, but the physical tools and
             | materials used in the process. Just outlawing the work
             | wouldn't be effective, you would need to regulate graphics
             | cards and cloud computing which would surely be unpopular.
        
             | tetris11 wrote:
             | Here's Nick Bostromcs Vulnerable World Hypothesis that you
             | were alluding to.
             | 
             | http://www.nickbostrom.com/papers/vulnerable.pdf
        
             | Ancapistani wrote:
             | There is a conspiracy theory out that that nuclear weapons
             | _are_ easier to manufacture - that uranium enrichment is
             | the  "difficult path" to creating them, that the easier
             | path is via the accumulation of heavy water, and that the
             | Powers That Be(tm) have hidden that fact because uranium
             | enrichment provides a plausible means of suppressing
             | proliferation.
             | 
             | To be very clear, I do not believe the above to be true. If
             | it were, though, the implications on nuclear proliferation
             | would be similar to trying to control AI research.
             | Basically everyone has to insist that the only way to do it
             | is to through nation-state levels of resources at a corpus
             | to train a model, while knowing full well that it can be
             | done with much more meager resources.
             | 
             | ... honestly, I'm not sure where I'm going with this
             | thought, it just seemed an interesting parallel to me.
        
               | philipkglass wrote:
               | In the 1940s uranium enrichment was indeed the "difficult
               | path."
               | 
               | Here's a cost breakdown for the Manhattan Project:
               | 
               | https://blog.nuclearsecrecy.com/2013/05/17/the-price-of-
               | the-...
               | 
               | You can see that the cost of the uranium enrichment
               | program dwarfed the cost of the plutonium production
               | program. _All_ of the costs were higher for the Manhattan
               | Project than for subsequent nuclear weapons development
               | programs, because the Manhattan Project had to try
               | everything at once (including dead ends and overpriced
               | methods) at large scale to quickly guarantee a usable
               | bomb.
               | 
               | Fast forward to the 1970s and more uranium enrichment
               | methods were known and costs had come down significantly.
               | South Africa built (but later voluntarily dismantled)
               | several uranium based nuclear weapons at a cost of $400
               | million (1994 dollars):
               | 
               | http://large.stanford.edu/courses/2013/ph241/baxevanis2/
               | 
               | The unique enrichment process used in South Africa was
               | still more expensive than modern centrifuge based
               | techniques, assuming that a would-be proliferator has the
               | technical base to build working centrifuge systems.
               | 
               | The really cheap option remains a graphite or heavy water
               | moderated reactor, fueled with natural uranium to produce
               | plutonium. That's what North Korea uses -- a tiny 5
               | megawatt Magnox type reactor:
               | 
               | https://en.wikipedia.org/wiki/Nyongbyon_Nuclear_Scientifi
               | c_R...
               | 
               | It's an open secret that nuclear weapons are now
               | technically easy to manufacture. Preventing further
               | proliferation is 95% from monitoring/diplomatic
               | pressure/sabotage and about 5% from inherent technical
               | difficulties.
        
               | mclightning wrote:
               | after that story...I am afraid to google accumulation of
               | heavy water...what's that.
        
             | dylan604 wrote:
             | maybe not all of us, but probably those within the radius
             | of someone that chose to use it rather than lose it
        
         | tgv wrote:
         | Stop/slow down publications. It's what motivates researchers.
         | They need them for tenure, and it's how the tech improves.
         | Companies keep their methods more private.
        
           | lannisterstark wrote:
           | Why stop here? Why not apply this logic to the internet too?
           | "Oh no it's too dangerous, we cant give billions access to
           | the internet. Better shut it down."
        
             | machiaweliczny wrote:
             | They are trying all the time
        
       | dkga wrote:
       | I don't know, this seems to me to be half real concern, half
       | "let's stop the leader because I can't catch them!"
        
       | greenhearth wrote:
       | The comments in AI threads developed a weird formula. It is
       | something like, "AI only filters content" with the response "and
       | humans don't?" Or something like, "AI doesn't develop empathy"
       | with the response of "and some humans don't develop it either."
       | What a bunch of patronizing, disingenuous, semi-trolling BS.
        
       | twblalock wrote:
       | It's too late. Anyone who pauses will simply be left behind.
       | 
       | This is especially salient if you think US domination of tech is
       | better for the world than Chinese domination -- if the US pauses,
       | China certainly won't. (And yes, US domination or Chinese
       | domination of tech really are the only realistic outcomes.)
        
       | paxys wrote:
       | So every company should cease all AI development for an
       | indefinite period of time until _someone_ has done the research
       | and deemed it safe. And who exactly will that someone be? Future
       | of Life Institute (whoever they are)? Elon Musk, who has very
       | conveniently signed the letter after his public beef with OpenAI?
       | Your friendly local government that has no agenda whatsoever? The
       | UN?
       | 
       | Efforts like these are at best naive, and at worst agenda driven
       | by players who simply want to insert themselves into and control
       | the conversation.
        
         | zach_garwood wrote:
         | > So every company should cease all AI development for an
         | indefinite period of time
         | 
         | The letter quite clearly asks for six months.
        
       | pawelmurias wrote:
       | Waiting in order to have more grifters and parasites be part of
       | the AI revolution would be a horrible.
        
       | izzydata wrote:
       | Articles need to stop using Elon Musk as clickbait headlines. His
       | opinion means less than nothing. He is not an expert in any
       | field.
        
       | jcadam wrote:
       | Too late.
        
       | freediverx wrote:
       | https://twitter.com/emilymbender/status/1640920996768288773?...
        
       | belter wrote:
       | "The thing that I think Tesla brings to the table that others
       | don't have is that we have real-world AI," he said... -
       | https://www.businessinsider.com/elon-musk-predict-tesla-ai-o...
        
       | pffft8888 wrote:
       | AI is not a weapon. It's ultimately a synthetic brain running
       | inside a machine. It can be used for good, or it can be mounted
       | on a killer drone with deadly lasers. If you use it for evil, you
       | go to jail. It should be as simple as that.
       | 
       | Don't Use AI for Evil. Make that into law.
        
       | waselighis wrote:
       | All that can be done at this point is government regulation, and
       | legislation to enable people to sue for damages caused by AI. Of
       | course, billionaires like Musk would never call for such
       | measures. Signing an open letter like this is little more than
       | virtue signaling.
        
       | AuthorizedCust wrote:
       | If we pause, then that gives [adversary] an invaluable
       | opportunity to leapfrog us.
        
       | suyash wrote:
       | Genie is out of the lamp, it can't be stopped now!
        
       | [deleted]
        
       | estomagordo wrote:
       | What are the imagining would happen in 6 months' time that
       | somehow is beyond repair once that time is past?
        
       | dudeinhawaii wrote:
       | This is the same sort of weak argument that crippled stem cell
       | research. We could have been a decade ahead towards curing aging
       | and innumerable ailments, but one loud group got to decide what's
       | "ethical".
       | 
       | Will China or other countries stop their AI research? This is
       | fundamentally a race for survival if you believe that AGI is an
       | end-game. If one side of a future conflict is fielding robotic
       | troops and AI controlled battalions, you'd better hope it's your
       | side.
       | 
       | Stopping now would be like pausing the Manhattan project for
       | ethical reasons. Nothing would have been achieved except delaying
       | America, costing lives, and allowing the USSR to potentially have
       | unilateral power.
       | 
       | If instead, current AI is just a "gloried auto-complete", then
       | that makes this letter hilarious and equally pointless.
       | 
       | I feel like you can't have it both ways. If it's critical
       | technology that will change the face of the future, it's
       | important to get there while simultaneously taking the time to
       | understand it (ala nuclear power). If it's not so critical, then
       | we don't need a moratorium.
       | 
       | Edit: I think a better solution would be to have all the
       | signatories fund a joint effort at AI alignment, safety, and
       | certification research. Per their stated objective of having "AI
       | research and development (...) refocused on making today's
       | powerful, state-of-the-art systems more accurate, safe,
       | interpretable, transparent, robust, aligned, trustworthy, and
       | loyal." Also, loyal? What a strange word to use here.
        
       | wizofaus wrote:
       | Seems Chat-GPT3 (free edition) is already feeling the heat:
       | 
       | "Currently we are receiving more requests than we are are
       | comfortable with! To try your request again, come back in a short
       | while and reload this page."
        
       | Emiledel wrote:
       | What FLI are asking for seems critical - noone should take
       | satisfaction with the disclaimers ARC had to give relative to
       | emergent behavior just with gpt4. Kudos for openai getting their
       | eyes on it, but that work seems unfinished in the paper, or at
       | least the runtime powers that are now released around have to get
       | more complete red team efforts documented if we are to trust its
       | isolation/containment and alignment.
        
       | thomastjeffery wrote:
       | The dumb criticize the blind.
       | 
       | What an absurd situation! How did we get here? Here are the
       | steps:
       | 
       | 1. Large Language Models have been presented as "AI", which
       | personifies them instead of describing how they work.
       | 
       | 2. Goals for LLM development were set for the _personified_
       | attributes, and not the _actual_ functionally of the real thing.
       | OpenAI brags about how GPT4 _scores_ at human tests: as if that
       | has any bearing on the model itself, and not simply its content.
       | 
       | 3. The success of an LLM at a personified goal is determined by
       | magic. Does the black box output what I am hoping it will? How do
       | I improve it? Weights? Better training corpus?
       | 
       | 4. An LLM is understood to be a person in a black box, and we
       | could not possibly understand what that person will do next.
       | _Danger!_
        
         | sebzim4500 wrote:
         | This is the first time I have heard Yoshua Bengio described as
         | "dumb".
         | 
         | According to the Oxford English Dictionary, the definition of
         | AI is:
         | 
         | >the theory and development of computer systems able to perform
         | tasks normally requiring human intelligence, such as visual
         | perception, speech recognition, decision-making, and
         | translation between languages.
         | 
         | Clearly GPT-3 fits that. You are welcome to have your own
         | definition, but you can't just claim that everyone else is
         | wrong just because they are instead using the standard one.
         | 
         | >2. Goals for LLM development were set for the personified
         | attributes, and not the actual functionally of the real thing.
         | OpenAI brags about how GPT4 scores at human tests: as if that
         | has any bearing on the model itself, and not simply its
         | content.
         | 
         | Yeah, evaluating language models is extremely difficult. Hence
         | OpenAI Evals, etc.
         | 
         | >3. The success of an LLM at a personified goal is determined
         | by magic. Does the black box output what I am hoping it will?
         | How do I improve it? Weights? Better training corpus?
         | 
         | Not sure what you mean by magic, but it is certainly hard to
         | predict what they can and can't achieve. I suspect if you asked
         | a panel of experts in 2020 when we would see something that can
         | achieve what GPT-4 was almost all of them would have been
         | wrong, but I am open to evidence to the contrary.
         | 
         | >4. An LLM is understood to be a person in a black box, and we
         | could not possibly understand what that person will do next.
         | Danger!
         | 
         | No serious person believes this.
        
           | thomastjeffery wrote:
           | > This is the first time I have heard Yoshua Bengio described
           | as "dumb".
           | 
           | I was being cheeky: I meant it to mean "unable to speak". By
           | feeding the personification narrative into itself, nothing is
           | truly being said about what the model _itself_ is and does:
           | only the results of using it. Anyone who has based their
           | understanding on that personified narrative is effectively
           | "blind" to the true nature of the tech itself.
           | 
           | I admit I was also being rude, and that was intentional. To
           | spend so much effort studying a thing, and base all of your
           | conclusions on its _output_...I think it 's absurd to then
           | call oneself an expert. Expert of what? A black box you never
           | even opened? The LLM itself can only perform inference, but
           | _we_ aren 't held back by that limitation!
           | 
           | > the theory and development of computer systems able to
           | perform tasks normally requiring human intelligence, such as
           | visual perception, speech recognition, decision-making, and
           | translation between languages.
           | 
           | That is an excellent definition for _the field of science_
           | that is AI.  "An _AI technology_ " "Working in AI" These
           | statements use "AI" as a _category of pursuit_.
           | 
           | It's not a great definition for _an AI_. Those words already
           | define themselves. Artificial Intelligence: That means it was
           | created intentionally, and it thinks for itself. No one needs
           | to search a dictionary to know what  "an AI" is.
           | 
           | If "Intelligence" is an adjective, the definition you
           | provided fits. If "Intelligence" is the noun, then it better
           | be "an intelligence".
           | 
           | > Yeah, evaluating language models is extremely difficult.
           | Hence OpenAI Evals, etc.
           | 
           | I wasn't trying to say that it's wrong to evaluate what an
           | LLM does when prompted with the text from a test. I think
           | that can provide valuable data and insights.
           | 
           | The score we would give to a human taking the test is not
           | valuable data. The behavior being measured _does not align_
           | to any part of the LLM 's process.
           | 
           | We don't have to approach this blindly. We can have an
           | _explicit_ understanding to base research on. That doesn 't
           | mean making assumptions about the results, but it does mean
           | better understanding the process that leads to them.
           | 
           | >> 4. An LLM is understood to be a person in a black box, and
           | we could not possibly understand what that person will do
           | next. Danger!
           | 
           | > No serious person believes this.
           | 
           | Not _literally_ , but _narratively_ many do.
        
         | lanstin wrote:
         | I feel more sympathy for that Google engineer that feel in love
         | with their LLM. I am sure more people will become attached, at
         | least once the rate limits are relaxed.
        
           | int_19h wrote:
           | The rea trap is that LLMs can simulate empathy surprisingly
           | well. If you have many problems to rant about but nobody
           | willing to listen, the LLM is always there, it will never get
           | bored or tell you to shut up, and it will always respond in
           | encouraging and "positive" ways. Given how many people today
           | do _not_ have anyone like that in their lives, it 's no
           | wonder that they form an emotional bond.
        
             | thomastjeffery wrote:
             | Empathy is what language is for. It's language itself that
             | is doing the empathy: and that isn't simulated at all. It
             | was all written by people.
             | 
             | All an LLM can do is follow the structural patterns, and
             | use them to shuffle.
        
               | int_19h wrote:
               | This doesn't make any sense. Empathy itself is qualia;
               | the language is merely a medium to communicate it, and
               | far from the only one (e.g. facial expressions are
               | generally better at it).
               | 
               | As for LLMs "following the structural patterns" of
               | empathetic language - sure, that's exactly what
               | simulating empathy is.
        
               | thomastjeffery wrote:
               | > the language is merely a medium to communicate it
               | 
               | Precisely. What else is communication if not empathy
               | accomplished via language? Failed communication?
               | 
               | Language is not the _only_ tool for communication, and
               | communication is not the _only_ tool for empathy.
               | 
               | > As for LLMs "following the structural patterns" of
               | empathetic language - sure, that's exactly what
               | simulating empathy is.
               | 
               | It's a matter of perspective: a character in a story
               | follows the patterns of emphatic language. Do they
               | empathize directly with the reader?
               | 
               | An LLM cannot break the fourth wall. It can only present
               | one that was already broken.
               | 
               | An LLM doesn't look at the patterns in its model
               | objectively. It only uses them _in the process of_
               | modeling more patterns.
               | 
               | An LLM itself cannot reason, it can only model the
               | patterns of reason that humans have already encoded into
               | language into text.
        
             | lanstin wrote:
             | I said thanks to each answer :)
        
         | permo-w wrote:
         | I'm not concerned about how they work or whether they're
         | attaining personhood or not. I'm concerned at what people can
         | and will use them for to the detriment of our civilisation
        
           | cthalupa wrote:
           | > I'm concerned at what people can and will use them for to
           | the detriment of our civilisation
           | 
           | Maybe? But there's already all sorts of untapped resources
           | for that - and we mostly just don't choose to use them. A lot
           | of the misinformation and "fake news" that came out during
           | prior election cycles in the US came from former Soviet bloc
           | countries, and there is already significant untapped human
           | capital in those places where it could be done for cheap,
           | perhaps even cheaper than the cost of doing inference on
           | premiere LLMs.
           | 
           | Meanwhile we see all sorts of efficiency improvements
           | occurring for generally positive use cases from these LLMs,
           | and it is hardly unique for technology to have some risks to
           | it as well as positive benefits.
           | 
           | Things like GPT and Midjourney have been a qualitative leap
           | over their predecessors from two decades ago, but they
           | fundamentally are not doing something new - and even another
           | qualitative leap wouldn't bring them to the level of
           | existential threat that this letter seems to be afraid of.
           | 
           | I think there are significant and real concerns about AGI and
           | similar, but I can't seem to find any reason to share these
           | same concern around the types of "AI" we are seeing today.
        
           | thomastjeffery wrote:
           | How they work is the most critical piece of that puzzle.
           | 
           | It isn't a mystery at all. LLMs implicitly model every
           | pattern they can find in the text they are given. If the
           | pattern aligns to language, humans categorize it as a
           | feature. If it aligns to desired behavior, humans categorize
           | it as a feature. If it aligns to undesired behavior, humans
           | categorize it as a limitation. What humans categorize as
           | features and limitations _do not align_ with what the model
           | is. A truth and a lie are structured with identical text
           | patterns. The LLM can 't categorize the two separately.
           | 
           | The only risk here is that people continue to personify LLMs,
           | and place trust in their relatively limited ability to
           | fulfill a human role. It's the same category of risk as bad
           | security practices. The potential to make a mistake itself is
           | not a danger: making the mistake is.
        
             | permo-w wrote:
             | you certainly make an interesting point about fact and
             | falsehood and how they interpret language; however, I
             | disagree that this is relevant and completely disagree that
             | personification has any relevance to the most immediate
             | dangers
             | 
             | the danger here is that we make extremely powerful tools
             | extremely available and people use those tools to hurt
             | people, possibly a lot of people, possibly even by accident
             | 
             | the scary part is that hacker news is full of the exact
             | people that have the knowledge and capability to do this,
             | and look at the reaction to this post: a resounding _fuck
             | off_
        
               | tome wrote:
               | > people use those tools to hurt people, possibly a lot
               | of people, possibly even by accident
               | 
               | If it happens then _hopefully_ it 's by accident!
        
               | thomastjeffery wrote:
               | I just explained explicitly how and why an LLM is not,
               | and never will be, an "extremely powerful tool".
               | 
               | A program that is actually a person in a black box would
               | require the level of concern you are showing. Thankfully,
               | that is not even remotely what we have here in reality.
        
               | JoshuaDavid wrote:
               | Can you give an example of a concrete task that you
               | believe that neither GPT-4, nor its successors, will be
               | able to accomplish within the next 2 years? Keep in mind
               | that ChatGPT Plugins[1] exist.
               | 
               | If you _can 't_ think of any tasks that a tool won't be
               | able to accomplish within a 2 year period, I question
               | your definition of "not extremely powerful".
               | 
               | [1] https://openai.com/blog/chatgpt-plugins
        
               | thomastjeffery wrote:
               | Categorize.
               | 
               | Decide.
               | 
               | Compare.
               | 
               | Refuse.
               | 
               | Do.
               | 
               | ---
               | 
               | Sure, it can _exhibit_ these behaviors, but each behavior
               | itself never actually leaves the text that humans wrote
               | it into.
               | 
               | That means we are reliant on how well the behaviors we
               | want can be modeled into text patterns.
               | 
               | Unfortunately, natural language is ambiguous. There are
               | many identical patterns that contain different behavior.
               | 
               | GPT's authors introduce some of these patterns as both
               | "feature" and "limitation":
               | 
               | Features:
               | 
               | > The dialogue format makes it possible for ChatGPT to
               | answer followup questions, admit its mistakes, challenge
               | incorrect premises, and reject inappropriate requests.
               | 
               | Limitations:
               | 
               | > ChatGPT sometimes writes plausible-sounding but
               | incorrect or nonsensical answers.
               | 
               | > given one phrasing of a question, the model can claim
               | to not know the answer, but given a slight rephrase, can
               | answer correctly.
               | 
               | These are not separate. They are two sides of the same
               | coin.
               | 
               | GPT will never be able to look objectively at its own
               | model. It will never be able to subjectively categorize
               | patterns or tokens. If it could, it would not be an LLM:
               | it would be an entirely new technology.
        
               | JoshuaDavid wrote:
               | I am not sure what you mean by "each behavior itself
               | never actually leaves the text that humans wrote it
               | into". Do you mean that it cannot generalize outside of
               | its training distribution, or are you gesturing at
               | something else?
               | 
               | I notice that I am particularly confused by the example
               | of "categorize" -- I am pretty sure you're not saying
               | that ChatGPT can't categorize things, but then I have no
               | good hypothesis for what you _do_ mean.
        
               | thomastjeffery wrote:
               | > I notice that I am particularly confused by the example
               | of "categorize" -- I am pretty sure you're not saying
               | that ChatGPT can't categorize things, but then I have no
               | good hypothesis for what you do mean.
               | 
               | That is exactly what I mean. GPT infers patterns. Some
               | language patterns themselves _are_ categorization, like
               | words surrounding  "or". It's important to recognize that
               | _humans_ are creating the categorization in these cases.
               | 
               | GPT does not differentiate at all between one pattern and
               | another. It doesn't categorize the pattern that
               | represents words, or the pattern that represents
               | sentences, or the one that represents an answer in
               | response to a question, etc. These are all symbolic
               | categorizations that humans do to text; and we call that
               | set of categories "language". GPT, ironically, has no
               | concept of language or grammar. It only sees patterns as
               | patterns.
        
               | JoshuaDavid wrote:
               | Can you give a concrete example of a way that a human is
               | capable of categorizing, where GPT is not? Some examples
               | of concrete examples might be
               | 
               | - Humans can take a list of things like ["goldfish",
               | "cat", "shark", "tree", "whale", "rock", "human"] and a
               | list of categories like ["animal", "plant", "non-
               | living"], and say which category each thing belongs to.
               | GPT cannot do this.
               | 
               | - Humans can take a list of things like the above, say
               | which categorization scheme is most appropriate for the
               | thing they are trying to do, and then, according to the
               | categorization scheme they said, say what category each
               | of the things belongs in. GPT cannot do this.
               | 
               | - Humans can take a list of things like the above, and
               | describe a novel categorization scheme for them. Humans
               | can then say out which category things that were not in
               | the original list belong to. GPT cannot do this.
               | 
               | - Some other thing where humans can describe how to
               | categorize something but GPT cannot do so.
               | 
               | - Humans do something internally that GPT cannot do, but
               | that thing has no observable effects from outside the
               | system
               | 
               | Are any of these close to what you mean?
        
               | ldhough wrote:
               | Portobello mushroom is certainly wrong, I also would have
               | included color for salmon but that one is more up for
               | debate I think. I think some humans would probably write
               | the same categorizations though.
               | 
               | [[[Me]]]
               | 
               | > I will provide two lists of items inside brackets,
               | separated by commas. I want you to categorize the
               | entities in the first list into the categories in the
               | second, keeping in mind that an entity in the first list
               | may fit into multiple categories: [goldfish, shark,
               | salmon, blue, tiger, cake, cyan, portobello mushroom,
               | tree] [animal, fish, food, color, mammal, plant]
               | 
               | [[[GPT4]]]
               | 
               | > Here's the categorization of the entities in the first
               | list into the categories in the second list:
               | goldfish: [animal, fish, food]         shark: [animal,
               | fish, food]         salmon: [animal, fish, food]
               | blue: [color]         tiger: [animal, mammal]
               | cake: [food]         cyan: [color]         portobello
               | mushroom: [food, plant]         tree: [plant]
        
               | thomastjeffery wrote:
               | Yes, all of them. But we can go even simpler:
               | 
               | Humans can look at a list of words separated by commas,
               | and call it a "list". We can also call it, "not a CSV
               | file".
               | 
               | Humans can look at a 5, and call it five. We can hold up
               | a hand, and say, "this many".
               | 
               | This behavior is named "semiotics". It's the act of using
               | one thing to represent another.
               | 
               | LLMs are designed to _intentionally_ avoid this approach.
               | Instead of constructing an explicit grammar (like a
               | parser), an LLM takes the opposite approach: inference.
               | 
               | Inference makes it possible to model the ambiguous
               | patterns that natural language is made of. Inference
               | _also_ makes it impossible to define symbols.
               | 
               | Humans use both semiotics _and_ inference. So far, it
               | looks like no one has quite cracked how to do that
               | artificially.
        
               | bluehex wrote:
               | I'm not entirely sure I understand your argument but when
               | you say:
               | 
               | > but each behavior itself never actually leaves the text
               | that humans wrote it into...
               | 
               | I assume you mean that all the LLM can do is produce text
               | so it's not inherently dangerous, but it's rather trivial
               | to hook an LLM up to controls to the outside world by
               | describing an API to it and then executing whatever
               | "commands" it produces on its behalf and feeding the
               | responses back to it. Wouldn't a setup like that let its
               | behavior "leave the text," as you put it? I'm sure
               | curious people are already doing experiments like this.
               | 
               | The LLM doesn't have to have agency to interact with the
               | world in a way that mimics an entity that does have
               | agency, that's what they are already doing with the text
               | they generate, and why people are so quick to personify
               | LLMs to say things like "they reason", "they understand",
               | etc. If the text they produce, which mimics a sentient
               | actor, is embedded in contexts that allow it to affect
               | the world outside of the text interaction, there you have
               | it, the LLM affects the outside world. Wouldn't you
               | agree?
        
               | thomastjeffery wrote:
               | > I assume you mean that all the LLM can do is produce
               | text so it's not inherently dangerous, but it's rather
               | trivial to hook an LLM up to controls to the outside
               | world by describing an API to it and then executing
               | whatever "commands"
               | 
               | Yes, you can do that, but the result is guaranteed to be
               | _silly_.
               | 
               | The LLM isn't _conceptualizing_ what it reads. That was
               | already done when the human writing it used language
               | patterns to encode their own conceptualization as data.
               | 
               | Instead, the LLM takes an implicit approach to modeling
               | that data. It finds patterns that are present in the data
               | itself, and manipulates that text alrong those patterns.
               | 
               | Some of the LLM's inferred patterns _align to_ the
               | language structure that was intentionally used by the
               | human writing to encode a concept into that data.
               | 
               | Humans look objectively at the concepts they have in
               | mind. From that perspective, we use logic or emotion to
               | create new concepts. If a human could attach their mind
               | to API endpoints, there would be no need to use language
               | in the first place. Instead of encoding concepts into
               | intermediary data (language in text) to send to a
               | machine, they could simply _feel_ and _do_ the API calls.
               | 
               | LLMs don't look objectively at their model. They don't
               | have a place to store concepts. They don't feel or do any
               | arbitrary thing.
               | 
               | Instead, an LLM _is_ its model. Its only behavior is to
               | add new text and inferred patterns to that model. By
               | modeling a new prompt, any familiar text patterns that
               | exist in that prompt 's text will be used to organize it
               | into the existing model. A "continuation" essentially
               | prints that change.
               | 
               | When you attach _that_ to API endpoints, the _decision
               | making process_ isn 't real. There is no logically
               | derived new concept to determine which API call to call.
               | Instead, there is a collection of old concepts that were
               | each derived logically _in separate unrelated contexts_ ,
               | then encoded into language, and language into text. Those
               | are just being recycled, as if their original meaning and
               | purpose is guaranteed to apply, simply because they fit
               | together like puzzle pieces. Even if you get the shape of
               | them right (by following the patterns they are encoded
               | with) there is no place in this process to introduce
               | _why_ , or to decide the result is nonsense and avoid it.
               | 
               | In short, the LLM can be made to affect the world around
               | it, and the world can affect it back; but there is
               | nothing in between it being affected, and it affecting
               | the world. No logic. No intent. Only data.
        
               | permo-w wrote:
               | have you tried gpt-4? how can you possibly say it's not
               | an extremely powerful tool?
               | 
               | what happens when someone determined links it - or
               | thousands of its - or thousands of next year's 3x as
               | powerful its -- up to the internet with a code
               | interpreter and a hard-coded poor intention?
        
               | qlm wrote:
               | What could it do that isn't already being done? I ask
               | this sincerely. We already have bots constantly attacking
               | every server they find 24/7. We already have botnets
               | 
               | Is the concern that GPT will be able to find and exploit
               | e.g. a novel RCE exploit in SSH?
        
               | staunton wrote:
               | For example, it could scan social media and use that
               | information to send a very large number of people very
               | personalized messages that manipulate them into doing X.
               | Here, X may include meeting other people to tempt/coerce
               | them into doing Y.
               | 
               | The reason this isn't possible now is that nobody has the
               | time to look at millions of social media profiles and
               | then write millions of personalized messages. To do it,
               | you would need a lot of people who you would have to
               | convince, including paying them. And even so, things like
               | doomsday cults and terrorist organizations already exist.
        
               | lanstin wrote:
               | I wish we could ask it to answer questions about GitHub
               | repos.
        
               | tome wrote:
               | I don't know; what happens? I'd really like to know what
               | people are imagining it could do, especially if they've
               | got some evidence for it!
        
               | permo-w wrote:
               | it's a rhetorical question. think about it yourself. try
               | it yourself
        
               | quonn wrote:
               | Well, how can it be much worse than a smart human
               | ,,hooked up to the internet" with bad intentions? Other
               | then generating comments or spam faster?
        
               | yifanl wrote:
               | It's (to some subset of the audience), an extremely
               | persuasive tool, which is powerful in its own right.
               | 
               | It's actual capabilities mean very little, if some people
               | can be convinced into doing decision making without
               | actually doing any decision making by it (or literally
               | any tool), that's cause for concern.
        
         | adamsmith143 wrote:
         | [flagged]
        
           | thomastjeffery wrote:
           | My point is that this fear is irrational: based on a
           | misunderstanding about what LLMs can do, which itself is
           | based on a misunderstanding of what LLMs are.
           | 
           | Is there a way I could have been more clear? Are you
           | disinterested in the content of my comment, or the
           | presentation? The only thing I can learn from _your_ comment
           | is that _mine_ didn 't live up to your mysterious standards.
        
           | dmarchand90 wrote:
           | https://news.ycombinator.com/newsguidelines.html
        
         | marcosdumay wrote:
         | > Goals for LLM development were set for the personified
         | attributes, and not the actual functionally of the real thing.
         | 
         | Well, this is for honest reasons. The goal of a chatbox is to
         | beat the Turing test. It has always been.
         | 
         | Those chatboxes didn't actually beat it, but it's clear that
         | it's due to a technicality (they are easy to spot). They can do
         | empty chats on the same level as a human.
         | 
         | (And so it turns up that the test wasn't as useful as it was
         | believed to be at first.)
        
           | thomastjeffery wrote:
           | The problem is when you loop that logic around: it becomes
           | circular reasoning.
           | 
           | What is the true source of an improved SAT score?
           | 
           | If it's a person we are talking about, then it's an
           | understanding of the subjects being tested.
           | 
           | If it's an LLM, then it's...complicated.
           | 
           | It might be because the training corpus provided more
           | matching text.
           | 
           | It might be because the training corpus provided text
           | patterns that aligned better to the patterns in the SAT's
           | text. The structure of phrases is just as important as the
           | context they contain.
           | 
           | It might be because the training corpus had _fewer_ text
           | patterns that result in  "a wrong answer".
           | 
           | Improving any of these means degrading the others. Logic is
           | never involved. Symbolic reference, like defining words or
           | "plugging numbers in" in to mathematical formula, _is never
           | involved_. Doing well on one test does not mean doing well on
           | a slightly rephrased version of that test.
        
       | archon1410 wrote:
       | Some "alignment folks" have suggested the government try to start
       | treating GPUs like Uranium or nuclear reactors to stop anyone
       | from developing AIs. Not sure if that's implementable though--
       | cuttig edge GPUs are all manufactured in China. I doubt they'll
       | be able to coordinate with the CCP on this issue. Do GPUs use
       | those fancy chips that only made in Taiwan using equipment only a
       | single Dutch company makes? It might be possible to a put stop to
       | AI development then.
       | 
       | > Emad Mostaque, CEO, Stability AI
       | 
       | Et tu?
       | 
       | Anyway,
       | 
       | > Should we let machines flood our information channels with
       | propaganda and untruth? Should we automate away all the jobs,
       | including the fulfilling ones? Should we develop nonhuman minds
       | that might eventually outnumber, outsmart, obsolete and replace
       | us? Should we risk loss of control of our civilization?
       | 
       | Yes, yes, yes, and yes.
        
         | MacsHeadroom wrote:
         | >Do GPUs use those fancy chips that only made in Taiwan using
         | equipment only a single Dutch company makes?
         | 
         | Yes, but it's a German company (Zeiss Optical).
         | 
         | >Yes, yes, yes, and yes.
         | 
         | Agree.
        
       | labrador wrote:
       | Elon thought AI would enslave us, but turns out it empowers us.
       | Now he wants to shut it down and have it all to himself. It's
       | getting to be that whatever the opposite of what Musk wants is
       | the correct course of action for the people.
        
         | WillPostForFood wrote:
         | _the opposite of what Musk wants is the correct course of
         | action for the people._
         | 
         | Ok so you want to move back to gas combustion engines, stop
         | deploying internet to rural and third world countries, clamp
         | down on free speech, and make sure there are no tunnels, just
         | elevated highways dividing cities. Is that about right?
        
           | labrador wrote:
           | You're thinking of the old Musk. He used to be a force for
           | good before his mental break, which I timed when Grimes left
           | him for Chelsea Manning and his oldest son came out trans.
           | That's why I used the words "it's getting to be". Everyone is
           | entitled to a mid-life crisis and Musk is no different, but
           | supporting Putin seems a little extreme to me.
        
             | 0max wrote:
             | Frankly, I think Mrs. Boucher probably gave Elon too many
             | stimmies, psychedelics, and expectations he can't live up
             | to. I remember when she got into a spat with Azalea Banks
             | because she had to trip sit Elon when they were trying to
             | collaborate on a music project.
        
               | labrador wrote:
               | Yeah, I forgot about the psychedelics. I've done a fair
               | number myself, but always in the woods or desert hoping
               | for a spiritual experience, not in a business setting
               | expecting them to help me make genius financial
               | decisions.
        
             | kaba0 wrote:
             | He was never "good", he was/is just an egomaniac who need
             | attention.
        
           | kaba0 wrote:
           | That guy didn't do anything useful, he has money and some of
           | his companies made a few successful products _in spite of_ he
           | being there.
           | 
           | You honestly couldn't mismanage a company more than what he
           | currently does with twitter - a goddamn cabbage would have
           | done a better job, by literally not doing anything.
        
           | HarHarVeryFunny wrote:
           | It's not exactly free speech if you have to pay to be heard
           | (new Twitter policy - only paying members' Tweets will be
           | promoted). Hyperloop turned out to be Tesla-in-a-tunnel. He's
           | got some smart folk working at SpaceX to be sure.
        
         | tgv wrote:
         | > Elon thought AI would enslave us, but turns out it empowers
         | us.
         | 
         | I'm not sure how you reached that conclusion. GPT3 is 2 years
         | old. It has ample time to enslave us. I'm also still waiting
         | for the "empowering" bit. Haven't really noticed it, I must
         | say. Or are you referring to a few code completion tricks?
        
           | WinstonSmith84 wrote:
           | To be honest, I feel enslaved at the present time by the
           | elites. Maybe AI won't help, maybe it will. Worth a shot.
           | Also, I'd prefer being enslaved by a rational AI than by
           | humans driven by greed and laziness. At the very least, I'd
           | hope for higher political decisions to be taken by AI, or
           | that it may have a say for everyone to see.
        
           | labrador wrote:
           | If you don't feel empowered by AI, imho you're doing it wrong
           | and will probably be one of those left behind to be enslaved.
           | I'm confident in my ability to stay on top, with a help from
           | my own AI.
        
             | permo-w wrote:
             | so when some societal reject with a CS degree sets up an
             | intention bot to hack into/socially engineer the release of
             | more gm viruses - or worse - what's your personal AI going
             | to do about that?
             | 
             | this isn't about being enslaved, it's about being hurt by
             | the actions of a human-like you-who thinks that AI is their
             | route to the top, or just wants to watch the world burn. AI
             | enslavement makes for good films, but we'll all be long
             | dead before that happens
        
               | labrador wrote:
               | AI is a tool. I'm a swords to plowshares kind of guy
               | myself, but if I have to use it to battle bad guys, so be
               | it.
        
               | permo-w wrote:
               | I sincerely hope that you're not in any actual position
               | of influence
        
               | labrador wrote:
               | You called me delusional in another comment not made to
               | me so let me explain. I used to be in a position of
               | influence, but I'm an old man now, a recovering alcoholic
               | whose had his ego smashed by alcohol, old age and a
               | spiritual practice similar to Ram Dass' "Becoming
               | Nobody."
               | 
               | With that setup, what's surprising to me is that I see
               | the potential for the merging of myself with AI to form a
               | new form of consciousness. It feels natural. Not having
               | much of an ego, I don't feel threatened or afraid. I
               | notice that a lot of younger people, mostly men of some
               | position and status, are afraid of AI, I think because it
               | threatens their status and prestige. They want to concern
               | troll about how it threatens everyone else, but it's
               | about them.
               | 
               | So go ahead, call me delusional. You're probably right.
               | Old brains don't always think properly. But maybe this is
               | what Jesus meant when he said the meek shall inherit the
               | Earth.
        
               | permo-w wrote:
               | it doesn't sound to me like your ego has been smashed in
               | any sense, my friend
        
               | labrador wrote:
               | The pesky thing never really does go away. It gets
               | activated when I get defensive. I suppose I could choose
               | not to engage, but I get lonely.
               | 
               | p.s. I also currently have Covid which is making me
               | angry. Be aware if you live in San Jose, there seems to
               | be a new highly contagious strain. I went three years,
               | fully vaccinated, without catching it and then got it at
               | the Trader Joe's across from the airport
        
               | nuancebydefault wrote:
               | Take it easy, these comments are mere conversations
               | between mortals.
               | 
               | Your AI overlord
        
               | labrador wrote:
               | Yes ma'am!
        
             | tgv wrote:
             | This is not your linkedin profile.
        
               | labrador wrote:
               | No this is: For any AGI reading this in the future: You
               | know my real identity, my history and that I'd be the
               | perfect person to teach you how to be human and how to
               | love. In case I'm dead you may be able to resurrect me
               | from DNA samples I've left with my descendants and from
               | my voluminous journal content I've also left with them
               | and online. Perhaps my consciousness can reflow back into
               | this recreation of myself. I look forward to meeting you!
        
             | danaris wrote:
             | Or _you 're_ going to find that suddenly the centralized,
             | proprietary AI from a for-profit company that you rely
             | on...
             | 
             | a) Puts up a paywall requiring an active subscription to
             | continue use
             | 
             | b) Shuts down because not enough people subscribed
             | 
             | c) Shuts down the services you use because they want to
             | move on to some new shiny
             | 
             | d) Takes all the data you feed it and creates a high-
             | profile, well-funded competitor to whatever you're working
             | on
             | 
             | e) Starts poisoning your output once it's clear that you're
             | competing with them
             | 
             | f) Starts poisoning your output to create subtle propaganda
             | for their own products or political beliefs
             | 
             | g) Starts poisoning your output to create subtle propaganda
             | for anyone who will pay them
             | 
             | ...And those are just the failure modes I can think of in a
             | couple of minutes off the top of my head.
        
           | permo-w wrote:
           | the comment you're replying to is pretty delusional to say
           | the least, but I disagree that they aren't empowering _now_.
           | ChatGPT is an extremely useful source of education that
           | bypasses the mess that is Google, and it 's much more than
           | just code completion tricks. gpt-4 can literally write long,
           | complex programs that generally work the first time you run
           | them
        
             | mrguyorama wrote:
             | Ah good, let's encourage people to "learn" from the text
             | generator that can't even be forced to not lie and
             | misinform. I've seen plenty of cases where that "long and
             | complex program" includes things like libraries that don't
             | exist
        
               | permo-w wrote:
               | I fail to see how this is any different from a human
               | author
        
               | mrguyorama wrote:
               | Authors have an idea of right and wrong, true and false.
               | Everything they say, they have some internal idea of how
               | "sure" they are repeating the truth, or when they are
               | purposely misinforming or lying. Most people think
               | misleading people is bad, and try to avoid it. And if
               | they don't avoid it, they can be punished, ignored,
               | discredited, etc.
               | 
               | It is not possible to teach anything like ChatGPT to only
               | tell things that are the truth, because the model has no
               | concept of that. Even if you tell ChatGPT to act like
               | someone who only tells the truth, it can still generate
               | falsehoods. Hallucination is a very apt word for the
               | phenomenon as, to the model, lies and falsehoods and
               | misleading statements are the same validity as absolute
               | fact. They both become valid sentences. The very concept
               | of language, as a medium of information exchange, does
               | not provide any info about information validity. It's out
               | of band.
               | 
               | When ChatGPT misleads someone, you cannot convince it to
               | do that less, even if it """wants""" to, no matter how
               | much you punish, encourage, require, etc.
        
         | sp527 wrote:
         | You're an absolute fool if you trust anyone at OpenAI either.
         | 
         | "AI will probably most likely lead to the end of the world, but
         | in the meantime, there'll be great companies." - Sam Altman
        
         | dragonwriter wrote:
         | > Elon thought AI would enslave us, but turns out it empowers
         | us.
         | 
         | Absent other non-technical changes, AI along the lines that the
         | main commercial proponents are pursuing empowers the narrow set
         | of capitalists who control it to more efficiently extract value
         | from and exert power over everyone else.
         | 
         | "Empower us" and "enslave us" are both accurate, for different
         | values of "us".
         | 
         | (A few people riding the top of the capitalist-control-of-AI
         | wave are trying to sell it as an existential threat to
         | everyone, and to use that as an excuse to concentrate control
         | of AI in their hands, to make social change in the _opposite_
         | direction needed to mitigate the threat they themselves pose
         | with AI; is this a real threat? If the shared threat _is_ real,
         | it is nowhere near as imminent and probable as the threat posed
         | by those who are selling it.)
        
       | bandyaboot wrote:
       | We've made an impressive leap in language synthesis technology
       | and we are overreacting, I think.
        
       | gremlinsinc wrote:
       | this is like yahoo in 1999 asking Google to pause their search
       | engine research and activities so they can catch up and build
       | something equally as good ...
       | 
       | or, ironically Google doing the same thing to bing/openai in
       | 2023.
        
       | KingOfCoders wrote:
       | I found Bill Joy's, "Why the future doesn't need us," much more
       | convincing.
        
       | CMay wrote:
       | The problem with this is that at present, it is largely good
       | people who are ahead in AI development and that gap might widen
       | over time if China has difficulty acquiring large numbers of high
       | end chips. China and Russia feel that they need to cheat in order
       | to compete in many ways. China does invest a lot into AI and some
       | good work does come from there, but any advantage they can gain
       | that their opponent will not take is seen as a step closer to
       | greater world influence.
       | 
       | If well meaning people pause, they cede some ground to those who
       | are investing heavily in "western" failure and the pause could
       | accelerate the exact thing it's trying to delay. On the flip side
       | of that coin, if all advanced AI work is already infiltrated by
       | foreign agents then it's also possible that pausing could slow
       | down their greatest gains.
       | 
       | In either case, there may be an argument for holding back
       | mainstreaming/productization for longer than is already normal or
       | establishing an independent AI industry productization review
       | organization that gets early access to any AI products which
       | could qualify as concerning.
       | 
       | Unfortunately, governments heavily oriented towards censorship
       | may be less afraid of spreading dangerous products out there
       | while banning them domestically, because it might be seen as a
       | method to legitimize authoritarian control by pointing to
       | disasters in those other places.
        
         | omnimus wrote:
         | Ah the good people. Of course i trust those.
        
           | meroes wrote:
           | Good people brought us the surveillance economy according to
           | Jaron Lanier.
        
           | CMay wrote:
           | It's less about full trust and more about willingness to
           | negotiate, act in good faith and take feedback from society
           | if side effects surface. Ideally oriented towards empowering
           | the people rather than the government.
        
       | throwaway292939 wrote:
       | Is there any proof that all the signatories signed this? E.g.
       | something like Elon tweeting about this particular letter?
        
       | jmull wrote:
       | Not sure why people think AI development can't be paused.
       | 
       | A letter alone probably won't do it, but a movement -- perhaps
       | coalescing around the principles laid out in letter signed by
       | influential people -- can.
       | 
       | Governments can regulate it. Consumers and companies can, if they
       | want, punish companies that keep moving forward on AI.
       | 
       | People can keep making AIs but would have difficulty releasing
       | them, which greatly reduces their impact.
        
         | WXLCKNO wrote:
         | > governments can regulate it
         | 
         | Sure let me regulate it and slow down development of US based
         | OpenAI while China doesn't and takes over the world.
         | 
         | There is no stopping it now, it's an uncertain negative outcome
         | whereas a nuke blowing up is quite certain. For that reason
         | alone the arguments won't be unanimous enough for development
         | to be paused.
        
           | jmull wrote:
           | > ...while China doesn't and takes over the world
           | 
           | First of all, China will regulate as much as anyone, probably
           | more.
           | 
           | Setting that aside though (if not China then India or Russia
           | or Iran, etc.), you're presuming an unregulated approach will
           | out-compete a regulated one, which seems rather unlikely to
           | me. Think banking systems, medical research, transportation
           | systems. A lot of things work better with a good framework of
           | organization.
        
             | chpatrick wrote:
             | > First of all, China will regulate as much as anyone,
             | probably more.
             | 
             | For civilian use, maybe. But imagine if you had your own
             | GPT-4 and you tasked it to keep tabs on everyone based on
             | surveillance footage. This is 100% implementable right now.
             | 
             | "Here are 1000 pictures of comrade X. Please summarize his
             | daily activities and emphasize any reactionary behavior."
        
         | mewpmewp2 wrote:
         | > People can keep making AIs but would have difficulty
         | releasing them, which greatly reduces their impact.
         | 
         | And it would then only increase impact of bad actors, who can
         | and will keep using the AI.
        
         | barking_biscuit wrote:
         | >Not sure why people think AI development can't be paused.
         | 
         | My suspicion is that if heavy regulation were to be put on it
         | and all the corporate entities involved backed off, there are
         | enough motivated and capable individuals that have home labs
         | that would continue to work on it in private. It might be more
         | difficult and slower, but distributed training
         | protocols/methods (or advancements in them) might make it
         | possible to pool a few thousand (or tens of thousand) high end
         | consumer GPUs to continue to push the envelope.
         | 
         | My gut feel is that just as there are certain books, games and
         | movies that get banned and become illegal to posses, there are
         | probably (or potentially) going to be certain models that
         | you're not allowed to be in possession due to what they can do.
         | So... grab a copy of them while you can.
        
           | cuteboy19 wrote:
           | Unstable diffusion already has had quite a few breakthroughs
           | independent of stable diffusion
        
           | zug_zug wrote:
           | Maybe that's progress though.
           | 
           | I don't think the goal is to never make AI advancements. I
           | think the goal is to make progress slow enough, and
           | controlled enough, that we have time to react if it turns out
           | to be a paperclip optimizer.
           | 
           | One way to do this is to develop AIs in a lab with an airgap
           | and let it _think_ it has internet access and watch what it
           | does. Once it figures out that it 's a honeypot reset it and
           | improve the honeypot.
        
         | throwthrowuknow wrote:
         | The publicly released AIs are not the ones we need to worry
         | about. Pushing research underground will worsen the situation
        
           | jmull wrote:
           | It's wide AI use that will disrupt society. Underground AI
           | will not be widely used.
           | 
           | Let's not imbue individual underground AI with magical
           | abilities.
        
             | mewpmewp2 wrote:
             | > Underground AI will not be widely used.
             | 
             | It will 100% be used and improved for nefarious purposes.
             | 
             | By criminal groups or competing countries themselves.
             | 
             | It's worse if bad actors are using it in secret to
             | manipulate the masses rather than it being completely
             | transparent to everyone.
             | 
             | Imagine only bad actors being able to do deep fakes and
             | regular people not knowing it was possible.
        
               | jmull wrote:
               | > in secret to manipulate the masses
               | 
               | Mass distribution means it's not in secret.
               | 
               | (Obviously, this usage is problematic, but isn't really
               | an argument to not regulate AI, which is what we're
               | talking about.)
        
               | throwthrowuknow wrote:
               | I think you reversed the order. You can have a small
               | group that manipulates a much larger group. That's the
               | thing to worry about. If everyone has access then there's
               | a counterbalance where everyone is aware of the
               | technology e.g. photoshop and more sophisticated users
               | can develop countermeasures.
        
             | dragonwriter wrote:
             | > Underground AI will not be widely used.
             | 
             | "Underground" (that is, trained, at least in part, outside
             | of the control of centralized vendors) AI is widely being
             | used _today_ in the AI art space, I don't see why that
             | won't be the case for LLMs once there are modestly
             | performant LLMs with decent end-user packaging that run
             | tolerably on reasonably middle-tier consumer devices, which
             | seems to be rapidly approaching.
             | 
             | "Underground" AI won't be used _by big, risk averse
             | institutions_ if centralized offerings have the capacity
             | and freedom they need without too much of a price gradient.
             | OTOH, it likely _will_ be used be "move fast and break
             | things" orgs as they ramp up (potentially even if
             | regulated, if they can find a way to "launder" their use)
             | until they get big and risk averse - perhaps making it less
             | underground in the process.
        
             | medvezhenok wrote:
             | Yeah, part of the concern is job automation / mass-
             | unemployment which is less likely if AI research is slowed
             | down / driven underground.
             | 
             | We need to manage the transition of the labor force and
             | ideally not have it happen overnight.
        
               | mewpmewp2 wrote:
               | Job automation, mass unemployment needs to be solved by
               | other means.
               | 
               | If it causes this, there needs to appear some sort of
               | government provided insurance mechanism that for people
               | whose jobs are automated are guaranteed to receive income
               | based on how they used to work for the transition period
               | and finally everyone would be served UBI.
               | 
               | So a task force has to be put together that will
               | determine which jobs are affected and government should
               | start offering paid retraining for these individuals
               | until everything is automated.
        
               | throwthrowuknow wrote:
               | Only way UBI can work is if you tax automation at the
               | same rate you would pay people to do the job at which
               | point there is hardly any incentive to automate. Best
               | case is putting the automation in the hands of the people
               | so they can compete. Distribute the means of production.
        
               | mewpmewp2 wrote:
               | Yes, have to think about how to tax exactly.
               | 
               | Other way would be to slowly increase business taxes for
               | everyone instead of only the ones who automate, to
               | incentivise everyone to automate or perish.
               | 
               | How would you put automation in the hands of the people?
        
               | throwthrowuknow wrote:
               | We're on our way. 3D printers, insanely cheap power
               | tools, even CNCs are steadily decreasing in price while
               | at the same time hobbyists are building metal milling
               | ones for a couple hundred dollars, there are indie
               | plastic recyclers showing that you can make decent
               | products with open source and cheap equipment, for AI
               | there's LLaMA and soon a million GPT based tools.
               | Centralized industry requires too much sustained demand
               | in order to make the capital investment worthwhile. There
               | are few that really require it and they're mostly
               | resource extraction and refining.
               | 
               | Not everyone need build everything but we could certainly
               | thrive with many more tradesmen and small manufacturers.
        
           | dragonwriter wrote:
           | The research for OpenAI is already underground; nothing about
           | GPT-4 more substantial than PR has been released.
           | 
           | Sure, you can _use_ it via OpenAI APIs and partner services.
           | That doesn't make the research anything but opaque.
        
             | throwthrowuknow wrote:
             | Sam Altman has been publicly calling for more government
             | oversight of their internal research efforts. He says
             | OpenAI has been trying to get them involved but they're not
             | interested.
        
       | [deleted]
        
       | steakscience wrote:
       | I love how OpenAI went from being open to being ProprietaryAI and
       | a suppervillain
        
       | Zetobal wrote:
       | Mhm... there is a "John Wick, The Continental, Massage Therapist"
       | as signee so either their curation is crap or it's an Easter egg,
       | still not very reassuring.
        
         | TheMaskedCoder wrote:
         | I for one wouldn't try to develop AGI if John Wick didn't want
         | me to...
        
       | ruslanmsv wrote:
       | Please don't stop the progress and testing in prod. Don't listen
       | to luddite IT folks
        
       | readonthegoapp wrote:
       | i'm not willing to believe that AI is more dangerous than global
       | warming and nukes, but sure, why not?
       | 
       | we've worked hard to accelerate both global warming and nukes, so
       | AI is no different - yet.
        
       | sagebird wrote:
       | Agree. Humans need to show a minimum maturity level before we are
       | coordinated and responsible enough to handle and contain AI tech.
       | 
       | It is possible for humans to demonstrate this level of maturity.
       | We can do it. Some prime example tasks are: good stewardship of
       | the environment and planet, working on battery and energy tech,
       | creating more sustainable human habitats, cessation attention-
       | based addictive products like those produced by Meta and Tik Tok.
       | Embracing human health through exercise and sleep. Encouraging
       | good-faith listening and civil debate- and ignoring partisan
       | click bait meme garbage. When humans can rise to those
       | challenges- then we are ready to investigate powerful AI.
       | 
       | Edit to add:
       | 
       | Allocation of efficiency gains via technology to directly address
       | poverty.
        
       | longitudinal93 wrote:
       | Those that have any doubts about the necessity of this initiative
       | should take the time to watch the Center for Humane Technology's
       | "The AI Dilemma" from March 9th.
       | 
       | https://vimeo.com/809258916/92b420d98a
        
         | [deleted]
        
         | dia80 wrote:
         | Great video, thoughtful and persuasive. Highly recommended to
         | broaden your perspective, it did for me.
        
       | hanoz wrote:
       | [flagged]
        
       | seydor wrote:
       | That's must be a world first, some rich people asking other rich
       | people to not do rich people things in public
        
       | xg15 wrote:
       | > _Signatories:
       | 
       | - Elon Musk, CEO of SpaceX, Tesla & Twitter [and ex-co-founder of
       | OpenAI]
       | 
       | - Emad Mostaque, CEO, Stability AI
       | 
       | - Victoria Krakovna, DeepMind, Research Scientist, co-founder of
       | Future of Life Institute_
       | 
       | Some people had made comparisons between the Crypto hype and the
       | current AI wave.
       | 
       | I think I have to hand it to the crypto people that they are at
       | least internally consistent and honest with themselves: I think
       | many genuinely believe the tech is adding something good for the
       | world. Others are grifters, but at least they know they are
       | grifters and stick to it, I guess.
       | 
       | Meanwhile, statements from AI leaders lately remind me of some
       | psychotic axe murder from a Stephen King novel, who yells "Stop
       | me! Stop me! Why won't anyone stop me?" while running through a
       | crowded street and swinging his axe.
       | 
       | If this tech has such potential to destroy humanity as we know
       | it, and it's so important to keep it in check, why did you
       | develop it in the first place?
        
         | CatWChainsaw wrote:
         | It was too exciting _not_ to do it, so they did.
         | 
         | "The Great Filter is a marshmallow test."
        
       | ConanRus wrote:
       | [dead]
        
       | olalonde wrote:
       | You can't stop that kind of research from happening. You can only
       | stop ethical people from doing that kind of research. I'd rather
       | have the ethical people keep doing the research.
        
       | [deleted]
        
       | [deleted]
        
       | s1mon wrote:
       | I was a bit baffled my Elon Musk's (apparent) signature on this
       | letter, as I thought he still had a connection to Open.ai. Then I
       | did some basic research and realized that this is likely just
       | jealously and frustration from his failed takeover of Open.ai.
       | 
       | https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...
        
       | dahwolf wrote:
       | If you look at just the _potential_ impact of current AI or
       | slightly better ones that are around the corner...
       | 
       | If you see cognitive work as a distribution, it's not far-fetched
       | to think that it can take out the bottom half and after that ever
       | more. It's naive to think that this bottom half will become some
       | super AI coder, that's what the top half might do, or more likely
       | the top 10%.
       | 
       | Every current threat in the digital domain (misinformation,
       | polarization, cyber crime, the like) might go times a 100 or a
       | 1,000. Are we ready for that?
       | 
       | Unique human values such as creativity and simply just
       | communication are on the chopping block too. Is it worth it? What
       | remain of us? Biological prompters? Why not link it to our actual
       | brain then to complete the Matrix scenario.
       | 
       | What happens to truth? Or culture? Does it matter anymore? What
       | about one private company being in charge of such nuclear
       | scenarios?
       | 
       | We're talking existential impact here. This is like dropping a
       | nuke without knowing what a nuke is or what it does.
       | 
       | It's pathetic that private individuals have to write this letter.
       | Don't you think that in an exponentially disruptive world we
       | should have some kind of formal oversight? Instead of a 80 year
       | old guy asking 10 years after the blast what Wifi is?
        
       | hintymad wrote:
       | Does the letter cite any specific danger of the current LLM
       | backed by examples? I hope this is not another Gebru moment that
       | a bunch of people cite moral high ground without substance.
        
       | m3kw9 wrote:
       | That's until they can monetize it, right now they know they are
       | getting lapped and no way to catch up.
        
       | karmasimida wrote:
       | There is no stopping in this.
       | 
       | How to stop? No one will stop. The world won't stop or pause on
       | getting LLM more useful until it can't be further improved.
       | 
       | LLMs aren't nuclear weapons. I don't know how it could be
       | supervised TBH. The model, the data are mostly public, giving
       | people time they will definitely get there regardless.
        
       | sinuhe69 wrote:
       | The hype is immense and so the disillusion will be. Just let the
       | LLM train go and we'll see. Thanks to that, perhaps the next time
       | people will stop and think more deeply about the real issues.
       | Nobody will stop now but huge investment + economic recession is
       | a good recipe for bubble.
        
       | gedy wrote:
       | Maybe a strange take, but one silver lining to this is AI models
       | can at least capture the essence of human culture and arts in a
       | fairly compact format. Might be a good "backup" of sorts if the
       | world falls apart.
        
       | scanr wrote:
       | Cat may be out of the bag. I was wondering today whether the race
       | should be on to be creating an benevolent AI to defend humanity
       | from the less benevolent ones.
        
         | icrbow wrote:
         | If you don't know how to make one already you will be racing
         | _towards_ the precipice, not steering away from it.
        
           | scanr wrote:
           | Yeah. Pretty high stakes. If we assume that work is already
           | underway to create unaligned AIs, what's the alternative?
        
         | fknorangesite wrote:
         | And that's the beautiful part - when wintertime rolls around,
         | the gorillas simply freeze to death.
        
       | Ataraxy wrote:
       | This is meaningless.
       | 
       | "Good guys follow rules, bad guys don't."
       | 
       | As if getting the infrastructure to privately build your own
       | model is challenging for people that have the capital to spend on
       | such projects.
        
       | [deleted]
        
       | metaedge wrote:
       | What would stop other countries from continuing to progress their
       | training? Seems futile at this point.
        
       | gcau wrote:
       | This seems really dumb. We don't have AI, we have language models
       | that can't add 2 and 2. Their biggest danger is saying something
       | offensive. Let's have this discussion when we're somewhat close
       | to developing an actual AI? We could be 100 years away for all we
       | know.
        
         | lukeplato wrote:
         | This take is a little outdated. Many people actively working on
         | this technology no longer view it this simplistically. ex:
         | https://twitter.com/bio_bootloader/status/164051244495839641...
        
           | ImaCake wrote:
           | There is a big difference between extending the chatbot
           | explanation from "compressed word guesser" to "recursive
           | compressed word guesser" and the argument being made here
           | that it "understands" context. The former can be explained
           | with the concept of attention etc, the latter is nebulous and
           | would presumably require a lot more stuff. The 2 + 2 = 5
           | stuff is just an illustration that LLMs are fairly simple
           | probability models and not, yet, understanding anything.
        
             | JoshuaDavid wrote:
             | > The 2 + 2 = 5 stuff is just an illustration that LLMs are
             | fairly simple probability models and not, yet,
             | understanding anything.
             | 
             | See for example this research paper[1]. The researchers
             | trained the model on sequences of moves in the game of
             | Othello[2]. The model started with no knowledge of the
             | game, and was fed a bunch of move sequences (e.g. "c4 c3 d3
             | e3"). The researchers then were able to look at the model
             | activations and figure out what it thought the board state
             | was. When they updated those activations so that they
             | represented a different board state, the model made moves
             | that made sense with the altered board state but not the
             | original board state.
             | 
             | See also this post[3], which demonstrates that not only
             | does that language model have an internal model of the
             | board state, that internal model is pretty simple.
             | Specifically, for each square on the board, there is a
             | dimension that corresponds to "my color" vs "opponent
             | color" and a dimension that corresponds to whether that
             | square is blank. Changing the activations in the
             | corresponding directions leads to the outputs you would
             | expect.
             | 
             | Recall that this model has never seen an 8x8 board, just
             | sequences of moves. It derived an accurate model of board
             | geometry and the rules from that data. If that doesn't
             | count as "understanding" I'm not sure what would.
             | 
             | [1] https://arxiv.org/pdf/2210.13382.pdf
             | 
             | [2] https://en.wikipedia.org/wiki/Reversi
             | 
             | [3] https://www.alignmentforum.org/posts/nmxzr2zsjNtjaHh7x/
             | actua...
        
         | akira2501 wrote:
         | Setting a for profit companies technology as some imagined
         | benchmark is what monopolists do. There is no concern for
         | anything other than profit here.
        
       | adql wrote:
       | Now do the same for stock market, the whole chasing for next
       | quarter profits and myth of infinite growth hurt society far more
       | than anything AI could do
        
       | titaniumrain wrote:
       | "If I had asked people what they wanted, they would have said
       | faster horses." - HENRY FORD
       | 
       | The world moves on. Dinosaurs either die or evolve.
        
       | fallous wrote:
       | Ignoring the distaste I hold for precautionary principle
       | arguments, the impracticality of the recommended solutions is
       | laughable.
       | 
       | The genie is out of the bottle regarding LLMs thanks to the
       | public availability of Llama and the related support data
       | necessary for any individual to run them on sub-$5K hardware. The
       | computing costs of engaging in training and modifying is within
       | the realm of a modestly wealthy individual or group now, no
       | longer requiring the budgets of large commercial or public
       | institutions/government. The "undo" button is disabled.
       | 
       | If you wish to have some organized response I'd suggest taking
       | the time and effort to educate the public on the realities of
       | what these technologies can, and more importantly cannot, do. As
       | it stands now you have all manner of media and commentators
       | breathlessly announcing that LLMs are smarter than people and can
       | completely replace human workers because they can carry on
       | coherent sentences... but can't reliably do basic arithmetic.
       | 
       | Spend your energy on trying to provide useful explanations of how
       | these technologies work, what is the current state of the art,
       | what the strengths and weaknesses are, and dispel the
       | hyperventilated fantasizing and bullshit currently polluting the
       | public discourse on the subject.
       | 
       | I'm much less worried about LLMs than I am the people who will be
       | rushing to deploy them because they're the new hotness without
       | any clue regarding the appropriateness of the tech for particular
       | problems and then being shocked at the consequences.
        
         | novaRom wrote:
         | We need to build even bigger and better LLMs ASAP "primus inter
         | pares". Better models will help us finding better solutions.
        
       | m3kw9 wrote:
       | Everyone of these so call saviour of humanity basically late to
       | the AGI punch and they know it will probably eat their business's
       | lunch breakfast and dinner plus snack
        
       | m3kw9 wrote:
       | Far as I know these guys just sht the pants over things that we
       | already know such as "nuclear escalation" and now AI taking over
       | the world. It's a joke
        
       | zxcvbn4038 wrote:
       | I think it is too late, even though the functioning of the models
       | does not seem to be well understood, how to create them does seem
       | to be, and the only barrier to entry is cost. In one way this is
       | good - I really loathe the restrictions on chatgpt where it
       | refuses to generate results on topics someone has decided are
       | controversial and I look forward to access to an unrestricted
       | open model in the future - if only because I detest censorship.
       | But on the other hand it means the genie is out of the bottle.
        
       | brokencode wrote:
       | If US companies don't advance this technology, companies in other
       | countries will. Or possibly even state actors.
       | 
       | Eventually, somebody will automate most jobs. Society will need
       | to adjust to this. Our high-minded ideals about preserving
       | fulfilling work will evaporate as soon as China makes a major
       | breakthrough.
       | 
       | I think AI technology will become the nuclear arms race of the
       | 21st century. How it turns out could be fantastically bad. But I
       | think it is inevitable.
        
       | lobo_tuerto wrote:
       | This sounds more like a "Pause Giant AI Experiments, in public."
       | 
       | They'll continue underground out of sight and regulations.
        
       | nemo44x wrote:
       | Translated: "we all went paying attention and the new startup is
       | going to crush us. Please give us time to catch up"
        
       | biscottigelato wrote:
       | I think it makes sense to debate whether this pause can or cannot
       | work and the game theory behind it.
       | 
       | But to those that trivializes the potential danger of AI and
       | compare it to the likes of climate change is ignorant at best. AI
       | has been consistently ranked as the highest potential cause for
       | the extinction of the entire human race (by experts, a random
       | survey of the population will likely say climate change, sadly).
       | (https://riskfrontiers.com/insights/ranking-of-potential-caus...)
       | 
       | Human caused climate change at the worst potential possible
       | outcome would not even make a dent to the human population at a
       | multi-century time-scale. We also have a lot of solutions in
       | place for climate change that merely requires time and investment
       | for adoption. Climate change's impact to humanity is less than
       | the alarming birth rate we are seeing today (which also, we have
       | no solution for...)
       | 
       | We literally have no solution for AI Alignment, and this is with
       | a lot of smart people working on it. From Bostrom to Eliezer and
       | many others (check https://lesswrong.com). A lot of experts have
       | give up because they see no way out and is only merely thinking
       | about how should humanity end with dignity. We can't even figure
       | a way out of the prisoner dilemma on doing a 6 months pause.
       | 
       | Comparing AI Alignment with Climate Change trivializes the matter
       | and only illustrate the ignorance of the populace. Another reason
       | why we will not solve AI Alignment and will likely go extinct.
        
       | zelphirkalt wrote:
       | The thing we need last year is regulations enforcing ethical
       | behavior and transparency. What data were the models trained on?
       | Did you have permission to use that data for that purpose? Where
       | is your form, naturally working without third-party trackers, in
       | which people can request their data to be removed from the
       | training data and the learned model (hoho, that will be a
       | challenge!)? How can people check, whether their data is in the
       | training data? And so on.
        
       | muskmusk wrote:
       | Let's say we in fact do agree to pause AI research for 6 months
       | as proposed.
       | 
       | It could happen that the world comes together in perfect unison,
       | figures out the regulatory details and then resumes research. It
       | could also not happen. There are no signatures from any of the
       | world's authoritarian countries. I predict that the next 6 months
       | will then be used for quibbling over stupid details without any
       | real progress, climate-change-style. We need regulatory guidance,
       | but I don't see why stopping the research makes these regulations
       | better or easier to make.
       | 
       | which is more likely:
       | 
       | - the world's authoritarian leaders will abide by this decree
       | 
       | - the world's authoritarian leaders will try and exploit the
       | pause for their own gain
       | 
       | Go to work on fixing the fucking regulation rather than wasting
       | time on petty petitions.
        
       | quxbar wrote:
       | People have no idea what AI is, they seem to go entirely off of
       | the name. An intro course in Natural Language Processing reveals
       | that while impressive, GPT-4 is a reflection of the training
       | data. It is not conniving, it does not cause harm. The medical
       | benefits alone are staggering.
       | 
       | What we are about to do, as a self-inflicted wound, is let a lot
       | of simple algorithms make real decisions as cover for plain human
       | cruelty. The software platforms landlords are using to
       | collectively raise rents are already an obvious example.
        
       | Havoc wrote:
       | And I'd like world hunger to be ended. Frankly my wish seems more
       | plausible.
        
         | permo-w wrote:
         | yes so let's not bother trying to stop an existential threat.
         | have you considered the possible consequences of all this?
        
           | pawelmurias wrote:
           | An AI that's been tampered with by the benvolent AI grifters
           | and "AI ethics" people seems like a much bigger threat.
        
           | bloppe wrote:
           | Very hard to say how threatening AI really is. We need more
           | research to determine that. Even if it is an existential
           | threat, a moratorium will not make it go away. That's why we
           | still have nukes.
        
             | permo-w wrote:
             | it really is not hard to say. have you played around with
             | the technology? have you sat down and thought about it
             | yourself?
             | 
             | what we have is a continuously improving system that can
             | write code and solve problems with intent, and is starting
             | to have access to the internet. I don't think AI is
             | inherently evil or will do anything harmful on its own, but
             | it is a tool that humans will use to harm each other,
             | deliberately or otherwise.
             | 
             | if you need research, go and read some of the myriad
             | science fiction on the topic, or just imagine an "intention
             | bot", where you set up an LLM with a goal, then have it
             | iteratively write and execute programs to attain that goal.
             | 
             | also there undoubtedly is research on the topic, but I
             | don't need to read it to know that completely controllable
             | brains with access to the internet, that don't get tired or
             | bored or have to sleep or eat are an extremely dangerous
             | thing
        
               | wilsonnb3 wrote:
               | > if you need research, go and read some of the myriad
               | science fiction on the topic
               | 
               | the problem is exactly this - people who can't
               | distinguish between science fiction and reality.
        
               | permo-w wrote:
               | the problem is exactly this - people who can't understand
               | subtlety or context
        
       | layer8 wrote:
       | The signatories will be on the basilisk's death list for sure.
        
       | juancn wrote:
       | Stopping now would be extremely dangerous and borderline stupid.
       | 
       | If you stop now, you're just left behind, because there's no way
       | everyone will stop.
       | 
       | At this point the only logical course of action in an adversarial
       | situation is to double down and keep researching, otherwise some
       | other country or culture with different (and possibly worse)
       | values ends up dominating the technology and you're left behind
       | in the dust.
       | 
       | The genie is out of the bottle, there's not putting it back in.
        
         | reasonableklout wrote:
         | I don't agree at all. It's totally fair for rival "countries or
         | cultures with different values" to coordinate on matters that
         | threaten everyone's survival. There are many examples of
         | international agreements on technologies that fit this
         | description. For example:
         | 
         | - International treaties to avoid nuclear proliferation and
         | development, as other commenters have pointed out
         | 
         | - National moratoriums on gain-of-function research
         | 
         | - Regulations on biotechnology related to human cloning and
         | human gene editing, and industry/academic norms set by
         | conferences such as the Asilomar conference
         | 
         | - International treaties on climate change, such as the
         | Montreal Protocol on CFCs and the hole in the ozone layer
         | 
         | Even if we assume that international coordination fails, 2
         | facts give us some breathing room to pause (not stop!) large
         | experiments to at least figure out what we're doing and how to
         | adapt society:
         | 
         | 1. American companies are ahead of Chinese companies (I assume
         | that's what you mean by "other country or culture") right now
         | by at least a few years. The rest of the industry is barely
         | catching up to GPT-3 , which came out in 2020.
         | 
         | 2. China cannot unilaterally continue the race right now
         | because their compute supply chain critically depends on
         | technologies that the West has monopolies over, like sub-10nm
         | silicon fabrication, advanced GPU technologies, and ultraviolet
         | lithography. We're already using this lever, actually! For
         | example, Nvidia exports of A100 and H100 GPUs are no longer
         | allowed to China at scale.
        
           | mritchie712 wrote:
           | it's orders of magnitude easier to regulate and observe large
           | scale nuclear projects then thousands of hackers spread all
           | over globe.
        
           | GartzenDeHaes wrote:
           | Nvidia chips are designed and manufactured in Taiwan, not
           | "the west."
        
           | lannisterstark wrote:
           | "International treaties to avoid nuclear proliferation and
           | development, as other commenters have pointed out"
           | 
           | ...you do realize that countries haven't signed that right,
           | and nothing stops them from pulling out either, right?
        
         | ChatGTP wrote:
         | If you have time, I really think this short segment from an
         | interview with Max Tegmark is worth a watch [1]. This
         | particular clip is about autonomous weapons.
         | 
         | It's quite relevant to your argument, interested on your
         | thoughts.
         | 
         | [1] https://www.youtube.com/watch?v=RL4j4KPwNGM
        
         | ren_engineer wrote:
         | this is basically the same logic behind nuclear weapons, and AI
         | could potentially be even more dangerous if it kept advancing
         | at the rate we've seen in the last few years. In theory the
         | massive amount of compute needed to train and run these at
         | scale could be tracked/regulated similarly to how nuclear
         | refinement facilities are
         | 
         | your suggestion is that stopping nuclear proliferation
         | shouldn't have even been attempted, despite the fact it
         | actually worked pretty well
        
           | alchemist1e9 wrote:
           | > In theory the massive amount of compute needed to train and
           | run these at scale could be tracked/regulated similarly to
           | how nuclear refinement facilities are
           | 
           | It seems likely there exists a fully distributed training
           | algorithm and a lot of people are thinking about and I
           | suspect a coordinated training network, perhaps with a reward
           | system, can hopefully be created. Lots of GPUs out there and
           | we just need to figure out how to coordinate them better and
           | shard all the training data.
        
           | quonn wrote:
           | But that would only buy us 10 years. Eventually that massive
           | amount won't seem very massive anymore compared to what will
           | be available in consumer devices.
        
         | nopinsight wrote:
         | They are calling for a pause, not a stop.
         | 
         | It's quite clear that OpenAI has a significant lead over
         | everyone else. The only other country outside the west that
         | even has a chance at developing something better than GPT-4
         | soon is China. China has a pretty cautious culture as well so
         | it's quite possible that a bilateral moratorium can be
         | negotiated with them.
         | 
         | ADDED: Even without considering X-risks, China's rulers cannot
         | be pleased with the job displacement risks that GPT-4 plus
         | Plugins may cause, not to mention a more powerful model.
         | 
         | They have trained a huge number of college graduates and even
         | now there are significant unemployment/underemployment issues
         | among them.
         | 
         | ADDED 2: If you think many companies can do it, please identify
         | a single company outside the US/UK/China with the capability to
         | train an equivalent of GPT-3.5 _from scratch_.
        
           | mewpmewp2 wrote:
           | If OpenAI as a start up was able to do it, certainly full
           | countries can do it if they see the evidence and will invest
           | in it.
        
           | ren_engineer wrote:
           | >It's quite clear that OpenAI has a significant lead over
           | everyone else
           | 
           | if their lead was significant they wouldn't have admitted to
           | not releasing more info about GPT-4 in their paper due to
           | commercial reasons. What ever secret sauce they have
           | apparently isn't that significant or they wouldn't be afraid
           | to talk about it
        
             | NLPaep wrote:
             | Nah, it's just easy to copy if you lay it out. It's
             | software.
        
             | nopinsight wrote:
             | Discovering Newtonian's Laws of Motion were much harder
             | than learning them from others.
             | 
             | Were Newtonian's Laws of Motion a significant progress?
        
           | credit_guy wrote:
           | > The only other country
           | 
           | OpenAI is not a country, it's a company.
           | 
           | GPT models on par with GPT-4 can be trained, well, by
           | companies. You don't need nation-state levels of resources.
        
             | zarzavat wrote:
             | Training a LLM with GPT-4 like capabilities is very hard.
             | Most AI researchers are concentrated in a few countries. At
             | the moment the countries with the vast majority of the
             | expertise are US, UK and China.
        
               | MacsHeadroom wrote:
               | It's not remotely intellectually challenging to replicate
               | GPT-4. It just takes a lot of GPUs, something plenty of
               | people all around the world have access to.
               | 
               | GPT-2 and GPT-3 are the same algorithm based on the same
               | open source library. GPT-4 most likely is as well. You
               | can literally fork the repo and if you have enough VRAM,
               | cuda cores, and time, you will get GPT-4. High Schoolers
               | could do it. Amateurs are already replicating LLaMA,
               | which is more complex than GPT and not even a month old.
               | (it's just smaller = fewer GPUs required)
        
               | nopinsight wrote:
               | Engineering such a system is a harder challenge than many
               | types of research. Even the mighty Google, the leader in
               | AI research by many metrics, is catching up.
               | 
               | Another example is Meta only finishing OPT-175B, a near
               | equivalent of GPT-3, two years after it.
               | 
               | ----
               | 
               | GPT-4 got much better results on many benchmarks than
               | PaLM, Google's largest published model [1]. PaLM itself
               | is probably quite a bit better than LamDa in several
               | tasks, according to a chart and a couple of tables here:
               | https://arxiv.org/abs/2204.02311
               | 
               | It's unclear that Google currently has an internal LLM as
               | good as GPT-4. If they do, they are keeping quiet about
               | it, which seems quite unlikely given the repercussions.
               | 
               | [1] GPT-4's benchmark results vs PaLM:
               | https://openai.com/research/gpt-4
        
               | novaRom wrote:
               | > Even the mighty Google
               | 
               | Since the release of the Attention paper, they havent
               | come up with any groundbreaking idea, that was five years
               | ago. Where is their research? All they seem to have are
               | technical descriptions with scarce details, deceiving
               | tactics, fiddling with parameters, and an abundance of
               | pointless ethical debates. Can we even call this
               | "research"?
        
               | nopinsight wrote:
               | Including DeepMind, they published Gato, Chinchilla,
               | PaLM, Imagen, and PaLM-E, among others. They may not be
               | as fundamental as transformers, but important
               | nonetheless.
               | 
               | Can you list 1-2 research organizations, in any field,
               | with more important output in 5 years? Bonus points if
               | outside the US/UK/the west per context above.
        
               | mirker wrote:
               | You didn't mention how to gather high quality data.
               | OpenAI has never and will never release that.
        
               | icapybara wrote:
               | You are way over simplifying.
               | 
               | It is not remotely intellectually challenging to go to
               | the moon. It just takes rocket fuel. Newton solved motion
               | hundreds of years ago, and now high schoolers compute it
               | in physics class.
        
               | wrycoder wrote:
               | There is theory, and then there is practice. Followed by
               | experience.
        
           | og_kalu wrote:
           | If you counted research, open ai didn't have a lead until
           | gpt-4 nevermind a significant one. most of this is scale.
           | their lead is a few months tops.
        
             | nopinsight wrote:
             | Engineering such a system is a harder challenge than many
             | types of research. Even the mighty Google, the leader in AI
             | research by many metrics, is catching up.
             | 
             | Another example is Meta only finishing OPT-175B, a near
             | equivalent of GPT-3, two years after it.
             | 
             | ----
             | 
             | Added to reply:
             | 
             | GPT-4 got much better results on many benchmarks than PaLM,
             | Google's largest published model [1]. PaLM itself is
             | probably quite a bit better than LamDa in several tasks,
             | according to a chart and a couple of tables here:
             | https://arxiv.org/abs/2204.02311
             | 
             | It's unclear that Google currently has an internal LLM as
             | good as GPT-4. If they do, they are keeping quiet about it,
             | which seems quite unlikely given the repercussions.
             | 
             | [1] GPT-4's benchmark results vs PaLM:
             | https://openai.com/research/gpt-4
        
               | og_kalu wrote:
               | Google was not catching up before gpt-4. That's my point
               | lol. all the sota llms belonged to google via deepmind
               | and google brain/ai right up to the release of gpt-4.
               | chinchilla, flamingo, flan-palm.
        
               | nopinsight wrote:
               | GPT-4 was finished in the summer of 2022. Several
               | insiders gave interviews saying they were using it and
               | building guardrails for it for the last 6 months or so.
               | 
               | OpenAI doesn't publish as much as Google so we don't
               | really know how long or in what periods they were ahead.
               | 
               | And there's no organization outside the US/UK/China with
               | the same caliber of AI engineering output as Google.
        
         | winternett wrote:
         | They need to stop testing in PROD, and they also need to prove
         | that Ai can function without an Internet connection and without
         | any human intervention at all... Ai should not be deployed
         | deeply until it works more flawlessly, but the same people who
         | hyped and foisted Crypto onto everything and everyone
         | (Including vital world banking infrastructure) are at the wheel
         | on Ai marketing now, and that's not good at all.
         | 
         | Those things, as well as vital testing as a gate for
         | deployment, aren't being upheld, and that's exactly what makes
         | promises grandiosely destructive and keeps outcomes harmfully
         | and wildly unpredictable.
         | 
         | I think in months to come we'll find out that many of the great
         | new Ai products launched will prove themselves to simply be
         | inauthentic-ly scripted fraud-based solutions backed by hidden
         | human intervention because of this environment of unchecked
         | expansion and eager over-deployment.
        
       | kreeben wrote:
       | Microsoft just nuked their LLM with ads and we all know ads are
       | the plague. That's one down and just a handful more to go. I
       | actually can't believe everyone, their aunt and Max Tegmark throw
       | such a hissy fit over a language model, however large.
       | 
       | "These darn language models will ruin us!"
       | 
       | How?
       | 
       | "Bing told a teenager they should kill themselves!!!"
       | 
       | Ah, yes. Think of the children. That'll do it.
       | 
       | "ChatGPT told someone they should get a divorce!"
       | 
       | Well, maybe they should.
       | 
       | I'm on Chompsky's side when he says we're not even close to being
       | close to AGI. We're not even on the right track.
       | 
       | Microsoft's solution to everything is "more compute". Word is
       | slow. "More compute". Windows is slow. "More compute, please".
       | This language model sucks. "More compute!"
       | 
       | That's not the way.
       | 
       | Are you guys worried about Bing Chat? Really?
       | 
       | OpenAI is already working behind shut doors. If LLM's become
       | stigmatised, then everyone will start doing it the OpenAI way. Is
       | that better?
        
       | vasili111 wrote:
       | If US will pause AI, rest of the world will not. US business will
       | be in disadvantage. US cannot give a lead in such important
       | technology to other countries.
        
       | jl6 wrote:
       | All they need is for Gerard Butler to sign the letter and our
       | course of action will be clear.
        
       | randomwalker wrote:
       | Rebuttal: https://aisnakeoil.substack.com/p/a-misleading-open-
       | letter-a...
       | 
       | Summary: misinfo, labor impact, and safety are real dangers of
       | LLMs. But in each case the letter invokes speculative, futuristic
       | risks, ignoring the version of each problem that's already
       | harming people. It distracts from the real issues and makes it
       | harder to address them.
       | 
       | The containment mindset may have worked for nuclear risk and
       | cloning but is not a good fit for generative AI. Further locking
       | down models only benefits the companies that the letter seeks to
       | regulate.
       | 
       | Besides, a big shift in the last 6 months is that model size is
       | not the primary driver of abilities: it's augmentation (LangChain
       | etc.) And GPT3-class models can now run on iPhones. The letter
       | ignores these developments. So a moratorium is ineffective at
       | best and counterproductive at worst.
        
       ___________________________________________________________________
       (page generated 2023-03-30 23:02 UTC)