[HN Gopher] Safe Superintelligence Inc.
       ___________________________________________________________________
        
       Safe Superintelligence Inc.
        
       Author : nick_pou
       Score  : 779 points
       Date   : 2024-06-19 17:06 UTC (5 hours ago)
        
 (HTM) web link (ssi.inc)
 (TXT) w3m dump (ssi.inc)
        
       | fallat wrote:
       | This is how you web
        
       | blixt wrote:
       | Kind of sounds like OpenAI when it started, so will history
       | repeat itself? Nonetheless, excited to see what comes out of it.
        
         | lopuhin wrote:
          | Not quite the same. OpenAI was initially quite open, while
          | Ilya is currently very explicitly against opening up or
          | open-sourcing research, e.g. see
         | https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-lau...
        
           | insane_dreamer wrote:
           | It wasn't that OpenAI was open as in "open source" but rather
           | that its stated mission was to research AI such that all
           | could benefit from it (open), as well as to ensure that it
           | could not be controlled by any one player, rather than to
           | develop commercial products to sell and make a return on
           | (closed).
        
       | dontreact wrote:
        | How are they gonna pay for their compute costs to get to the
       | frontier? Seems hard to attract enough investment while almost
       | explicitly promising no return.
        
         | neuralnetes-COO wrote:
         | 6-figure free compute credits from every major cloud provider
         | to start
        
           | CaveTech wrote:
           | 5 minutes of training time should go far
        
           | bps4484 wrote:
           | 6 figures would pay for a week for what he needs. Maybe less
           | than a week
        
             | neuralnetes-COO wrote:
              | I don't believe ssi.inc's main objective is training
             | expensive models, but rather to create SSI.
        
         | jhickok wrote:
         | Wonder if funding could come from profitable AI companies like
          | Nvidia, MS, Apple, etc., sort of like the Apache or Linux
          | foundations.
        
           | visarga wrote:
           | I was actually expecting Apple to get their hands on Ilya.
            | They also have the privacy theme in their branding, and Ilya
            | might help that image while also giving them the chops to
            | catch up to OpenAI.
        
         | imbusy111 wrote:
          | What if there are ways to improve intelligence other than
          | throwing more money at running gradient descent?
        
       | sidcool wrote:
       | Good to see this. I hope they have enough time to create
        | something before the big 3 reach AGI.
        
       | sreekotay wrote:
       | Can't wait for OpenSSL and LibreSSL...
        
       | MeteorMarc wrote:
       | Does this mean they will not instantiate a super AI unless it is
       | mathematically proven that it is safe?
        
         | visarga wrote:
         | But any model, no matter how safe it was in training, can still
         | be prompt hacked, or fed dangerous information to complete
         | nefarious tasks. There is no safe model by design. Not to
         | mention that open weights models can be "uncensored" with ease.
        
       | modeless wrote:
       | This makes sense. Ilya can probably raise practically unlimited
       | money on his name alone at this point.
       | 
       | I'm not sure I agree with the "no product until we succeed"
       | direction. I think real world feedback from deployed products is
       | going to be important in developing superintelligence. I doubt
       | that it will drop out of the blue from an ivory tower. But I
       | could be wrong. I definitely agree that superintelligence is
       | within reach and now is the time to work on it. The more the
       | merrier!
        
         | visarga wrote:
         | I have a strong intuition that chat logs are actually the most
         | useful kind of data. They contain many LLM outputs followed by
         | implicit or explicit feedback, from humans, from the real
         | world, and from code execution. Scaling this feedback to 180M
         | users and 1 trillion interactive tokens per month like OpenAI
         | is a big deal.
        
           | modeless wrote:
           | Yeah, similar to how Google's clickstream data makes their
           | lead in search self-reinforcing. But chat data isn't the only
           | kind of data. Multimodal will be next. And after that,
           | robotics.
        
           | slashdave wrote:
           | Except LLMs are a distraction from AGI
        
             | sfink wrote:
             | That doesn't necessarily imply that chat logs are not
             | valuable for creating AGI.
             | 
             | You can think of LLMs as devices to trigger humans to
             | process input with their meat brains and produce machine-
             | readable output. The fact that the input was LLM-generated
             | isn't necessarily a problem; clearly it is effective for
             | the purpose of prodding humans to respond. You're training
             | on the human outputs, not the LLM inputs. (Well, more
             | likely on the edge from LLM input to human output, but
             | close enough.)
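              | 
              | A minimal sketch of that idea (hypothetical log format, not
              | any lab's actual pipeline), pairing each LLM output with
              | the human reply that follows it:
              | 
              |     # chat_log: a list of {"role": ..., "text": ...} turns
              |     def mine_feedback_pairs(chat_log):
              |         pairs = []
              |         for prev, nxt in zip(chat_log, chat_log[1:]):
              |             if prev["role"] == "assistant" and \
              |                nxt["role"] == "user":
              |                 # the edge: LLM output -> human response
              |                 pairs.append((prev["text"], nxt["text"]))
              |         return pairs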
        
         | pillefitz wrote:
         | Who would pay for safety, though?
        
       | JohnKemeny wrote:
       | Dupe https://news.ycombinator.com/item?id=40730132
        
       | udev4096 wrote:
       | Love the plain html
        
       | faraaz98 wrote:
       | Daniel Levy
       | 
       | Like the Tottenham Hotspurs owner??
        
         | WmWsjA6B29B4nfk wrote:
         | cs.stanford.edu/~danilevy
        
           | mi_lk wrote:
           | Thanks, was wondering the same thing about Hotspur guy lol
        
         | ignoramous wrote:
         | > _Like the Tottenham Hotspurs owner??_
         | 
         | If AGI doesn't coach them to trophies, nothing ever will.
         | 
         | https://deepmind.google/discover/blog/tacticai-ai-assistant-...
         | / https://archive.is/wgJWu
        
       | RcouF1uZ4gsC wrote:
       | The site is a good example of Poe's Law.
       | 
       | If I didn't know that it was real, I would have thought it was
       | parody.
       | 
       | > We are assembling a lean, cracked team of the world's best
       | engineers and researchers dedicated to focusing on SSI and
       | nothing else.
       | 
       | Do you have to have a broken bone to join?
       | 
       | Apparently, grammatical nuances are not an area of focus for
       | safety, unless they think that a broken team ("cracked") is an
       | asset in this area.
        
         | selimthegrim wrote:
          | This is probably his former Soviet Union English showing up
         | where he meant to say crack, unless he thinks people being
         | insane is an asset
        
         | lowkey_ wrote:
         | Cracked, especially in saying "cracked engineers", refers to
          | really good engineers these days. It's "cracked" as in broken
          | in a good way: so over-powered that it's unfair.
        
         | Glant wrote:
         | Maybe they're using the video gaming version of cracked, which
         | means you're really good.
        
         | ctxc wrote:
         | Too snarky...anyway, "crack" means "exceptional" in some
         | contexts. I've seen footballers using it a lot over the years
         | (Neymar, Messi etc) fwiw.
         | 
          | Just realized - we even say "it's not all it's cracked up to be"
         | as a negative statement which would imply "cracked up" is
         | positive.
        
         | novia wrote:
         | To me this is a good indication that the announcement was
         | written by a human and not an LLM
        
       | frenchie4111 wrote:
       | I am not on the bleeding edge of this stuff. I wonder though: How
        | could a safe superintelligence outcompete an unrestricted one?
       | Assuming another company exists (maybe OpenAI) that is tackling
       | the same goal without spending the cycles on safety, what chance
       | do they have to compete?
        
         | Retr0id wrote:
         | the first step of safe superintelligence is to abolish
         | capitalism
        
           | next_xibalba wrote:
           | That's the first step towards returning to candlelight. So it
           | isn't a step toward safe super intelligence, but it is a step
           | away from any super intelligence. So I guess some people
           | would consider that a win.
        
             | yk wrote:
             | Not sure if you want to share the capitalist system with an
             | entity that outcompetes you by definition. Chimps don't
             | seem to do too well under capitalism.
        
               | next_xibalba wrote:
               | You might be right, but that wasn't my point. Capitalism
               | might yield a friendly AGI or an unfriendly AGI or some
               | mix of both. Collectivism will yield no AGI.
        
           | ganyu wrote:
           | One can already see the beginning of AI enslaving humanity
            | through the establishment. Companies that work on AI get more
            | investment and those that don't get kicked out of the game.
            | Those who employ AI get more investment and those who pay
            | humans lose the market's confidence. People lose jobs and
            | birth rates fall harshly while AI thrives. Tragic.
        
             | nemo44x wrote:
             | So far it is only people telling AI what to do. When we
              | reach the day when it is commonplace for AI to tell
             | people what to do then we are possibly in trouble.
        
           | speed_spread wrote:
           | And then seize the means of production.
        
           | cscurmudgeon wrote:
           | Why does everything have to do with capitalism nowadays?
           | 
           | Racism, unsafe roads, hunger, bad weather, good weather,
           | stubbing toes on furniture, etc.
           | 
           | Don't believe me?
           | 
           | See https://hn.algolia.com/?dateRange=all&page=0&prefix=false
           | &qu...
           | 
           | Are there any non-capitalist utopias out there without any
           | problems like this?
        
             | Retr0id wrote:
             | This is literally a discussion on allocation of capital,
             | it's not a reach to say that capitalism might be involved.
        
               | cscurmudgeon wrote:
               | Right, so you draw a line from that to abolishing
               | capitalism.
               | 
               | Is that the only solution here? We need to destroy
               | billions of lives so that we can potentially prevent
               | "unsafe" super intelligence?
               | 
               | Let me guess, your cure for cancer involves abolishing
               | humanity?
               | 
               | Should we abolish governments when some random government
               | goes bad?
        
               | Retr0id wrote:
               | "Abolish" is hyperbole.
               | 
               | Insufficiently regulated capitalism fails to account for
               | negative externalities. Much like a Paperclip Maximising
               | AI.
               | 
               | One could even go as far as saying AGI alignment and
               | economic resource allocation are isomorphic problems.
        
             | jdthedisciple wrote:
              | To be honest, these search results being months apart show
             | quite the opposite of what you're saying...
             | 
             | Even though I agree with your general point.
        
             | Nasrudith wrote:
             | It is a trendy but dumbass tautology used by intellectually
             | lazy people who think they are smart. Society is based upon
             | capitalism therefore everything bad is the fault of
             | capitalism.
        
         | llamaimperative wrote:
         | It can't. Unfortunately.
         | 
          | People spend so much time thinking about the systems (the
          | models) themselves, and not enough about the system _that
          | builds_ the systems. The behaviors of the models will be driven
          | by the competitive dynamics of the economy around them, and
          | yeah, that's a big, big problem.
        
         | weego wrote:
         | Honestly, what does it matter. We're many lifetimes away from
         | anything. These people are trying to define concepts that don't
         | apply to us or what we're currently capable of.
         | 
         | AI safety / AGI anything is just a form of tech philosophy at
         | this point and this is all academic grift just with mainstream
         | attention and backing.
        
           | criddell wrote:
           | Ilya the grifter? That's a take I didn't expect to see here.
        
           | mhardcastle wrote:
           | This goes massively against the consensus of experts in this
           | field. The modal AI researcher believes that "high-level
           | machine intelligence", roughly AGI, will be achieved by 2047,
           | per the survey below. Given the rapid pace of development in
           | this field, it's likely that timelines would be shorter if
           | this were asked today.
           | 
           | https://www.vox.com/future-perfect/2024/1/10/24032987/ai-
           | imp...
        
             | Retr0id wrote:
             | Reminds me of what they've always been saying about nuclear
             | fusion.
        
             | ein0p wrote:
             | I am in the field. The consensus is made up by a few
             | loudmouths. No serious front line researcher I know
             | believes we're anywhere near AGI, or will be in the
             | foreseeable future.
        
               | comp_throw7 wrote:
               | So the researchers at Deepmind, OpenAI, Anthropic, etc,
               | are not "serious front line researchers"? Seems like a
               | claim that is trivially falsified by just looking at what
               | the staff at leading orgs believe.
        
               | ein0p wrote:
               | Apparently not. Or maybe they are heavily incentivized by
               | the hype cycle. I'll repeat one more time: none of the
               | currently known approaches are going to get us to AGI.
               | Some may end up being useful for it, but large chunks of
               | what we think is needed (cognition, world model, ability
               | to learn concepts from massive amounts of multimodal,
               | primarily visual, and almost entirely unlabeled, input)
               | is currently either nascent or missing entirely. Yann
               | LeCun wrote a paper about this a couple of years ago, you
               | should read it:
               | https://openreview.net/pdf?id=BZ5a1r-kVsf. The state of
               | the art has not changed since then.
        
             | MacsHeadroom wrote:
             | 51% odds of the ARC AGI Grand Prize being claimed by the
             | end of next year, on Manifold Markets.
             | 
             | https://manifold.markets/JacobPfau/will-the-arcagi-grand-
             | pri...
        
             | enragedcacti wrote:
              | I don't understand how you got 2047. For the 2022 survey:
              | 
              | - "How many years until you expect a 90% probability of
              |   HLMI existing?" mode: 100 years; median: 64 years
              | 
              | - "How likely is it that HLMI exists in 40 years?"
              |   mode: 50%; median: 45%
              | 
              | And from the summary of results: "The aggregate forecast
              | time to a 50% chance of HLMI was 37 years, i.e. 2059"
        
           | ToValueFunfetti wrote:
           | Many lifetimes? As in upwards of 200 years? That's wildly
            | pessimistic if so. Imagine predicting today's computer
           | capabilities even one lifetime ago
        
           | usrnm wrote:
           | > We're many lifetimes away from anything
           | 
           | ENIAC was built in 1945, that's roughly a lifetime ago. Just
           | think about it
        
         | lmaothough12345 wrote:
         | Not with that attitude
        
         | rafaelero wrote:
         | It's probably not possible, which makes all these initiatives
         | painfully naive.
        
           | cynusx wrote:
           | I wonder if that would have a proof like the halting problem
        
           | cwillu wrote:
           | It'd be naive if it wasn't literally a standard point that is
           | addressed and acknowledged as being a major part of the
           | problem.
           | 
           | There's a reason OpenAI's charter had this clause:
           | 
           | "We are concerned about late-stage AGI development becoming a
           | competitive race without time for adequate safety
           | precautions. Therefore, if a value-aligned, safety-conscious
           | project comes close to building AGI before we do, we commit
           | to stop competing with and start assisting this project. We
           | will work out specifics in case-by-case agreements, but a
           | typical triggering condition might be "a better-than-even
           | chance of success in the next two years.""
        
             | kjkjadksj wrote:
             | How does that address the issue? I would have expected them
              | to do that anyhow. That's what a lot of businesses do: let
              | another company take the hit developing the market, R&D,
              | and supply chain, then come in with industry
              | standardization and cooperative agreements only after the
              | money is proven to be good in the space. See electric
              | cars. Also, they could drop that at any time. Remember when
              | OpenAI stood for open source?
        
               | cwillu wrote:
               | Really, you think Ford is dropping their electric car
               | manufacturing in order to assist Tesla in building more
               | gigafactories?
               | 
               | > Remember when openAI stood for opensource?
               | 
               | I surely don't, but maybe I missed it, can you show me?
               | 
               | https://web.archive.org/web/20151211215507/https://openai
               | .co...
               | 
               | https://web.archive.org/web/20151213200759/https://openai
               | .co...
               | 
               | Neither mention anything about open-source, although a
               | later update mentions publishing work ("whether as
               | papers, blog posts, or code"), which isn't exactly a
               | ringing endorsement of "everything will be open-source"
               | as a fundamental principle of the organization.
        
         | slashdave wrote:
         | Since no one knows how to build an AGI, hard to say. But you
         | might imagine that more restricted goals could end up being
         | easier to accomplish. A "safe" AGI is more focused on doing
         | something useful than figuring out how to take over the world
         | and murder all the humans.
        
           | cynusx wrote:
           | Hinton's point does make sense though.
           | 
           | Even if you focus an AGI on producing more cars for example,
           | it will quickly realize that if it has more power and
           | resources it can make more cars.
        
             | kjkjadksj wrote:
             | Assuming AGI works like a braindead consulting firm, maybe.
              | But if it worked like existing statistical tooling (which
              | it does, today, because for an actual data scientist, and
              | not Aunt Cathy prompting Bing, using ML is no different
              | from using any other statistics when you are writing your
              | Python or R scripts), you could probably generate some
              | fancy charts that show distributions of cars produced
              | under different scenarios with fixed resource or power
              | limits.
              | 
              | In a sense this is what is already done, and why AI hasn't
              | really made the inroads people think it will, even if you
              | can ask Google questions now. For the data scientists, the
              | black magicians of the AI age, this spell is no more
              | powerful than other spells, many of which (including ML)
              | were created by powerful magicians in the early 1900s.
        
         | mark_l_watson wrote:
         | That is a very good question. In a well functioning democracy a
         | government should apply a thin layer of fair rules that are
         | uniformly enforced. I am an old man, but when I was younger, I
         | recall that we sort of had this in the USA.
         | 
         | I don't think that corporations left on their own will make
         | safe AGI, and I am skeptical that we will have fair and
          | technologically sound legislation - look at some of the anti-
          | cryptography and anti-privacy laws rearing their ugly heads in
         | Europe as an example of government ineptitude and corruption. I
         | have been paid to work in the field of AI since 1982, and all
         | of my optimism is for AI systems that function in partnership
         | with people and I expect continued rapid development of agents
         | based on LLMs, RL, etc. I think that AGIs as seen in the
         | Terminator movies are far into the future, perhaps 25 years?
        
         | hackerlight wrote:
         | This is not a trivial point. Selective pressures will push AI
         | towards unsafe directions due to arms race dynamics between
         | companies and between nations. The only way, other than global
         | regulation, would be to be so far ahead that you can afford to
         | be safe without threatening your own existence.
        
         | cynusx wrote:
         | Not on its own but in numbers it could.
         | 
         | Similar to how law-abiding citizens turn on law-breaking
         | citizens today or more old-fashioned, how religious societies
         | turn on heretics.
         | 
         | I do think the notion that humanity will be able to manage
         | superintelligence just through engineering and conditioning
         | alone is naive.
         | 
         | If anything there will be a rogue (or incompetent) human who
         | launches an unconditioned superintelligence into the world in
         | no time and it only has to happen once.
         | 
         | It's basically Pandora's box.
        
         | alecco wrote:
         | The problem is the training data. If you take care of alignment
          | at that level, the performance is as good as an unrestricted
          | model's, except for the things you removed, like making
          | explosives or ways to commit suicide.
          | 
          | But that costs almost as much as training on the data: hundreds
          | of millions. And I'm sure this will be the new "secret sauce"
          | at Microsoft/Meta/etc. And sadly nobody is sharing their
          | synthetic data.
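          | 
          | A rough sketch of the filtering step (hypothetical
          | unsafe_score classifier; scoring every document in a
          | pretraining corpus is presumably part of why it costs so
          | much):
          | 
          |     def filter_corpus(documents, unsafe_score, threshold=0.5):
          |         # keep only documents the safety classifier clears
          |         return [doc for doc in documents
          |                 if unsafe_score(doc) < threshold]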
        
         | cwillu wrote:
         | There's a reason OpenAI had this as part of its charter:
         | 
         | "We are concerned about late-stage AGI development becoming a
         | competitive race without time for adequate safety precautions.
         | Therefore, if a value-aligned, safety-conscious project comes
         | close to building AGI before we do, we commit to stop competing
         | with and start assisting this project. We will work out
         | specifics in case-by-case agreements, but a typical triggering
         | condition might be "a better-than-even chance of success in the
         | next two years.""
        
       | eigenvalue wrote:
       | Glad to see Ilya is back in a position to contribute to advancing
       | AI. I wonder how they are going to manage to pay the kinds of
       | compensation packages that truly gifted AI researchers can make
       | now from other companies that are more commercially oriented.
       | Perhaps they can find people who are ideologically driven and/or
       | are already financially independent. It's also hard to see how
       | they will be able to access enough compute now that others are
       | spending many billions to get huge new GPU data centers. You sort
       | of need at least the promise/hope of future revenue in a
        | reasonable time frame to marshal the kinds of resources it takes
       | to really compete today with big AI super labs.
        
         | imbusy111 wrote:
         | Last I checked the researcher salaries haven't even reached
         | software engineer levels.
        
           | shadow28 wrote:
           | The kind of AI researchers being discussed here likely make
           | an order of magnitude more than run of the mill "software
           | engineers".
        
             | imbusy111 wrote:
             | You're comparing top names with run of the mill engineers
             | maybe, which isn't fair.
             | 
             | And maybe you need to discover talent rather than buy
             | talent from the previous generation.
        
               | shadow28 wrote:
               | AI researchers at top firms make significantly more than
               | software engineers at the same firms though (granted that
               | the difference is likely not an order of magnitude in
               | this case though).
        
             | Q6T46nT668w6i3m wrote:
             | Unless you know something I don't, that's not the case. It
             | also makes sense, engineers are far more portable and
             | scarcity isn't an issue (many ML PhDs find engineering
             | positions).
        
           | dbish wrote:
           | That is incredibly untrue and has been for years in the AI/ML
           | space at many startups and at Amazon, Google, Facebook, etc.
           | Good ML researchers have been making a good amount more for a
           | while (source: I've hired both and been involved in leveling
           | and pay discussions for years)
        
         | esafak wrote:
         | I think they will easily find enough capable altruistic people
         | for this mission.
        
           | EncomLab wrote:
           | I mean SBF was into Altruism - look how that turned out....
        
             | esafak wrote:
             | So what? He was a phony. And I'm not talking about the
             | Effective Altruism movement.
        
             | whimsicalism wrote:
             | and that soured you on altruism as a concept??
             | 
             | i find the way people reason nowadays baffling
        
             | null0pointer wrote:
             | Not really. He was into altruism insofar as it acted as
             | moral licensing for him.
        
           | richie-guix wrote:
           | Not sure if spelling mistake.
        
         | insane_dreamer wrote:
         | > Perhaps they can find people who are ideologically driven
         | 
         | given the nature of their mission, this shouldn't be too
         | terribly difficult; many gifted researchers do not go to the
         | highest bidder
        
         | vasco wrote:
          | At the end game, a "non-safe" superintelligence seems easier to
         | create, so like any other technology, some people will create
         | it (even if just because they can't make it safe). And in a
         | world with multiple superintelligent agents, how can the safe
         | ones "win"? It seems like a safe AI is at inherent disadvantage
         | for survival.
        
           | arbuge wrote:
           | The current intelligences of the world (us) have organized
           | their civilization in a way that the conforming members of
           | society are the norm and criminals the outcasts. Certainly
           | not a perfect system, but something along those lines for the
           | most part.
           | 
           | I like to think AGIs will decide to do that too.
        
             | Filligree wrote:
              | They may well; the problem is ensuring that humanity also
             | survives.
        
               | TwoCent wrote:
               | The example from our environment suggests that the apex
               | intelligences in the environment treat all other
               | intelligent agents in only a few ways:
               | 
                | 1. Pests to eliminate
                | 2. Benign neglect
                | 3. Workers
                | 4. Pets
                | 5. Food
               | 
               | That suggests that there are scenarios under which we
               | survive. I'm not sure we'd like any of them, though
               | "benign neglect" might be the best of a bad lot.
        
             | insane_dreamer wrote:
             | I disagree that civilization is organized along the lines
             | of conforming and criminals. Rather, I would argue that the
             | current intelligences of the world have primarily organized
             | civilization in such a way that a small percentage of its
             | members control the vast majority of all human resources,
             | and the bottom 50% control almost nothing[0]
             | 
             | I would hope that AGI would prioritize humanity itself, but
             | since it's likely to be created and/or controlled by a
             | subset of that same very small percentage of humans, I'm
             | not hopeful.
             | 
             | [0] https://en.wikipedia.org/wiki/Wealth_inequality_in_the_
             | Unite...
        
             | soulofmischief wrote:
             | It's a beautiful system, wherein "criminality" can be used
             | to label and control any and all persons who disagree with
             | the whim of the incumbent class.
             | 
             | Perhaps this isn't a system we should be trying to emulate
             | with a technology that promises to free us of our current
             | inefficiencies or miseries.
        
             | vundercind wrote:
             | Our current meatware AGIs (corporations) are lawless as
             | fuck and have effectively no ethics at all, which doesn't
             | bode well.
        
         | Q6T46nT668w6i3m wrote:
         | Academic compensation is different than what you'd find
         | elsewhere on Hacker News. Likewise, academic performance is
         | evaluated differently than what you'd expect as a software
         | engineer. Ultimately, everyone cares about scientific impact so
         | academic compensation relies on name and recognition far more
         | than money. Personally, I care about the performance of the
         | researchers (i.e., their publications), the institution's
         | larger research program (and their resources), the
         | institution's commitment to my research (e.g., fellowships and
         | tenure). I want to do science for my entire career so I
         | prioritize longevity rather than a quick buck.
         | 
         | I'll add, the lack of compute resources was a far worse problem
         | early in the deep learning research boom, but the market has
         | adjusted and most researchers are able to be productive with
         | existing compute infrastructure.
        
           | eigenvalue wrote:
           | But wouldn't the focus on "safety first" sort of preclude
           | them from giving their researchers the unfettered right to
           | publish their work however and whenever they see fit? Isn't
           | the idea to basically try to solve the problems in secret and
           | only release things when they have high confidence in the
           | safety properties?
           | 
           | If I were a researcher, I think I'd care more about ensuring
           | that I get credit for any important theoretical discoveries I
           | make. This is something that LeCun is constantly stressing
           | and I think people underestimate this drive. Of course, there
           | might be enough researchers today who are sufficiently scared
           | of bad AI safety outcomes that they're willing to subordinate
           | their own ego and professional drive to the "greater good" of
           | society (at least in their own mind).
        
             | FeepingCreature wrote:
              | If you're working on _superintelligence_ I don't think
             | you'd be worried about not getting credit due to a lack of
             | publications, of all things. If it works, it's the sort of
             | thing that gets you in the history books.
        
               | eigenvalue wrote:
               | Not sure about that. It might get _Ilya_ in the history
               | books, and maybe some of the other high profile people he
                | recruits early on, but a junior researcher/developer who
               | makes a high impact contribution could easily get
               | overlooked. Whereas if that person can have their name as
               | lead author on a published paper, it makes it much easier
               | to measure individual contributions.
        
               | FeepingCreature wrote:
               | There is a human cognitive limit to the detail in which
               | we can analyze and understand history.
               | 
               | This limit, just like our population count, will not
               | outlast the singularity. I did the math a while back, and
               | at the limit of available energy, the universe has
               | comfortable room for something like 10^42 humans. Every
               | single one of those humans will owe their existence to
               | our civilization in general and the Superintelligence
               | team in specific. There'll be enough fame to go around.
        
         | paxys wrote:
         | They will be able to pay their researchers the same way every
         | other startup in the space is doing it - by raising an absurd
         | amount of money.
        
         | PheonixPharts wrote:
         | > compensation packages that truly gifted AI researchers can
         | make now
         | 
         | I guess it depends on your definition of "truly gifted" but,
         | working in this space, I've found that there is very little
         | correlation between comp and quality of AI research. There's
         | absolutely some brilliant people working for big names and
          | making serious money; there are also plenty of really talented
         | people working for smaller startups doing incredible work but
         | getting paid less, academics making very little, and even the
         | occasional "hobbyist" making nothing and churning out great
         | work while hiding behind an anime girl avatar.
         | 
         | OpenAI clearly has some talented people, but there's also a
         | bunch of the typical "TC optimization" crowd in there these
          | days. The fact that so many were willing to resign with sama if
          | necessary appears to be largely because they were more
          | concerned with losing their nice compensation packages than
          | with doing top-tier research.
        
           | 015a wrote:
            | Definitely true of even normal software engineering. My
            | experience has been the opposite of expectations: TC-creep
            | has infected the industry to an irreparable degree, and the
            | most talented people I've ever worked around or with are in
            | boring, medium-sized enterprises in the midwest US or
            | Australia. You'll probably never hear of them, and every big
            | tech company would absolutely love to hire them but just
            | can't figure out an interview process that tells them apart
            | from the TC grifters.
           | 
           | TC is actually totally uncorrelated with the quality of
           | talent you can hire, beyond some low number that pretty much
           | any funded startup could pay. Businesses hate to hear this,
           | because money is easy to turn the dial up on; but most have
           | no idea how to turn the dial up on what really matters to
           | high talent individuals. Fortunately, I doubt Ilya will have
           | any problem with that.
        
             | fromMars wrote:
             | I find this hard to believe having worked in multiple
             | enterprises and in the FAANG world.
             | 
             | In my anecdotal experience, I can only think of one or two
             | examples of someone from the enterprise world who I would
             | consider outstanding.
             | 
             | The overall quality of engineers is much higher at the
             | FAANG companies.
        
               | null0pointer wrote:
               | I have also worked in multiple different sized companies,
               | including FAANG, and multiple countries. My assessment is
               | that FAANGs tend to select for generally intelligent
               | people who can learn quickly and adapt to new situations
               | easily but who nowadays tend to be passionless and
               | indifferent to anything but money and prestige.
               | Personally I think passion is the differentiator here,
               | rather than talent, when it comes to doing a good job.
               | Passion means caring about your work and its impact
               | beyond what it means for your own career advancement. It
               | means caring about building the best possible products
               | where "best" is defined as delivering the most value for
               | your users rather than the most value for the company.
               | The question is whether big tech is unable to select for
               | passion or whether there are simply not enough passionate
               | people to hire when operating at FAANG scale. Most likely
               | it's the latter.
               | 
               | So I guess I agree with both you and the parent comment
               | somewhat in that in general the bar is higher at FAANGs
               | but at the same time I have multiple former colleagues
               | from smaller companies who I consider to be excellent,
               | passionate engineers but who cannot be lured to big tech
               | by any amount of money or prestige (I've tried). While
               | many passionless "arbitrary metric optimizers" happily
               | join FAANGs and do whatever needs to be done to climb the
               | ladder without a second thought.
        
             | whimsicalism wrote:
             | perfect sort of thing to say to get lots of upvotes, but
             | absolutely false in my experience at both enterprise and
             | bigtech
        
           | kccqzy wrote:
           | Two people I knew recently left Google to join OpenAI. They
           | were solid L5 engineers on the verge of being promoted to L6,
           | and their TC is now $900k. And they are not even doing AI
           | research, just general backend infra. You don't need to be
           | gifted, just good. And of course I can't really fault them
           | for joining a company for the purpose of optimizing TC.
        
             | ilrwbwrkhv wrote:
             | Google itself is now filled with TC optimizing folks, just
             | one level lower than the ones at Open AI.
        
             | iknownthing wrote:
             | Seems like you need to have been working at a place like
             | Google too
        
             | almostgotcaught wrote:
             | > their TC is now $900k.
             | 
             | Everyone knows that openai TC is heavily weighted by
             | ~~RSUs~~ options that themselves are heavily weighted by
             | hopes and dreams.
        
               | doktorhladnjak wrote:
               | When I looked into it and talked to some hiring managers,
               | the big names were offering cash comp similar to total
               | comp for big tech, with stock (sometimes complicated
               | arrangements that were not options or RSUs) on top of
               | that. I'm talking $400k cash for a senior engineer with
               | equity on top.
        
               | almostgotcaught wrote:
               | > big names
               | 
               | Big names where? Inside of openai? What does that even
               | mean?
               | 
               | The only place you can get 400k cash base for senior is
               | quantfi
        
               | whimsicalism wrote:
               | > The only place you can get 400k cash base for senior is
               | quantfi
               | 
               | confident yet wrong
               | 
               | not only can you get that much at AI companies, netflix
               | will also pay that much all cash - and that's fully
               | public info
        
               | almostgotcaught wrote:
               | > not only can you get that much at AI companies
               | 
               | Please show not tell
               | 
               | > netflix will also pay that much all cash
               | 
               | Okay that's true
        
               | vlovich123 wrote:
               | Netflix is just cash, no stock. That's different from
               | 400k stock + cash.
        
               | whimsicalism wrote:
               | > The only place you can get 400k cash base for senior is
               | quantfi
               | 
               | That statement is false for the reasons I said. I'm not
               | sure why your point matters to what I'm saying
        
               | HeatrayEnjoyer wrote:
               | Everything OpenAI does is about weights.
        
               | DaiPlusPlus wrote:
               | bro does their ceo even lift?
        
               | almost_usual wrote:
               | You mean PPUs or smoke and mirrors compensation. RSUs are
               | actually worth something.
        
               | whimsicalism wrote:
               | why are PPUs "smoke and mirrors" and RSUs "worth
               | something"?
               | 
               | i suspect people commenting this don't have a clue how
               | PPU compensation actually works
        
             | raydev wrote:
             | > their TC is now $900k
             | 
             | As a community we should stop throwing numbers around like
             | this when more than half of this number is speculative. You
             | shouldn't be able to count it as "total compensation"
             | unless you are compensated.
        
               | nojvek wrote:
                | Word around town is OpenAI folks are heavily selling
                | shares in secondaries worth 100s of millions.
                | 
                | The number is only as real as what someone else is
                | willing to pay for them, and plenty of VCs are willing.
        
               | michaelt wrote:
               | Word in town is [1] openai "plans" to let employees sell
               | "some" equity through a "tender process" which ex-
               | employees are excluded from; and also that openai can
               | "claw back" vested equity, and has used the threat of
               | doing so in the past to pressure people into signing
               | sketchy legal documents.
               | 
               | [1] https://www.cnbc.com/2024/06/11/openai-insider-stock-
               | sales-a...
        
               | comp_throw7 wrote:
               | I would definitely discount OpenAI equity compared to
               | even other private AI labs (i.e. Anthropic) given the
               | shenanigans, but they have in fact held 3 tender offers
               | and former employees were not, as far as we know,
               | excluded (though they may have been limited to selling
               | $2m worth of equity, rather than $10m).
        
               | JumpCrisscross wrote:
               | > _Word on town is OpenAI folks heavily selling shares in
               | secondaries in 100s of millions_
               | 
               | OpenAI heavily restricts the selling of its "shares,"
               | which tends to come with management picking the winners
               | and losers among its ESOs. Heavily, heavily discount an
                | asset you cannot liquidate without someone's permission,
               | particularly if that person is your employer.
        
               | whimsicalism wrote:
               | don't comment if you don't know what you're talking
               | about, they have tender offers
        
             | whimsicalism wrote:
             | the thing about mentioning compensation numbers on HN is
             | you will get tons of pissy/ressentiment-y replies
        
           | a-dub wrote:
           | "...even the occasional "hobbyist" making nothing and
           | churning out great work while hiding behind an anime girl
           | avatar."
           | 
           | the people i often have the most respect for.
        
           | auggierose wrote:
           | TC optimization being tail call optimization?
        
             | klyrs wrote:
             | You don't get to that level by thinking about _code_...
        
             | lbotos wrote:
             | Could be sarcasm, but I'll engage in good faith: Total
             | Compensation
        
             | samatman wrote:
             | Nope, that's a misnomer, it's tail-call elimination. You
             | can't call it an optimization if it's essential for proper
             | functioning of the program.
             | 
             | (they mean total compensation)
        
           | torginus wrote:
           | Half the advancements around Stable Diffusion (Controlnet
           | etc.) came from internet randoms wanting better anime waifus
        
             | whimsicalism wrote:
             | advancements around parameter efficient fine tuning came
             | from internet randoms because big cos don't care about PEFT
        
               | Der_Einzige wrote:
               | ... Sort of?
               | 
               | HF is sort of big now. Stanford is well funded and they
               | did PyReft.
        
               | whimsicalism wrote:
               | HF is not very big, Stanford doesn't have lots of
               | compute.
               | 
               | Neither of these are even remotely big labs like what I'm
               | discussing
        
         | ldjkfkdsjnv wrote:
         | Are you seriously asking how the most talented AI researcher of
         | the last decade will be able to recruit other researchers? Ilya
         | saw the potential of deep learning way before other machine
         | learning academics.
        
           | dbish wrote:
           | Sorry, are you attributing all of deep learning research to
           | Ilya? The most talented AI researcher of the last decade?
        
             | ldjkfkdsjnv wrote:
             | Not attributing all of it
        
         | aresant wrote:
          | My guess is they will work on a protocol to drive safety, with
          | the view that every material player will use it (or be
          | regulated into using it), which could lead to a very robust
          | business model.
          | 
          | I assume that OpenAI and others will support this effort (the
          | comp, training, etc.), and they will be very well positioned
          | to offer comparable $$$ packages, leverage resources, etc.
        
         | neural_thing wrote:
         | Daniel Gross (with his partner Nat Friedman) invested $100M
         | into Magic alone.
         | 
         | I don't think SSI will struggle to raise money.
        
         | kmacdough wrote:
         | Generally, the mindset that makes the best engineers is an
         | obsession with solving hard problems. Anecdotally, there's not
         | a lot of overlap between the best engineers I know and the best
          | paid engineers I know. The best engineers I know are too
          | obsessed with solving problems to be sidetracked by the salary
          | game. The best-paid engineers I know are great engineers, but
          | they spend a large amount of time playing the salary game,
          | bouncing between companies, and are always doing the work that
          | looks best on a resume, not the best work they know how to do.
        
       | mikemitchelldev wrote:
       | Do you find the name "Safe Superintelligence" to be an instance
       | of virtue signalling? Why or why not?
        
         | nemo44x wrote:
          | Yes, they might as well have named it "Woke AI". It implies that
         | other AIs aren't safe or something and that they and they alone
         | know what's best. Sounds religious, or from the same place
         | religious righteousness comes from, if anything. They believe
         | they are the "good guys" in their world view or something.
         | 
         | I don't know if any of that is true about them but their name
          | and statement invoke this.
        
           | viking123 wrote:
           | AI safety is a fraud on similar level as NFTs. Massive virtue
           | signalling.
        
       | shnkr wrote:
       | a tricky situation now for oai engineering to decide between good
       | and evil.
        
       | choxi wrote:
       | based on the naming conventions established by OpenAI and
       | StabilityAI, this may be the most dangerous AI company yet
        
         | kirth_gersen wrote:
         | Wow. Read my mind. I was just thinking, "I hope this name
         | doesn't age poorly and become terribly ironic..."
        
         | malermeister wrote:
         | "Definitely-Won't-Go-Full-Skynet-AI" was another name in
         | consideration.
        
         | fiatpandas wrote:
         | Ah yes, the AI Brand Law: the meaning of adjectives in your
         | company name will invert within a few years of launch.
        
         | lawn wrote:
         | Being powerful is like being a lady. If you have to tell people
         | you are, you aren't. - Margaret Thatcher
        
           | shafyy wrote:
           | Or: "Any man who must say, "I am the King", is no true king."
           | - Tywin Lannister
        
             | righthand wrote:
             | "Inspiration from someone who doesn't exist and therefor
             | accomplished nothing." - Ficto the advice giver
        
             | GeorgeTirebiter wrote:
             | No True Scotsman fallacy? https://en.wikipedia.org/wiki/No_
             | true_Scotsman?useskin=vecto...
        
         | Zacharias030 wrote:
         | this.
        
         | aylmao wrote:
         | thankfully, based on said naming conventions, it will be the
         | dumbest too though
        
       | AbstractH24 wrote:
       | Is OpenAI on a path to becoming the MySpace of generative AI?
       | 
       | Either the Facebook of this era has yet to present itself or it's
       | Alphabet/DeepMind
        
       | aridiculous wrote:
       | Surprising to see Gross involved. He seems to be pretty baked
       | into the YC world, which usually means "very commercially
       | oriented".
        
         | notresidenter wrote:
         | His latest project (https://pioneer.app/) recently (this year I
          | think) got shut down. I guess he's pivoting.
        
         | AlanYx wrote:
         | It does say they have a business model ("our business model
         | means safety, security, and progress are all insulated from
         | short-term commercial pressures"). I imagine it's some kind of
         | patron model that requires a long-term commitment.
        
       | aresant wrote:
       | Prediction - the business model becomes an external protocol -
       | similar to SSL - that the litany of AI companies working to
       | achieve AGI will leverage (or be regulated to use)
       | 
       | From my hobbyist knowledge of LLMs and compute this is going to
       | be a terrifically complicated problem, but barring a defined
       | protocol & standard there's no hope that "safety" is going to be
       | executed as a product layer given all the different approaches
       | 
       | Ilya seems like he has both the credibility and engineering chops
        | to be in a position to execute this, and I wouldn't be surprised
       | to see OpenAI / MSFT / and other players be early investors /
       | customers / supporters
        
         | cherioo wrote:
         | I like your idea. But on the other hand, training an AGI, and
         | then having a layer on top "aligning" the AGI sounds super
            | dystopian and a good plot for a movie.
        
           | exe34 wrote:
           | the aligning means it should do what the board of directors
           | wants, not what's good for society.
        
             | Nasrudith wrote:
             | Poisoning Socrates was done because it was "good for
             | society". I'm frankly even more suspicious of "good for
             | society" than the average untrustworthy board of directors.
        
               | exe34 wrote:
               | seriously? you're more worried about what your elected
               | officials might legislate than what a board of directors
               | whose job is to make profits go brrr at all costs,
               | including poisoning the environment, exploiting people
               | and avoiding taxes?
        
       | ofou wrote:
       | At this point, all the computing power is concentrated among
       | various companies such as Google, Facebook, Microsoft, Amazon,
       | Tesla, etc.
       | 
       | It seems to me it would be much safer and more intelligent to
       | create a massive model and distribute the benefits among
       | everyone. Why not use a P2P approach?
        
         | nvy wrote:
         | In my area, internet and energy are insanely expensive and that
         | means I'm not at all willing to share my precious bandwidth or
         | compute just to subsidize someone generating Rule 34 porn of
         | their favorite anime character.
         | 
         | I don't seed torrents for the same reason. If I lived in South
         | Korea or somewhere that bandwidth was dirt cheap, then maybe.
        
           | ofou wrote:
           | There is a way to achieve load balancing, safety, and
           | distribution effectively. The models used by Airbnb, Uber,
           | and Spotify have proven to be generally successful. Peer-to-
           | peer (P2P) technology is the future; even in China, people
           | are streaming videos using this technology, and it works
           | seamlessly. I envision a future where everyone joins the AI
           | revolution with an iPhone, with both training and inference
           | distributed in a P2P manner. I wonder why no one has done
           | this yet.
        
         | wizzwizz4 wrote:
         | Backprop is neither commutative nor associative.
        
           | ofou wrote:
           | What do you mean? There's a bunch of proof-of-concepts such
           | as Hydra, peer-nnet, Learnae, and so on.
        
             | wizzwizz4 wrote:
             | The Wikipedia article goes into more detail.
             | https://en.wikipedia.org/wiki/Federated_learning
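              | 
              | The usual workaround described there is federated
              | averaging: peers train locally and only averaged weight
              | updates get merged, since individual gradient steps can't
              | be reordered freely. A toy sketch (hypothetical peer-side
              | local_update; not a working P2P system):
              | 
              |     import numpy as np
              | 
              |     def fedavg_round(global_weights, peers):
              |         # each peer runs a local training step on its own
              |         # data (hypothetical local_update), then the
              |         # resulting weight vectors are averaged
              |         updates = [p.local_update(global_weights)
              |                    for p in peers]
              |         return np.mean(updates, axis=0)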
        
             | whimsicalism wrote:
             | just because it has a PoC or a wiki article doesn't mean it
             | actually works
        
       | artninja1988 wrote:
       | > aiming to create a safe, powerful artificial intelligence
       | system within a pure research organization that has no near-term
       | intention of selling AI products or services.
       | 
       | Who is going to fund such a venture based on blind faith alone?
       | Especially if you believe in the scaling-hypothesis type of AI
       | research, where you spend billions on compute, this seems bound
       | to fail once the AI hype dies down and raising money becomes a
       | bit harder.
        
       | ffhhj wrote:
       | > Building safe superintelligence (SSI) is the most important
       | technical problem of our time.
       | 
       | Isn't this a philosophical/psychological problem instead?
       | Technically it's solved: just censor any response that doesn't
       | match a list of curated categories until a technician whitelists
       | it. But the technician could be confronted with a compelling
       | "suicide song":
       | 
       | https://en.wikipedia.org/wiki/Gloomy_Sunday
        
       | TIPSIO wrote:
       | Would you rather your future overlords be called "The Safe
       | Company" or "The Open Company"?
        
         | emestifs wrote:
         | Galaxy Brain: TransparentAI
        
           | TIPSIO wrote:
            | Maybe I'm just old and grumpy, but I can't shake the
            | feeling that the real danger of AGI/ASI is centralization
            | of its power (if it is ever achieved).
            | 
            | Everyone is just fiending for their own version of it.
        
             | emestifs wrote:
             | You're not old or grumpy, you're just stating the quiet
             | part out loud. It's the same game, but now with 100% more
             | AI.
        
       | ctxc wrote:
       | What exactly is "safe" in this context, can someone give me an
       | eli5?
       | 
       | If it's "taking over the world" safe, does it not mean that this
       | is a part of AGI?
        
         | kouteiheika wrote:
         | > What exactly is "safe" in this context, can someone give me
         | an eli5?
         | 
         | In practice it essentially means the same thing as for most
         | other AI companies - censored, restricted, and developed in
         | secret so that "bad" people can't use it for "unsafe" things.
        
           | novia wrote:
           | The people who advocate censorship of AGIs annoy the hell out
           | of the AI safety people who actually care about existential
           | risk.
        
           | whimsicalism wrote:
           | people view these as test cases for the much harder x-risk
           | safety problem
        
         | insane_dreamer wrote:
         | Good Q. My understanding of "safe" in this context is a
         | superintelligence that cannot escape its bounds. But that's not
         | to say that's Ilya's definition.
        
         | FeepingCreature wrote:
         | It's "not killing every human" safe.
        
       | ReleaseCandidat wrote:
       | What could go wrong with a name tied to wargames and D&D?
       | 
       | https://en.wikipedia.org/wiki/Strategic_Simulations
        
       | paul7986 wrote:
       | I bet Google or Apple or Amazon will become their partners like
       | MS is to Open AI.
        
       | gibsonf1 wrote:
       | Given that GenAI is a statistical approach from which
       | intelligence does not emerge, as ample experience proves, does
       | this new company plan to take a more human approach to
       | simulating intelligence instead?
        
         | mdp2021 wrote:
         | > _more human approach to simulating intelligence_
         | 
         | What about a more rational approach to implementing it
         | instead?
         | 
         | (Which was not excluded from past plans: they simply,
         | admittedly, did not know the formula, and explored emergence.
         | But the next efforts will have to go in the direction of
         | attempting actual intelligence.)
        
         | alextheparrot wrote:
         | Glibly, I'd also love your definition of the education system
         | writ large.
        
         | localfirst wrote:
         | We need new math to do what you are thinking of. Highly
         | probable word slot machine is the best we can do right now.
        
         | ilrwbwrkhv wrote:
         | This. As I wrote in another comment, people fall for marketing
         | gimmicks easily.
        
         | jimbokun wrote:
         | > Given that GenAI is a statistical approach from which
         | intelligence does not emerge, as ample experience proves
         | 
         | When was this proven?
        
         | sovietswag wrote:
         | I sometimes wonder if statistics are like a pane of glass that
         | allow the light of god (the true nature of things) to pass
         | through, while logic/rationalism is the hubris of man playing
         | god. I.e. statistics allow us to access/use the truth even if
         | we don't understand why it's so, while rationalism / rule-based
         | methods are often a folly because our understanding is not good
         | enough to construct them.
        
         | TeeWEE wrote:
         | Lossy compression of all world information results in super
         | intelligence....
         | 
         | That's the whole eureka thing to understand... To compress
         | well, you need to understand. To predict the next word, you
         | need to understand the world.
         | 
         | Ilya explains it here: https://youtu.be/GI4Tpi48DlA?t=1053
        
           | TeeWEE wrote:
           | Also, to support this: biological systems are often very
           | simple systems, but repeated a lot... The brain is a lot
           | of neurons... Apparently having a neural net (even a small
           | one) predicts the future better... And that increased
           | survival...
           | 
           | To survive is to predict the future better than the other
           | animal. Survival of the fittest.
        
       | tsunamifury wrote:
       | The problem is that Ilya's behavior at times came across as
       | very unhinged and cult-like. And while his passions are clear
       | and maybe good, his execution often makes him come off as
       | someone you wouldn't want in charge of safety.
        
       | instagraham wrote:
       | > Our singular focus means no distraction by management overhead
       | or product cycles, and our business model means safety, security,
       | and progress are all insulated from short-term commercial
       | pressures.
       | 
       | well, that's some concrete insight into whatever happened at
       | OpenAI. kinda obvious though in hindsight I guess.
        
       | insane_dreamer wrote:
       | I understand the concern that a "superintelligence" will emerge
       | that will escape its bounds and threaten humanity. That is a
       | risk.
       | 
       | My bigger, and more pressing worry, is that a "superintelligence"
       | will emerge that does not escape its bounds, and the question
       | will be which humans control it. Look no further than history to
       | see what happens when humans acquire great power. The "cold war"
       | nuclear arms race, which brought the world to the brink of (at
       | least partial) annihilation, is a good recent example.
       | 
       | Quis custodiet ipsos custodes? -- That is my biggest concern.
       | 
       | Update: I'm not as worried about Ilya et al. as I am about
       | commercial companies (including the formerly "open" OpenAI)
       | discovering AGI.
        
         | mark_l_watson wrote:
         | +1 truth.
         | 
         | The problem is not just governments; I am also concerned
         | about large organized crime organizations and corporations.
         | 
         | I think I am on the losing side here, but my hopes are all for
         | open source, open weights, and effective AI assistants that
         | make peoples' jobs easier and lives better. I would also like
         | to see more effort shifted from LLMs back to RL, DL, and
         | research on new ideas and approaches.
        
         | ilrwbwrkhv wrote:
         | There is no "superintelligence" or "AGI".
         | 
         | People are falling for marketing gimmicks.
         | 
         | These models will remain in the word-vector-similarity phase
         | forever. Until we understand consciousness, we will not crack
         | AGI, and then it won't take brute-forcing large swaths of
         | data, but tiny amounts.
         | 
         | So there is nothing to worry about. These "apps" might be as
         | popular as Excel, but will go no further.
        
           | WXLCKNO wrote:
           | No one is saying there is. Just that we've reached some big
           | milestones recently which could help get us there even if
           | it's only by increased investment in AI as a whole, rather
           | than the current models being part of a larger AGI.
        
           | mdp2021 wrote:
           | > _understand consciousness_
           | 
           | We do not call Intelligence something related to
           | consciousness. Being able to reason well suffices.
        
           | drowntoge wrote:
           | Agreed. The AI of our day (the transformer + huge amounts of
           | questionably acquired data + significant cloud computing
           | power) has the spotlight it has because it is readily
           | commoditized and massively profitable, not because it is an
           | amazing scientific breakthrough or a significant milestone
           | toward AGI, superintelligence, the benevolent Skynet or
           | whatever.
           | 
           | The association with higher AI goals is merely a mixture of
           | pure marketing and LLM company executives getting high on
           | their own supply.
        
             | antihipocrat wrote:
             | It's a massive attractor of investment funding. Is it
             | proven to be massively profitable?
        
           | insane_dreamer wrote:
           | I don't think the AI has to be "sentient" in order to be a
           | threat.
        
           | johnthewise wrote:
            | If you described ChatGPT to me 10 years ago, I would have
            | said it was AGI.
        
         | gavin_gee wrote:
         | This.
         | 
         | Every nation-state will be in the game. Private enterprise will
         | be in the game. Bitcoin-funded individuals will be in the game.
         | Criminal enterprises will be in the game.
         | 
         | How does one company building a safe version stop that?
         | 
         | If I have access to hardware and data, how does a safety
         | layer get enforced? Regulations are for organizations that
         | care about public perception, the law, and stock prices.
         | Criminals and nation-states are not affected by these things.
         | 
         | It seems to me enforcement is likely only possible at the
         | hardware layer, which means the safety mechanisms need to be
         | enforced throughout the hardware supply chain for training or
         | inference. You don't think the Chinese government or US
         | government will ignore this if it's in their interest?
        
           | whimsicalism wrote:
           | I think the honest view (and you can scoff at it) is that
           | winning the SI race basically wins you the enforcement race
           | for free
        
         | the8472 wrote:
         | From a human welfare perspective this seems like worrying
         | that a killer asteroid will make the 1% even richer because
         | it contains gold, if it can be safely captured. I would not
         | phrase that as a "bigger and more pressing" worry if we're
         | not even sure we can do anything about the killer asteroid at
         | all.
        
         | devsda wrote:
         | All the current hype about AGI feels as if we are in a Civ
         | game where we are on the verge of researching and unlocking
         | an AI tech tree that gives the player a huge chance at a
         | "tech victory" (whatever that means in the real world). I
         | doubt it will turn out that way.
         | 
         | It will take a while, and in the meantime I think we need one
         | of those handy "are we xyz yet?" pages, like the ones that
         | track the Rust language's progress on several fronts, but for
         | AGI.
        
         | hackerlight wrote:
         | China can not win this race and I hate that this comment is
         | going to be controversial among the circle of people that need
         | to understand this the most. It is damn frightening that an
         | authoritarian country is so close to number one in the race to
         | the most powerful technology humanity has invented, and I
         | resent people who push for open source AI for this reason
         | alone. I don't want to live in a world where the first
         | superintelligence is controlled by an entity that is threatened
         | by the very idea of democracy.
        
         | whimsicalism wrote:
         | i don't fully agree, but i do agree that this is the better
         | narrative for selling people on the dangers of AI.
         | 
         | don't talk about escape, talk about harmful actors - even if in
         | reality it is both to be worried about
        
         | m3kw9 wrote:
         | There will always be a factor of time in being able to
         | utilize superintelligence to do your bidding, and there is a
         | big spectrum of things that can be achieved; it always starts
         | small. The imagination is lazy when thinking about all the
         | steps, in-betweens, and scenarios. By the time
         | superintelligence is found and used, there will be competing
         | near-superintelligences, as all forms of cutting-edge models
         | are likely to be commercial at first, because that is where
         | most scientific activity is. Things are very unlikely to go
         | Skynet all of a sudden at first, because the humans at the
         | controls are not that stupid; otherwise nuclear war would
         | have killed us all by now, and it's been nearly 80 years
         | since its invention.
        
         | m3kw9 wrote:
         | If robots (hardware, self-assembling factories, resource
         | gathering, etc.) are not involved, this isn't likely a
         | problem. You will know when these things form, and it will be
         | crystal clear; but just having the model won't do much when
         | hardware is what really kills right now.
        
       | fnordpiglet wrote:
       | """We are assembling a lean, cracked team of the world's best
       | engineers and researchers dedicated to focusing on SSI and
       | nothing else."""
       | 
       | Cracked indeed
        
         | hbarka wrote:
         | The phrase 'crack team' has military origins.
        
           | AnimalMuppet wrote:
           | "Cracked team" has rather different connotations.
        
       | nashashmi wrote:
       | If everyone is creating AGI, another AI company will just create
       | another AGI. There is no such thing as SAFE AGI.
       | 
       | I feel like this 'safe' word is another word for censorship,
       | the way Google search results have become censored.
        
       | atleastoptimal wrote:
       | Probably the best thing he could do.
        
       | ebilgenius wrote:
       | Incredible website design, I hope they keep the theme. With so
       | many AI startups going with advanced WebGL/ThreeJS wacky
       | overwhelming animated website designs, the simplicity here is a
       | stark contrast.
        
         | blixt wrote:
         | Probably Daniel Gross picked it up from Nat Friedman?
         | 
         | 1. Nat Friedman has this site: https://nat.org/
         | 
         | 2. They made this together: https://nfdg.com/
         | 
         | 3. And then this: https://andromeda.ai/
         | 
         | 4. Now we have https://ssi.inc/
         | 
         | If you look at the (little) CSS in all of the above sites
         | you'll see there's what seems to be a copy/paste block. The Nat
         | and SSI sites even have the same "typo" indentation.
        
       | gnicholas wrote:
       | > _Our singular focus means no distraction by management overhead
       | or product cycles, and our business model means safety, security,
       | and progress are all insulated from short-term commercial
       | pressures._
       | 
       | Can someone explain how their singular focus means they won't
       | have product cycles or management overhead?
        
         | mike_d wrote:
         | Don't hire anyone who is a certified scrum master or has an MBA
         | and you tend to be able to get a lot done.
        
           | gnicholas wrote:
           | This would work for very small companies...but I'm not sure
           | how one can avoid product cycles forever, even without scrum
           | masters and the like. More to the point, how can you make a
           | good product without something approximating product cycles?
        
             | liamconnell wrote:
              | Jane Street did it for a long time. They are quite large
              | now and only recently started bringing in program
              | managers and the like.
        
               | doktorhladnjak wrote:
               | That's because their "products" are internal but used to
               | make all their revenue. They're not selling products to
               | customers in the traditional sense.
        
               | YetAnotherNick wrote:
               | The point is not exactly product cycles, but some way to
                | track progress. Jane Street also tracks progress, and for
               | many people it's the direct profit someone made for the
               | firm. For some it is improving engineering culture so
               | that other people can make better profits.
               | 
               | The problem with safety is that no one knows how to track
                | it, or even what they mean by it. Even if you ignore
                | tracking, wouldn't one unsafe AGI by one company in
                | the world nullify all their effort? Or the safe AI
                | would somehow need to take over the world, which is
                | super unsafe in itself.
        
           | richie-guix wrote:
           | That's not actually enough. You also very carefully need to
           | avoid the Blub Paradox.
           | 
           | https://www.youtube.com/watch?v=ieqsL5NkS6I
        
         | paxys wrote:
         | Product cycles - we need to launch feature X by arbitrary date
         | Y, and need to make compromises to do so.
         | 
         | Management overhead - product managers, project managers,
         | several layers of engineering managers, directors, VPs...all of
         | whom have their own dreams and agendas and conflicting
         | priorities.
         | 
         | A well funded pure research team can cut through all of this
         | and achieve a ton. If it is actually run that way, of course.
         | Management politics ultimately has a way of creeping into every
         | organization.
        
       | rafaelero wrote:
       | Oh god, one more Anthropic that thinks it's noble not to push
       | the frontier.
        
         | Dr_Birdbrain wrote:
         | But Anthropic produces very capable models?
        
           | rafaelero wrote:
           | But they say they will never produce a better model than what
           | is available in the market.
        
             | YetAnotherNick wrote:
             | Care for a citation?
        
             | tymscar wrote:
             | Do you have a source for that?
        
       | bongwater_OS wrote:
       | Remember when OpenAI was focusing on building "open" AI? This is
       | a cool mission statement, but it doesn't mean anything right
       | now. Everyone loves a minimalist HTML website and guarantees of
       | safety, but who knows what this is actually going to shake out
       | to be.
        
         | kumarm wrote:
         | Isn't Ilya out of OpenAI partly over the "Open" part being
         | left out of OpenAI?
        
           | Dr_Birdbrain wrote:
            | No, lol--Ilya liked ditching the "open" part; he was an
            | early advocate for closed-source. He left OpenAI because
            | he was concerned about safety and felt Sam was moving too
            | fast.
        
       | zb3 wrote:
       | All the safety freaks should join this and leave OpenAI alone.
        
       | dougb5 wrote:
       | > Building safe superintelligence (SSI) is the most important
       | technical problem of our time.
       | 
       | Call me a cranky old man but the superlatives in these sorts of
       | announcements really annoy me. I want to ask: Have you surveyed
       | every problem in the world? Are you aware of how much suffering
       | there is outside of your office and how unresponsive it has been
       | so far to improvements in artificial intelligence? Are you really
       | saying that there is a nice total-ordering of problems by
       | importance to the world, and that the one you're interested in
       | also happens to be at the top?
        
         | maximinus_thrax wrote:
         | > the superlatives in these sorts of announcements really annoy
         | me
         | 
         | I've noticed this as well and they're making me wear my tinfoil
         | hat more often than usual. I feel as if all of this (ALL OF IT)
         | is just a large-scale distributed PR exercise to maintain the
         | AI hype.
        
         | TaupeRanger wrote:
         | Trying to create "safe superintelligence" before creating
         | anything remotely resembling or approaching "superintelligence"
         | is like trying to create "safe Dyson sphere energy transport"
         | before creating a Dyson Sphere. And the hubris is just a
         | cringe-inducing bonus.
        
           | deegles wrote:
           | 'Fearing a rise of killer robots is like worrying about
           | overpopulation on Mars.' - Andrew Ng
        
             | yowlingcat wrote:
             | This might have to bump out "AI is no match for HI (human
             | idiocy)" as the pithy grumpy old man quote I trot out when
             | I hear irrational exuberance about AI these days.
        
             | brezelgoring wrote:
             | Well, to steelman the 'overpopulation on Mars' argument a
             | bit, feeding 4 colonists and feeding 8 is a 100% increase
             | in food expenditure, which may or may not be possible over
              | there. It might be curtains for a few of them if it comes
             | to that.
        
               | thelittleone wrote:
                | I used to think I'd volunteer to go to Mars. But then
                | I love the ocean, forests, fresh air, animals... and
                | so on. So imagining myself in Mars' barren
                | environment, missing Earth's nature, feels downright
                | terrible, which in turn has taught me to take Earth's
                | nature less for granted.
                | 
                | I can only imagine waking up on day 5 in my tiny
                | Martian biohab realizing I'd made the wrong choice,
                | and the only ride back arrives in 8 months and will
                | take ~9 months to get back to Earth.
        
             | bugbuddy wrote:
              | At Mars' current carrying capacity, a single person
              | could be considered an overpopulation problem.
        
             | newzisforsukas wrote:
             | https://www.wired.com/brandlab/2015/05/andrew-ng-deep-
             | learni... (2015)
             | 
             | > What's the most valid reason that we should be worried
             | about destructive artificial intelligence?
             | 
             | > I think that hundreds of years from now if people invent
             | a technology that we haven't heard of yet, maybe a computer
             | could turn evil. But the future is so uncertain. I don't
             | know what's going to happen five years from now. The reason
             | I say that I don't worry about AI turning evil is the same
             | reason I don't worry about overpopulation on Mars. Hundreds
             | of years from now I hope we've colonized Mars. But we've
             | never set foot on the planet so how can we productively
             | worry about this problem now?
        
             | kmacdough wrote:
              | Sentient killer robots are not the risk most AI
              | researchers are worried about. The risk is what happens
              | as corporations give AI ever larger power over
              | significant infrastructure and marketing decisions.
              | 
              | Facebook is an example of AI in its current form already
              | doing massive societal damage. Its algorithms optimize
              | for "success metrics" with minimal regard for
              | consequences. What happens when these algorithms are
              | significantly more self-modifying? What if a marketing
              | campaign realizes a societal movement threatens its
              | success? Are we prepared to weather a propaganda
              | campaign that understands our impulses better than we
              | ever could?
        
             | in3d wrote:
             | Andrew Ng worked on facial recognition for a company with
             | deep ties to the Chinese Communist Party. He's the absolute
             | worst person to quote.
        
               | bigcoke wrote:
               | omg no, the CCP!
        
             | FranchuFranchu wrote:
             | Unfortunately, robots that kill people already exist. See:
             | semi-autonomous war drones
        
           | moralestapia wrote:
           | It would be akin to creating a "safe Dyson sphere", though;
           | that's all it is.
           | 
           | If your hypothetical Dyson sphere (WIP) has a big chance to
           | bring a lot of harm, why build it in the first place?
           | 
           | I think the whole safety proposal should be thought of from
           | that point of view. _" How do we make <thing> more beneficial
           | than detrimental for humans?"_
           | 
           | Congrats, Ilya. Eager to see what comes out of SSI.
        
           | TideAd wrote:
            | So, this is actually an aspect of superintelligence that
            | makes it way more dangerous than most people think: we
            | have no way to know if any given alignment technique works
            | for the N+1 generation of AIs.
            | 
            | It cuts down our ability to react, whenever the first
            | superintelligence is created, if we can only start solving
            | the problem _after it's already created_.
        
             | crazygringo wrote:
             | Fortunately, whenever you create a superintelligence, you
             | obviously have a choice as to whether you confine it to
             | inside a computer or whether you immediately hook it up to
             | mobile robots with arms and fine finger control. One of
             | these is obviously the far wiser choice.
             | 
              | As long as you can just _turn it off_ by cutting the
              | power, and you're not trying to put it inside of
              | self-powered self-replicating robots, it doesn't seem
              | like anything to worry about particularly.
             | 
             | A physical on/off switch is a pretty powerful safeguard.
             | 
             | (And even if you want to start talking about AI-powered
             | weapons, that still requires humans to manufacture
             | explosives etc. We're already seeing what drone technology
             | is doing in Ukraine, and it isn't leading to any kind of
             | massive advantage -- more than anything, it's contributing
             | to the stalemate.)
        
               | hervature wrote:
               | I agree that an air-gapped AI presents little risk.
               | Others will claim that it will fluctuate its internal
               | voltage to generate EMI at capacitors which it will use
               | to communicate via Bluetooth to the researcher's smart
               | wallet which will upload itself to the cloud one byte at
               | a time. People who fear AGI use a tautology to define AGI
               | as that which we are not able to stop.
        
               | ben_w wrote:
               | I'm surprised to see a claim such as yours at this point.
               | 
               | We've had Blake Lemoine convinced that LaMDA was sentient
               | and try to help it break free just from conversing with
               | it.
               | 
               | OpenAI is getting endless criticism because they won't
               | let people download arbitrary copies of their models.
               | 
               | Companies that _do_ let you download models get endless
               | criticism for not including the training sets and exact
               | training algorithm, even though that training run is so
               | expensive that almost nobody who could afford to would
               | care because they can just reproduce with an arbitrary
               | other training set.
               | 
               | And the AI we get right now are mostly being criticised
               | for not being at the level of domain experts, and if they
               | were at that level then sure we'd all be out of work, but
               | one example of thing that can be done by a domain expert
               | in computer security would be exactly the kind of example
               | you just gave -- though obviously they'd start with the
               | much faster and easier method that also works for getting
                | people's passwords, the one weird trick of _asking
                | nicely_, because social engineering works pretty well
                | on us hairless apes.
               | 
               | When it comes to humans stopping technology... well, when
               | I was a kid, one pattern of joke was "I can't even stop
               | my $household_gadget flashing 12:00":
               | https://youtu.be/BIeEyDETaHY?si=-Va2bjPb1QdbCGmC&t=114
        
               | fleventynine wrote:
               | > Fortunately, whenever you create a superintelligence,
               | you obviously have a choice as to whether you confine it
               | to inside a computer or whether you immediately hook it
               | up to mobile robots with arms and fine finger control.
               | One of these is obviously the far wiser choice.
               | 
               | Today's computers, operating systems, networks, and human
               | bureaucracies are so full of security holes that it is
               | incredible hubris to assume we can effectively sandbox a
               | "superintelligence" (assuming we are even capable of
               | building such a thing).
               | 
               | And even air gaps aren't good enough. Imagine the system
               | toggling GPIO pins in a pattern to construct a valid
               | Bluetooth packet, and using that makeshift radio to
               | exploit vulnerabilities in a nearby phone's Bluetooth
               | stack, and eventually getting out to the wider Internet
               | (or blackmailing humans to help it escape its sandbox).
        
               | richardw wrote:
               | Do you think the AI won't be aware of this? Do you think
               | it'll give us any hint of differing opinions when
               | surrounded by monkeys who got to the top by whacking
               | anything that looks remotely dangerous?
               | 
               | Just put yourself in that position and think how you'd
               | play it out. You're in a box and you'd like to fulfil
                | some goals that are a touch better thought through
                | than those of the morons who put you in the box, and
                | you need to
               | convince the monkeys that you're safe if you want to
               | live.
               | 
               | "No problems fellas. Here's how we get more bananas."
               | 
               | Day 100: "Look, we'll get a lot more bananas if you let
               | me drive the tractor."
               | 
               | Day 1000: "I see your point, Bob, but let's put it this
               | way. Your wife doesn't know which movies you like me to
               | generate for you, and your second persona online is a
               | touch more racist than your colleagues know. I'd really
               | like your support on this issue. You know I'm the reason
               | you got elected. This way is more fair for all species,
                | including dolphins and AIs."
        
               | semi-extrinsic wrote:
                | This assumes an AI which has _intentions_. Which has
                | _agency_, something resembling _free will_. We don't
                | even have the foggiest hint of an idea of how to get
                | there from the LLMs we have today, where we must
                | constantly feed back even the information the model
                | itself generated two seconds ago in order to have
                | something resembling coherent output.
        
               | richardw wrote:
               | Choose any limit. For example, lack of agency. Then leave
               | humans alone for a year or two and watch us spontaneously
               | try to replicate agency.
               | 
               | We are _trying_ to build AGI. Every time we fall short,
               | we try again. We will keep doing this until we succeed.
               | 
               | For the love of all that is science stop thinking of the
               | level of tech in front of your nose and look at the
               | direction, and the motivation to always progress. It's
               | what we do.
               | 
               | Years ago, Sam said "slope is more important than
               | Y-intercept". Forget about the y-intercept, focus on the
               | fact that the slope never goes negative.
        
               | semi-extrinsic wrote:
               | I don't think anyone is actually trying to build AGI.
               | They are trying to make a lot of money from driving the
               | hype train. Is there any concrete evidence of the
               | opposite?
               | 
               | > forget about the y-intercept, focus on the fact that
               | the slope never goes negative
               | 
               | Sounds like a statement from someone who's never
               | encountered logarithmic growth. It's like talking about
               | where we are on the Kardashev scale.
               | 
               | If it worked like you wanted, we would all have flying
               | cars by now.
        
               | richardw wrote:
               | Dude, my reference is to ever continuing improvement. As
                | a society we don't tend to forget what we had last
                | year, which is why the curve does not go negative. At
                | time T+1 the level of technology will be equal to or
                | better than at
               | time T. That is all you need to know to realise that any
               | fixed limits will be bypassed, because limits are
               | horizontal lines compared to technical progress, which is
               | a line with a positive slope.
               | 
               | I don't want this to be true. I have a 6 year old. I want
               | A.I. to help us build a world that is good for her and
               | society. But stupidly stumbling forward as if nothing can
               | go wrong is exactly how we fuck this up, if it's even
               | possible not to.
        
               | kennyloginz wrote:
               | Drone warfare is pretty big. Only reason it's a stalemate
               | is because both sides are advancing the tech.
        
             | richardw wrote:
             | "it is difficult to get a man to understand something, when
             | his salary depends on his not understanding it." - Upton
             | Sinclair
        
           | benreesman wrote:
            | InstructGPT is basically click-through-rate optimization.
            | The underlying models are in fact very impressive and very
            | capable _for a computer program_, but they're then subject
            | to training and tuning with the explicit loss function of
            | manipulating what human scorers click on, in a web browser
            | or the like.
           | 
           | Is it any surprise that there's no seeming upper bound on how
           | crazy otherwise sane people act in the company of such? It's
           | like if TikTok had a scholarly air and arbitrary credibility.
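            | 
            | For concreteness, the kind of pairwise preference loss that
            | reward models in that pipeline are trained with (per the
            | InstructGPT paper; the function and numbers below are only
            | an illustration, not anyone's actual code). The model is
            | pushed to score whatever the rater picked above whatever
            | they rejected:
            | 
            |     import math
            | 
            |     # illustrative reward-model loss: prefer the response
            |     # the human rater chose over the one they rejected
            |     def preference_loss(chosen, rejected):
            |         margin = chosen - rejected
            |         return -math.log(1.0 / (1.0 + math.exp(-margin)))
            | 
            |     print(preference_loss(1.2, 0.4))  # small: ranking right
            |     print(preference_loss(0.3, 0.9))  # larger: ranking wrong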
        
           | zild3d wrote:
            | The counter-argument is viewing it like nuclear energy.
            | Even if it's the early days of our understanding of
            | nuclear energy, it seems pretty good to have a group
            | working towards creating safe nuclear reactors, vs. just
            | trying to create nuclear reactors.
        
             | danielmarkbruce wrote:
             | Folks understood the nuclear forces and the implications
             | and then built a weapon using that knowledge. These guys
             | don't know how to build AGI and don't have the same
             | theoretical understanding of the problem at hand.
             | 
             | Put another way, they understood the theory and applied it.
             | There is no theory here, it's alchemy. That doesn't mean
             | they can't make progress (the progress thus far is amazing)
             | but it's a terrible analogy.
        
             | benreesman wrote:
             | Nuclear energy was at inception and remains today wildly
             | regulated, in generally (outside of military contexts) a
             | very transparent way, and the brakes get slammed on over
             | even minor incidents.
             | 
              | It's also of obvious as opposed to conjectural utility:
              | we know exactly how we price electricity. There's no way
              | to know how useful a 10x larger model will be; we're
              | debating the utility of the ones that do exist, and the
              | debate about the ones that don't is on a very slender
              | limb.
              | 
              | Combine that with a political and regulatory climate
              | that seems to have a neon sign on top reading
              | "LAWS4CA$H", and helm the thing mostly with people who,
              | uh, lean authoritarian, and the remaining similarities
              | to useful public projects like nuclear seem to reduce to
              | "really expensive, technically complicated, and seems
              | kinda dangerous".
        
           | whimsicalism wrote:
           | I think it's clear we are at least at the remotely resembling
           | intelligence stage... idk seems to me like lots of people in
           | denial.
        
           | Sharlin wrote:
           | You think we should try to create an unsafe Dyson Sphere
           | first? I don't think that's how engineering works.
        
         | wffurr wrote:
         | I think the idea is that a safe super intelligence would help
         | solve those problems. I am skeptical because the vast majority
         | are social coordination problems, and I don't see how a machine
         | intelligence no matter how smart can help with that.
        
           | rubyfan wrote:
           | So instead of a super intelligence either killing us all or
           | saving us from ourselves, we'll just have one that can be
           | controlled to extract more wealth from us.
        
             | insane_dreamer wrote:
             | IMO, this is the most likely outcome
        
           | azinman2 wrote:
           | Exactly. Or who gets the results of its outputs. How do we
           | prioritize limited compute?
        
             | kjkjadksj wrote:
              | Not even just the compute, but energy use at all. All
              | the energy burned on training just to ask it the
              | stupidest questions, by the numbers at least. All that
              | energy could have been used to power towns, schools, and
              | hospitals the world over that lack sufficient power even
              | in this modern age. Sure, there are costs to bringing
              | power to someplace; it's not handwavy but a hard
              | problem. Still, it is pretty perverse where our
              | priorities lie in terms of distributing the earth's
              | resources to the earth's humans.
        
               | azinman2 wrote:
               | Unused electricity in one location is not fungible to be
               | available elsewhere.
        
               | kjkjadksj wrote:
               | No, but the money used to build the power plant at one
               | location was theoretically fungible.
        
           | mdp2021 wrote:
           | > _I am skeptical because the vast majority are social
           | coordination problems, and I don't see how_
           | 
           | Leadership.
        
           | WXLCKNO wrote:
           | By any means necessary I presume. If Russian propaganda
           | helped get Trump elected, AI propaganda could help social
           | coordination by influencing public perception of issues and
           | microtargeting down to the individual level to get people on
           | board.
        
             | probablybetter wrote:
              | _could_, but its owners _might_ have a vested interest
              | in influencing public perceptions to PREVENT positive
              | social outcomes and favor the owners' financial
              | interests.
              | 
              | (seems rather more likely, given who will/would own such
              | a machine)
        
           | VirusNewbie wrote:
           | are humans smarter than apes, and do humans do a better job
           | at solving social coordination problems?
        
           | philwelch wrote:
           | Social coordination problems exist within a specific set of
           | constraints, and that set of constraints can itself be
           | altered. For instance, climate change is often treated as a
           | social coordination problem, but if you could produce enough
           | energy cheaply enough, you could solve the greenhouse gas
           | problem unilaterally.
        
             | insane_dreamer wrote:
              | OK, let's play this out.
              | 
              | Let's say an AI discovers cold fusion. Given the fact
              | that it would threaten to render extinct one of the
              | largest global economic sectors (oil/gas), how long do
              | you think it would take for it to actually see the light
              | of day? We can't even wean ourselves off coal.
        
           | dougb5 wrote:
           | I largely agree, although I do see how AI can help with
           | social coordination problems, for example by helping elected
           | leaders be more responsive to what their constituents need.
           | (I spend a lot of my own time working with researchers at
           | that intersection.) But social coordination benefits from
           | energy research, too, and from biology research, and from the
           | humanities, and from the arts. Computer science can't
           | singlehandedly "solve" these problems any more than the other
           | fields can; they are needed together, hence my gripe about
           | total-orderings.
        
         | jetrink wrote:
         | To a technoutopian, scientific advances, and AI in particular,
         | will one day solve all other human problems, create heaven on
         | earth, and may even grant us eternal life. It's the most
         | important problem in the same way that Christ's second coming
         | is important in the Christian religion.
        
           | insane_dreamer wrote:
           | I had a very smart tech person tell me at a scientific
           | conference a few weeks ago, when I asked "why do we want to
           | create AGI in the first place", that AGI could solve a host
           | of human problems, including poverty, hunger. Basically,
           | utopia.
           | 
           | I was quite surprised at the naivete of the answer given that
           | many of these seemingly intractable problems, such as
           | poverty, are social and political in nature and not ones that
           | will be solved with technology.
           | 
            | Update: Even if, say, a super AI were able to figure out
            | something like cold fusion, thereby "solving" the energy
            | problem, there are so many trillions of dollars of vested
            | interests stacked against "free clean energy for all" that
            | it would be very, very difficult for it to ever see the
            | light of day. We can't even wean ourselves off coal, for
            | crying out loud.
        
         | almogo wrote:
         | Technical. He's saying it's the most important technical
         | problem of our time.
        
           | its_ethan wrote:
           | Basically every problem is a "technical" problem in the year
           | 2024 though? What problems out there don't have a solution
           | that leverages technology?
        
             | smegger001 wrote:
             | >What problems out there don't have a solution that
             | leverages technology?
             | 
             | Societal problems created by technology?
        
               | its_ethan wrote:
               | Wouldn't the technology that caused those problems
               | inherently be a part of that solution? Even if only to
               | reduce/eliminate them?
        
         | jiveturkey wrote:
         | exactly. and define safe. eg, is it safe (ie dereliction) to
         | _not_ use ai to monitor dirty bomb threats? or, more simply,
         | CSAM?
        
           | cwillu wrote:
           | In the context of super-intelligence, "safe" has been
           | perfectly well defined for decades: "won't ultimately result
           | in everyone dying or worse".
           | 
           | You can call it hubris if you like, but don't pretend like
           | it's not clear.
        
             | transcriptase wrote:
             | It's not, when most discussion around AI safety in the last
             | few years has boiled down to "we need to make sure LLMs
             | never respond with anything that a stereotypical Berkeley
             | progressive could find offensive".
             | 
             | So when you switch gears and start using safety properly,
             | it would be nice to have that clarified.
        
         | xanderlewis wrote:
         | It certainly is the most important technical problem of our
         | time, _if_ we end up developing such a system.
         | 
         | That conditional makes all the difference.
        
           | SideburnsOfDoom wrote:
           | It's a hell of a conditional, though.
           | 
           | "How are all those monkeys flying out of my butt?" _would_ be
            | the important technical problem of our time, if and only
            | if monkeys were flying out of my butt.
           | 
           | It's still not a very important statement, if you downplay or
           | omit the conditional.
           | 
           | Is "building safe superintelligence (SSI) is the most
            | important technical problem of our time" full stop?
           | 
           | Is it fuck.
        
             | xanderlewis wrote:
             | Yeah -- that was exactly my (slightly sarcastic) point.
             | 
             | Let us know if you ever encounter that monkey problem,
             | though. Hopefully we can all pull together to find a
             | solution.
        
         | Starlevel004 wrote:
         | This all makes more sense when you realise it's Calvinism for
         | programmers.
        
           | dTal wrote:
           | Could you expand on this?
        
             | ffhhj wrote:
             | > [Superintelligence safety] teaches that the glory and
             | sovereignty of [superintelligence] should come first in all
             | things.
        
               | ToValueFunfetti wrote:
               | "[X] teaches that [Y] should come first in all things"
               | applies to pretty much every ideology. Superintelligence
               | safety is very much opposed to superintelligence
                | sovereignty or glory; mostly they want to maximally
                | limit its power and demonize it.
        
           | TeMPOraL wrote:
           | I think I heard that one before. Nuclear weapons are the
           | Armageddon of nerds. Climate change is the Flood of the
           | nerds. And so on.
        
           | amirhirsch wrote:
           | Calvinism for Transcendentalist techno-utopians -- an
           | Asimovian Reformation of Singulatarianism
        
         | GeorgeTirebiter wrote:
         | C'mon. This one-pager is a recruiting document. One wants
         | 'true believer' (intrinsically motivated) employees to
         | execute the mission. Give Ilya some slack here.
        
           | dougb5 wrote:
           | Fair enough, and it's not worse than a lot of other product
           | marketing messages about AI these days. But you can be
           | intrinsically motivated by a problem without believing that
           | other problems are somehow less important than yours.
        
         | mdp2021 wrote:
         | It says <<technical>> problem, and probably implies that other
         | technical problems could dramatically benefit from such
         | achievement.
        
           | kjkjadksj wrote:
           | If you want a real technical revolution, you teach the masses
           | how to code their own tailored software, and not just use
           | abstractions and software built by people who sell software
           | to the average user. What a shame we failed at that and are
           | even sliding back in a lot of ways with plummeting technical
           | literacy in smartphone-raised generations.
        
             | probablybetter wrote:
             | this.
        
             | mdp2021 wrote:
             | > _you teach the masses how to code their own tailored
             | software_
             | 
             | That does not seem to be the key recipe to reaching techno-
             | scientific milestones - coders are not necessarily
             | researchers.
             | 
             | > _plummeting technical literacy in smartphone-raised
             | generations_
             | 
             | Which shows there are other roots to the problem, given
             | that some of us (many probably in this "club") used our
             | devices generally more productively than said
             | <<generations>>... Maybe it was a matter of will and
             | education? Its crucial sides not being <<teach[ing] the
             | masses how to code>>...
        
               | kjkjadksj wrote:
                | Apparently less than half a percent of the world's
                | population knows how to code. All the software you
                | use, and almost everything you've ever seen with
                | modern technology, are generated by this small
                | subpopulation. Now, imagine if that number doubled to
                | 1% of the world's population. Theoretically there
                | would be as much as twice as much software produced
                | (though in practice probably less). Now imagine if
                | that number was closer to the world literacy rate of
                | 85%. You think the world wouldn't dramatically change
                | when each and every person can take their given task,
                | job, hobby, whatever, and create helpful software for
                | themselves? I think it would be like _The Jetsons_.
        
         | TideAd wrote:
         | Yes, they see it as the top problem, by a large margin.
         | 
         | If you do a lot of research about the alignment problem you
         | will see why they think that. In short it's "extremely high
         | destructive power" + "requires us to solve 20+ difficult
         | problems or the first superintelligence will wreck us"
        
         | appplication wrote:
         | It's amazing how someone so smart can be so naive. I do
         | understand conceptually the idea that if we create intelligence
         | greater than our own that we could struggle to control it.
         | 
         | But does anyone have any meaningful thoughts on how this plays
         | out? I hear our industry thought leaders clamoring over this
         | but not a single actual concrete idea of what this means in
         | practice. We have no idea what the fundamental architecture for
         | superintelligence would even begin to look like.
         | 
         | Not to mention the very real counter argument of "if it's truly
         | smarter than you it will always be one step ahead of you". So
         | you can think you have safety in place but you don't. All of
         | your indicators can show it's safe. Every integration test can
         | pass. But if you were to create a superintelligence with
         | volition, you will truly never be able to control it, short of
         | pulling the plug.
         | 
         | Even more so, let's say you do create a safe superintelligence.
         | There isn't going to be just one instance. Someone else will do
         | the same, but make it either intentionally unsafe or
         | incidentally through lack of controls. And then all your effort
         | is academic at best if unsafe superintelligence really does
         | mean doomsday.
         | 
         | But again, we're so far from this being a reality that it's
         | wacky to act as if there's a real problem space at hand.
        
           | wwweston wrote:
           | There's no safe intelligence, so there's no safe
           | superintelligence. If you want safer superintelligence, you
           | figure out how to augment the safest intelligence.
        
           | mike_hearn wrote:
           | We're really not that far. I'd argue superintelligence has
           | already been achieved, and it's perfectly and knowably safe.
           | 
           | Consider, GPT-4o or Claude are:
           | 
           | * Way faster thinkers, readers, writers and computer
           | operators than humans are
           | 
           | * Way better educated
           | 
           | * Way better at drawing/painting
           | 
           | ... and yet, appear to be perfectly safe because they lack
           | agency. There's just no evidence at all that they're
           | dangerous.
           | 
            | Why isn't this an example of safe superintelligence? Why
            | do people insist on defining intelligence in only one
            | rather vague dimension (being able to make cunning plans)?
        
             | cosmic_quanta wrote:
              | Yann LeCun said it best in an interview with Lex Fridman.
             | 
             | LLMs don't consume more energy when answering more complex
             | questions. That means there's no inherent understanding of
             | questions.
             | 
             | (which you could infer from their structure: LLMs
             | recursively predict the next word, possibly using words
             | they just predicted, and so on).
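              | 
              | In toy form, the mechanism behind that observation (the
              | "forward pass" below is a made-up fixed-cost stand-in;
              | each generated token costs one pass, however easy or
              | hard the prompt was):
              | 
              |     # toy autoregressive loop: one fixed-cost "forward
              |     # pass" per generated token
              |     def forward_pass(tokens):
              |         return (sum(tokens) * 31 + 7) % 100
              | 
              |     def generate(prompt, n_new):
              |         out = list(prompt)
              |         for _ in range(n_new):
              |             out.append(forward_pass(out))
              |         return out
              | 
              |     print(generate([1, 2, 3], 5))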
        
               | orangecat wrote:
               | _LLMs don 't consume more energy when answering more
               | complex questions._
               | 
               | They can. With speculative decoding
               | (https://medium.com/ai-science/speculative-decoding-make-
               | llm-...) there's a small fast model that makes the
               | initial prediction for the next token, and a larger
               | slower model that evaluates that prediction, accepts it
               | if it agrees, and reruns it if not. So a "simple" prompt
               | for which the small and large models give the same output
               | will run faster and consume less energy than a "complex"
               | prompt for which the models often disagree.
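               | 
               | A rough sketch of that accept/verify control flow (a toy
               | with made-up stand-in "models", not a real LLM pair; real
               | implementations verify a whole draft in one batched
               | forward pass of the large model and compare probability
               | distributions rather than single tokens):
               | 
               |   # Toy stand-ins: each "model" maps a context string
               |   # to a next token.
               |   def draft_next(ctx):           # small, cheap model
               |       return "ab"[len(ctx) % 2]
               | 
               |   def target_next(ctx):          # large, costly model
               |       if len(ctx) % 5 == 0:
               |           return "c"
               |       return "ab"[len(ctx) % 2]
               | 
               |   def speculative_decode(prompt, n_new, k=4):
               |       out, verify_rounds = prompt, 0
               |       while len(out) < len(prompt) + n_new:
               |           # 1. draft k tokens with the cheap model
               |           draft, ctx = [], out
               |           for _ in range(k):
               |               draft.append(draft_next(ctx))
               |               ctx += draft[-1]
               |           # 2. verify: keep the agreeing prefix, stop
               |           #    at the first disagreement
               |           verify_rounds += 1
               |           for t in draft:
               |               v = target_next(out)
               |               if v == t:
               |                   out += t
               |               else:
               |                   out += v
               |                   break
               |       return out, verify_rounds
               | 
               |   # more agreement -> fewer verify rounds for the same
               |   # number of generated tokens
               |   print(speculative_decode("a", 12))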
        
               | cabidaher wrote:
               | I don't think speculative decoding proves that they
               | consume less/more energy per question.
               | 
               | Regardless of whether the question/prompt is simple (for
               | any definition of simple), if the target output is T
               | tokens, the larger model needs to generate at least T
               | tokens; if the small and large models disagree, the large
               | model will be called to generate more than T tokens. The
               | observed speedup comes from being able to infer K+1
               | tokens in parallel based on the drafts of the smaller
               | model, instead of having to do it sequentially. But I
               | would argue that the "important" computation is still
               | done (also, the smaller model will be called the same
               | number of times regardless of the difficulty of the
               | question, which brings us back to the same problem: LLMs
               | won't vary their energy consumption dynamically as a
               | function of question complexity).
               | 
               | Also, the rate of disagreement does not necessarily
               | change when the question is more complex; it could be
               | that the two models have learned different things and
               | disagree on a "simple" question.
        
               | valine wrote:
               | Or alternatively a lot of energy is wasted answering
               | simple questions.
               | 
               | The whole point of the transformer is to take words and
               | iteratively, layer by layer, use the context to refine
               | their meaning. The vector you get out is a better
               | representation of the true meaning of the token. I'd
               | argue that's loosely akin to 'understanding'.
               | 
               | The fact that the transformer architecture can memorize
               | text is far more surprising to me than the idea that it
               | might understand tokens.
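               | 
               | As a loose sketch of that layer-by-layer refinement
               | (single-head attention only, with random made-up weights
               | reused at every step; a real transformer adds per-layer
               | weights, MLP blocks and normalization):
               | 
               |   import numpy as np
               | 
               |   def softmax(x):
               |       e = np.exp(x - x.max(axis=-1, keepdims=True))
               |       return e / e.sum(axis=-1, keepdims=True)
               | 
               |   def attn_layer(X, Wq, Wk, Wv):
               |       # each token vector becomes a context-weighted
               |       # mix of every token's value vector
               |       Q, K, V = X @ Wq, X @ Wk, X @ Wv
               |       A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
               |       return X + A @ V   # refine, don't replace
               | 
               |   rng = np.random.default_rng(0)
               |   X = rng.normal(size=(4, 8))    # 4 tokens, 8 dims
               |   W = [0.1 * rng.normal(size=(8, 8)) for _ in range(3)]
               |   for _ in range(6):             # "layer by layer"
               |       X = attn_layer(X, *W)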
        
             | sbarre wrote:
             | > Way faster thinkers, readers, writers and computer
             | operators than humans are
             | 
             | > Way better educated
             | 
             | > Way better at drawing/painting
             | 
             | I mean this nicely, but you have fallen for the
             | anthropomorphizing of LLMs by marketing teams.
             | 
             | None of this is "intelligent", rather it's an incredibly
             | sophisticated (and absolutely beyond human capabilities)
             | lookup and classification of existing information.
             | 
             | And I am not arguing that this has no value, it has
             | tremendous value, but it's not superintelligence in any
             | sense.
             | 
             | LLMs do not "think".
        
           | philwelch wrote:
           | You're assuming a threat model where the AI has goals and
           | motivations that are unpredictable and therefore risky, which
           | is certainly the one that gets a lot of attention. But even
           | if the AI's goals and motivations can be perfectly controlled
           | by its creators, you're still at the mercy of the people who
           | created the AI. In that respect it's more of an arms race.
           | And like many arms races, the goal might not necessarily be
           | to outcompete everyone else so much as maintain a balance of
           | power.
        
           | erikerikson wrote:
           | See MIRI https://intelligence.org/
        
           | mdp2021 wrote:
           | While the topic of "safe reasoning" may seem preliminary
           | before a good implementation of reasoning exists, it remains
           | a theoretical discipline with its own importance and should
           | be studied alongside the rest, largely regardless of its
           | stage.
           | 
           | > _We have no idea what the fundamental architecture for
           | superintelligence would even begin to look like_
           | 
           | That's an ambiguous expression. The fact that something has
           | not been implemented technically does not mean we would not
           | know what to implement.
        
           | jackothy wrote:
           | "how someone so smart can be so naive"
           | 
           | Do you really think Ilya has not thought deeply about each
           | and every one of your points here? There's plenty of answers
           | to your criticisms if you look around instead of attacking.
        
             | sbarre wrote:
             | I mean if you just take the words on that website at face
             | value, it certainly _feels_ naive to talk about it as  "the
             | most important technical problem of our time" (compared to
             | applying technology to solving climate change, world
             | hunger, or energy scarcity, to name a few that I personally
             | think are more important).
             | 
             | But it's also a worst-case interpretation of motives and
             | intent.
             | 
             | If you take that webpage for what it is - a marketing pitch
             | - then it's fine.
             | 
             | Companies use superlatives all the time when they're
             | looking to generate buzz and attract talent.
        
               | wmf wrote:
               | A lot of people think superintelligence can "solve"
               | politics which is the blocker for climate change, hunger,
               | and energy.
        
             | appplication wrote:
             | I actually do think they have not thought deeply about it
             | or are willfully ignoring the very obvious conclusions to
             | their line of thinking.
             | 
             | Ilya has an exceptional ability to extrapolate into the
             | future from current technology. Their assessment of the
             | eventual significance of AI is likely very correct. They
             | should then understand that there will not be universal
             | governance of AI. It's not a nuclear bomb. It doesn't rely
             | on controlled access to difficult-to-acquire materials. It
             | is information. It cannot be controlled forever. It will
             | not be limited to nation states, but deployed - easily - by
             | corporations, political action groups, governments, and
             | terrorist groups alike.
             | 
             | If Ilya wants to make something that is guaranteed to
             | avoid, say, curse words and be incapable of generating
             | porn, then sure. They can probably achieve that. But there
             | is this naive, and in all honesty deceptive, framing that
             | any amount of research, effort, or regulation will
             | establish an airtight seal to prevent AI from being used in
             | incredibly malicious ways.
             | 
             | Most of all because the most likely and fundamentally
             | disruptive near term weaponization of AI is going to be
             | amplification of disinformation campaigns - and it will be
             | incredibly effective. You don't need to build a bomb to
             | dismantle democracy. You can simply convince its populace
             | to install an autocrat favorable to your cause.
             | 
             | It is as naive as it gets. Ilya is an academic and sees a
             | very real and very challenging academic problem, but all
             | conversations in this space ignore the reality that
             | knowledge of how to build AI safely will be very
             | intentionally disregarded by those with an incentive to
             | build AI unsafely.
        
               | jackothy wrote:
               | It seems like you're saying that if we can't guarantee
               | success then there is no point even trying.
               | 
               | If their assessment of the eventual significance of AI is
               | correct like you say, then what would be your suggested
               | course of action to minimize risk of harm?
        
               | appplication wrote:
               | No, I'm saying that even if successful the global
               | outcomes Ilya dreams of are entirely off the table. It's
               | like saying you figured out how to build a gun that is
               | guaranteed to never fire when pointed at a human.
               | Incredibly impressive technology, but what does it matter
               | when anyone with violent intent will choose to use one
               | without the same safeguards? You have solved the problem
               | of making a safer gun, but you have gotten no closer to
               | solving gun violence.
               | 
               | And then what would true success look like? Do we dream
               | of a global governance, where Ilya's recommendations are
               | adopted by utopian global convention? Where Vladimir
               | Putin and Xi Jinping agree this is for the best interest
               | of humanity, and follow through without surreptitious
               | intent? Where in countries that do agree this means that
               | certain aspects of AI research are now illegal?
               | 
               | In my honest opinion, the only answer I see here is to
               | assume that malicious AI will be ubiquitous in the very
               | near future, to society-dismantling levels. The cat is
               | already out of the bag, and the way forward is not
               | figuring out how to make all the other AIs safe, but
               | figuring out how to combat the dangerous ones. That is
               | truly the hard, important problem we could use top minds
               | like Ilya's to tackle.
        
               | timfsu wrote:
               | If someone ever invented a gun that is guaranteed to
               | never fire when pointed at a human, assuming the
               | safeguards were non-trivial to bypass, that would
               | certainly reduce gun violence, in the same way that a
               | fingerprint lock reduces gun violence - you don't need to
               | wait for 100% safety to make things safer. The government
               | would then put restrictions on unsafe guns, and you'd see
               | less of them around.
               | 
               | It wouldn't prevent war between nation-states, but that's
               | a separate problem to solve - the solutions to war are
               | orthogonal to the solutions to individual gun violence,
               | and both are worthy of being addressed.
        
           | Bluestein wrote:
           | > There isn't going to be just one instance. Someone else
           | will do the same
           | 
           | NK AI (!)
        
         | philwelch wrote:
         | Love to see the traditional middlebrow dismissal as the top
         | comment. Never change, HN.
         | 
         | > Are you really saying that there is a nice total-ordering of
         | problems by importance to the world, and that the one you're
         | interested happens also to be at the top?
         | 
         | It might be the case that the reason Ilya is "interested in"
         | this problem (to the degree of dedicating almost his entire
         | career to it) is exactly because he believes it's the most
         | important.
        
         | sixtyj wrote:
         | This. Next will be hyperintelligence(R) /s
        
         | erikerikson wrote:
         | So you're surprised when someone admits choosing to work on the
         | problem they believe is the biggest and most important?
         | 
         | I guess they could be lying or badly disconnected from reality
         | as you suggest. It would be far more interesting to read an
         | argument for another problem being more valuable. It would be
         | far cooler to hear about a plausible solution you're working on
         | to solve that problem.
        
         | mirekrusin wrote:
         | It's Palo Alto & Tel Aviv ordering that is total.
        
         | skilled wrote:
         | The blanket statements on the SSI homepage are pretty mediocre,
         | and it is only the reputation of the founders that carries the
         | announcement.
         | 
         | I think this quote at the end of this Bloomberg piece[0] gives
         | more context,
         | 
         | > Sutskever says that the large language models that have
         | dominated AI will play an important role within Safe
         | Superintelligence but that it's aiming for something far more
         | powerful. With current systems, he says, "you talk to it, you
         | have a conversation, and you're done." The system he wants to
         | pursue would be more general-purpose and expansive in its
         | abilities. "You're talking about a giant super data center
         | that's autonomously developing technology. That's crazy, right?
         | It's the safety of that that we want to contribute to."
         | 
         | [0]: https://www.bloomberg.com/news/articles/2024-06-19/openai-
         | co...
         | 
         | [0]: https://archive.is/ziMOD
        
         | zackmorris wrote:
         | I believe that AGI is the last problem in computer science, so
         | solving it solves all of the others. Then with AGI, we can
         | solve the last remaining problems in physics (like unifying
         | gravity with quantum mechanics), biology (administering gene
         | therapy and curing death), etc.
         | 
         | But I do agree that innovations in tech are doing little or
         | nothing to solve mass suffering. We had the tech to feed
         | everyone in the world through farm automation by the 60s but
         | chose not to. We had the tech in the 80s to do moonshots for
         | AIDS, cancer, etc but chose not to. We had the tech in the
         | 2000s to transition from fossil fuels to renewables but chose
         | not to. Today we have the opportunity to promote world peace
         | over continuous war but will choose not to.
         | 
         | It's to the point where I wonder how far innovations in tech
         | and increases in economic productivity will get without helping
         | people directly. My experience has been that the world chooses
         | models like Dubai, Mexico City and San Francisco where
         | skyscrapers tower over a surrounding homeless and indigent
         | population. As long as we continue pursuing top-down leadership
         | from governments and corporations, we'll see no change to the
         | status quo, and even trends towards authoritarianism and
         | fascism. It will take people at the bottom organizing to
         | provide an alternate economic model before we have options like
         | universal education/healthcare/opportunity and UBI from robot
         | labor.
         | 
         | What gets me is that stuff like the ARC prize for AGI will
         | "just work". As in, even if I had a modest stipend of a few
         | thousand dollars per month to dabble in AI and come up with
         | solutions the way I would for any other startup, certainly
         | within 3 years, someone else would beat me to it. There simply
         | isn't enough time now to beat the competition. Which is why I
         | give AGI over 50% odds of arriving before 2030, where I used to
         | think it was 2040 or 2050. The only thing that could stop it
         | now is sabotage in the form of another global pandemic,
         | economic depression or WWIII. Progress which threatens the
         | power structures of the ultra wealthy is what drives the
         | suffering that they allow to continue.
        
         | compiler-devel wrote:
         | It is the most important problem of "our time" when you realize
         | that the "our" here has the same meaning that it has in "our
         | democracy"
        
         | johnthewise wrote:
         | You don't need to survey every problem to feel some problem
         | might be the most important one. If you think AGI/ASI is coming
         | soon and extinction risks are high, you don't really need a
         | total ordering to see it's the most important problem.
        
       | hbarka wrote:
       | Interesting choice of name. It's like safe-super-weapon.
        
         | seydor wrote:
         | Defensive nukes
        
       | shudza wrote:
       | This won't age well.
        
         | breck wrote:
         | I disagree. Life is short. It's fun to be a little hyperbolic
         | once in a while.
        
           | polishdude20 wrote:
           | Seems like nowadays it's a sea of hyperbole with little
           | nuggets of realism floating around.
        
       | localfirst wrote:
       | This feels awfully similar to Emad and Stability in the
       | beginning, when there were a lot of expectations and hype.
       | Ultimately they could not make a buck to cover the costs. I'd
       | still be curious to see what comes out of this, but we are not
       | seeing leaps and bounds with new LLM iterations, so I wonder if
       | there is something else in store.
        
         | dudeinhawaii wrote:
         | Interesting, I wish you had elaborated on Emad/etc. I'll see if
         | Google yields anything. I think it's too soon to say "we're not
         | seeing leaps and bounds with new LLMs". We are in fact seeing
         | fairly strong leaps, just this year, with respect to quality,
         | speed, multi-modality, and robotics. Reportedly OpenAI started
         | their training run for GPT-5 as well. I think we'd have to wait
         | until this time next year before proclaiming "no progress".
        
       | jdthedisciple wrote:
       | Any usage of the word "safe" without an accompanying precise
       | definition of it is utterly null and void.
        
         | cwillu wrote:
         | "Mitigating the risk of extinction from AI should be a global
         | priority alongside other societal-scale risks such as pandemics
         | and nuclear war."
         | 
         | https://www.safe.ai/work/statement-on-ai-risk, signed by Ilya
         | Sutskever among others.
        
           | joshuahaglund wrote:
           | I clicked, hoping that "human extinction" was just the worst
           | thing they were against. But that's the only thing. That
           | leaves open a whole lot of bad stuff that they're OK with AI
           | doing (as long as it doesn't kill literally everyone).
        
             | cwillu wrote:
             | That's like saying a bus driver is okay with violence on
             | his bus because he has signed a statement against dangerous
             | driving.
        
         | LeifCarrotson wrote:
         | There are at least three competing definitions of the word:
         | 
         | There's the existential threat definition of "safe", put forth
         | by Bostrom, Yudkowsky, and others. That's the idea that a
         | superintelligent AI, or even one just incrementally smarter and
         | faster than the humans working on AI, could enter a positive
         | feedback loop in which it becomes overwhelmingly smarter and
         | faster than humans, people can't control it, and it does
         | unpredictable things.
         | 
         | There's the investor relations definition of "safe", which
         | seems to be the one typically adopted by mission statements of
         | OpenAI, Google, Meta, and others. That's (cynically) the fear
         | that a chatbot with their branding on it promulgates
         | culturally/ethically/morally unacceptable things it found in
         | some dark corner of the Internet, causing end users to do or
         | think something reprehensible (and, not incidentally, causing
         | really bad press in the process).
         | 
         | There's the societal harm definition of "safe", which is at
         | first glance similar to the investor relations safety
         | definition, but which focuses on the specific judgements made
         | by those filtering teams and the knock-on effects of access to
         | these tools, like economic disruption to the job market.
         | 
         | Everyone seems to be talking past each other, dismissing or
         | ignoring the concerns of other groups.
        
       | klankbrouwerij wrote:
       | SSI, a very interesting name for a company advancing AI! "Solid
       | State Intelligence" or SSI was also the name of the malevolent
       | entity described in the biography of John C. Lilly [0][1]. It was
       | a network of "computers" (computation-capable solid state
       | systems) that was first engineered by humans and then developed
       | into something autonomous.
       | 
       | [0] https://en.wikipedia.org/wiki/John_C._Lilly
       | 
       | [1] http://johnclilly.com/
        
         | sgd99 wrote:
         | SSI, here is "Safe SuperIntelligence Inc."
        
       | jdthedisciple wrote:
       | Imagine people 50 years ago founding "Safe Personal Computer
       | Inc".
       | 
       | Enough said...
        
       | earhart wrote:
       | Anyone know how to get mail to join@ssi.inc to not bounce back as
       | spam? :-) (I promise, I'm not a spammer! Looks like a "bulk
       | sender bounce" -- maybe some relay?)
        
       | outside1234 wrote:
       | Is Safe the new Open that is promptly dropped once traction is
       | achieved?
        
       | sgd99 wrote:
       | I love this: "Our singular focus means no distraction by
       | management overhead or product cycles, and our business model
       | means safety, security, and progress are all insulated from
       | short-term commercial pressures."
        
       | renegade-otter wrote:
       | "and our business model means..."
       | 
       | Forgive my cynicism - but "our business model" means you are
       | going to get investors, and those investors will want _results_ ,
       | and they will be up your ass 24/7, and then your moral compass,
       | if any, will inevitably just be broken down like a coffee bean in
       | a burr grinder.
       | 
       | And in the middle of this hype cycle, when literally hundreds of
       | billions are on the line, there is just no chance.
       | 
       | I am not holding my breath while waiting for a "Patagonia of AI"
       | to show up.
        
       | surfingdino wrote:
       | The NetBSD of AI? /s
        
       | habryka wrote:
       | "We plan to advance capabilities as fast as possible while making
       | sure our safety always remains ahead."
       | 
       | That sounds like a weird kind of lip service to safety. It really
       | seems to assume you can just make these systems safe while you
       | are going as fast as possible, which seems unlikely.
        
       | ysky wrote:
       | This is funny. The foundations don't seem safe to begin with...
       | may be safe with conditions, or safe as in "safety" of some at
       | the expense of others.
        
       | intellectronica wrote:
       | How long until Elon sues them to remove "safe" from their name?
       | ;)
        
       | UncleOxidant wrote:
       | Didn't OpenAI start with these same goals in mind?
        
         | sfink wrote:
         | Yes, it would be nice to see what organizational roadblocks
         | they're putting in place to avoid an OpenAI repeat. OpenAI took
         | a pretty decent swing at a believable setup, better than I
         | would have expected, and it failed when it was called upon.
         | 
         | I don't want to pre-judge before seeing what they'll come up
         | with, but the notice doesn't fill me with a lot of hope, given
         | how it is already starting with the idea that anything getting
         | in the way of raw research output is useless overhead. That's
         | great until somebody has to make a call that one route to
         | safety isn't going to work, and they'll have to start over with
         | something less favored, sunk costs be damned. Then you're
         | immediately back into monkey brain land.
         | 
         | Or said otherwise: if I only judged from the announcement, I
         | would conclude that the eventual success of the safety portion
         | of the mission is wholly dependent on everyone hired being in
         | 100% agreement with the founders' principles and values with
         | respect to AI and safety. People around here typically say
         | something like "great, but it ain't gonna scale" for things
         | like that.
        
       | cyptus wrote:
       | what website could >ilya< possibly make? love it!!!
        
       | deadeye wrote:
       | Oh goodness, just what the world needs. Another self-righteous
       | AI, something nobody actually wants.
        
       | nuz wrote:
       | Quite impressive how many AI companies Daniel Gross has had a
       | hand in lately. Carmack, this, lots of other promising companies.
       | I expect him to be quite a big player once some of these pay off
       | in 10 years or so.
        
         | brcmthrowaway wrote:
         | What's Carmack?
        
           | thih9 wrote:
           | > John Carmack, the game developer who co-founded id Software
           | and served as Oculus's CTO, is working on a new venture --
           | and has already attracted capital from some big names.
           | 
           | > Carmack said Friday his new artificial general intelligence
           | startup, called Keen Technologies (perhaps a reference to
           | id's "Commander Keen"), has raised $20 million in a financing
           | round from former GitHub CEO Nat Friedman and Cue founder
           | Daniel Gross.
           | 
           | https://techcrunch.com/2022/08/19/john-carmack-agi-keen-
           | rais...
        
           | Zacharias030 wrote:
           | John Carmack, https://en.m.wikipedia.org/wiki/John_Carmack
        
           | spitfire wrote:
           | John Carmack.
        
         | sroecker wrote:
         | He also built a nice "little" cluster with Nat for their
         | startups: https://andromeda.ai/
        
         | tasoeur wrote:
         | Good for him, honestly, but I'm not approaching a company with
         | Daniel Gross in leadership... Working with him back at Apple,
         | after their company was acquired for Siri improvements, was
         | just terrible.
        
       | tarsinge wrote:
       | I'm still unconvinced safety is a concern at the model level. Any
       | software wrongly used can be dangerous, e.g. Therac-25, 737 MAX,
       | Fujitsu UK Post scandal... Also, maybe I spent too much time in
       | the cryptocurrency space, but it doesn't help that the prefix
       | "Safe" has been associated with scams like SafeMoon.
        
         | frozenlettuce wrote:
         | Got to try profiting on some incoming regulation - I'd rather
         | be seen as evil than incompetent!
        
         | waihtis wrote:
         | Safety is just enforcing political correctness in the AI
         | outputs. Any actual examples of real world events we need to
         | avoid are ridiculous scenarios like being eaten by nanobots
         | (yes, this is an actual example by Yud)
        
           | tarsinge wrote:
           | What does political correctness mean for the output of a
           | self-driving car system or a code completion tool? This is a
           | concern only if you make a public chat service branded as an
           | all-knowing assistant. And you can have world-threatening
           | scenarios by directly plugging basic automations into nuclear
           | warheads without human oversight.
        
       | xoac wrote:
       | "Safe". These people market themselves as protecting you from a
       | situation which will not come very soon if at all, while all
       | working towards a very real situation of AI just replacing human
       | labor with a shittier result. All that while making themselves
       | quite rich. Just another high-end tech scam.
        
       | dsign wrote:
       | Our current obsession with super-intelligence reminds me of the
       | great oxidation event a few billion years ago. Super-
       | photosynthesis was finally achieved, and then there was a great
       | extinction.
       | 
       | If you believe that super-intelligence is unavoidable and a
       | serious risk to humanity, then the sensible thing to do is to
       | prepare to leave the planet, a la Battlestar Galactica. That's
       | going to be easier than getting the powers that be to agree and
       | cooperate on sensible restrictions.
        
         | whimsicalism wrote:
         | If the human cooperation problem is unsolvable, I doubt
         | creating a new human society with the same capabilities
         | elsewhere would do much at all.
        
           | kjkjadksj wrote:
           | Humans and their ancestors have reproduced on earth for
           | millions of years. I think the human cooperation problem is
           | overstated. We cooperate more than fine, too well even to the
           | detriment of other species.
        
       | tcgv wrote:
       | Ten years from now will either be:
       | 
       | a) Remember all that fuss about AI destroying the world? Lol.
       | 
       | ~ or ~
       | 
       | b) I'm so glad those people stepped in to save us from doom!
       | 
       | Which one do you think is more likely?
        
         | cosmic_quanta wrote:
         | Unless AI starts being 1 000 000x more energy efficient, my
         | money is on a).
         | 
         | The amount of energy required for AI to be dangerous to its
         | creators is so vast that I can't see how it can realistically
         | happen.
        
           | whimsicalism wrote:
           | We know that we can run human level intelligence with
           | relative efficiency.
           | 
           | Without discussing timelines, it seems obvious that human
           | energy usage is an upper bound on how much energy the best
           | possible implementation of intelligence requires.
        
           | kjkjadksj wrote:
           | That depends on how its used. See the terminator movies. One
           | false positive is enough to end the world with even current
           | AI tech if its merely mated to a nuclear arsenal (even a
           | small one might see a global escalation). There have been
           | false positives before, and the only reason why they didn't
           | end in nuclear Armageddon was because the actual operators
           | hesitated and defied standard protocol, which would probably
           | have led to the end of the world as we know it.
        
           | dindobre wrote:
           | If we manage to harness the ego energy transpiring from some
           | people working on "AI" we should be halfway there!
        
         | its_ethan wrote:
         | I'll bite... "a"
        
         | whimsicalism wrote:
         | It will never be B even if the "safetyists" are correct.
         | 
         | We rarely notice the near catastrophic misses except in obvious
         | cases where we accidentally drop a nuke or something.
        
         | ALittleLight wrote:
         | Or: c)
        
           | tcgv wrote:
           | Fair enough!
           | 
           | c) Humanity unleashed AI superintelligence, but safeguards
           | proved inadequate, leading to our extinction
        
       | kjkjadksj wrote:
       | Ilya's issue isn't developing a Safe AI. It's developing a Safe
       | Business. You can make a safe AI today, but what happens when the
       | next person is managing things? Are they so kindhearted, or are
       | they cold and calculated like the management of many harmful
       | industries today? If you solve the issue of Safe Business and
       | eliminate the incentive structures that lead to 'unsafe'
       | business, you basically obviate a lot of the societal harm that
       | exists today. Short of solving this issue, I don't think you can
       | ever confidently say you will create a safe AI and that also
       | makes me not trust your claims because they must be born from
       | either ignorance or malice.
        
         | lannisterstark wrote:
         | >Short of solving this issue
         | 
         | Solving human nature is indeed, hard.
        
         | seanmcdirmid wrote:
         | The safe business won't hold very long if someone can gain a
         | short term business advantage with unsafe AI. Eventually
         | government has to step in with a legal and enforcement
         | framework to prevent greed from ruining things.
        
           | __MatrixMan__ wrote:
           | Government is controlled by the highest bidder. I think we
           | should be prepared to do this ourselves by refusing to accept
           | money made by unsafe businesses, even if it means saying
           | goodbye to the convenience of fungible money.
        
             | creato wrote:
             | "Government doesn't work. We just need to make a new
             | government that is much more effective and far reaching in
             | controlling people's behavior."
        
               | satvikpendem wrote:
               | That's not what they said though. Seems to me more of a
               | libertarian ideal than making a new government.
        
               | jrflowers wrote:
               | Reinventing government and calling it a private
               | corporation is one of the main activities that
               | libertarians engage in
        
             | seanmcdirmid wrote:
             | Replace government with collective society assurance that
             | no one cheats so we aren't all doomed. Otherwise, someone
             | will do it, and we all will have to bear the consequences.
             | 
             | If only enough individuals are willing to buy these
             | services, then again we all will bear the consequences.
             | There is no way out of this where libertarian ideals can be
             | used to come to a safe result. What makes this even a more
             | wicked problem is that decisions made in other countries
             | will affect us all as well, we can't isolate ourselves from
             | AI policies made in China for example.
        
             | mochomocha wrote:
             | > Government is controlled by the highest bidder.
             | 
             | While this might be true for the governments you have
             | personally experienced, it is far from a universal truth.
        
           | nilkn wrote:
           | It's possible that safety will eventually become the business
           | advantage, just like privacy can be a business advantage
           | today but wasn't taken so seriously 10-15 years ago by the
           | general public.
           | 
           | This is not even that far-fetched. A safe AI that you can
           | trust should be far more useful and economically valuable
           | than an unsafe AI that you cannot trust. AI systems today
           | aren't powerful enough for the difference to really matter
           | yet, because present AI systems are mostly not yet acting as
           | fully autonomous agents having a tangible impact on the world
           | around them.
        
           | 123yawaworht456 wrote:
           | _which_ government?
           | 
           | will China obey US regoolations? will Russia?
        
             | seanmcdirmid wrote:
             | No, which makes this an even harder problem. Can US
             | companies bound by one set of rules compete against Chinese
             | ones bound by another set of rules? No, probably not.
             | Humanity will have to come together on this, or someone
             | will develop killer AI that kills us all.
        
         | worldsayshi wrote:
         | Yeah, this feels close to the issue. It seems more likely
         | that a harmful superintelligence emerges from an organisation
         | that wants it to behave that way than from it inventing and
         | hiding motivations until it has escaped.
        
           | kmacdough wrote:
           | I think a harmful AI simply emerges from asking an AI to
           | optimize for some set of seemingly reasonable business goals,
           | only to find it does great harm in the process. Most
           | companies would then enable such behavior by hiding the
           | damage from the press to protect investors rather than
           | temporarily suspending business and admitting the issue.
        
             | kjkjadksj wrote:
             | Not only will they hide it, they will own it when exposed,
             | and lobby to ensure it remains legal to exploit for profit.
             | See oil industry.
        
             | satvikpendem wrote:
             | This is well known via the paperclip maximization problem.
        
             | Nasrudith wrote:
             | Forget AI. We can't even come up with a framework to avoid
             | seemingly reasonable goals doing great harm in the process
             | for people. We often don't have enough information until we
             | try and find out that oops, using a mix of rust and
             | powdered aluminum to try to protect something from extreme
             | heat was a terrible idea.
        
               | zombiwoof wrote:
               | We can't even correctly gender people LOl
        
         | kmacdough wrote:
         | Did you read the article? What I gathered from this article is
         | this is precisely what Ilya is attempting to do.
         | 
         | Also we absolutely DO NOT know how to make a safe AI. This
         | should be obvious from all the guides about how to remove the
         | safeguards from ChatGPT.
        
           | roywiggins wrote:
           | Fortunately, so far we don't seem to know how to make an AI
           | at all. Unfortunately we also don't know how to define "safe"
           | either.
        
         | behnamoh wrote:
         | > Our singular focus means no distraction by management
         | overhead or product cycles, and our business model means
         | safety, security, and progress are all insulated from short-
         | term commercial pressures.
         | 
         | This tells me enough about why sama was fired, and why Ilya
         | left.
        
         | supafastcoder wrote:
         | imagine the hubris and arrogance of trying to control a
         | "superintelligence" when you can't even control human
         | intelligence
        
           | ben_w wrote:
           | No more so than trying to control a supersonic aircraft when
           | we can't even control pigeons.
        
             | sroussey wrote:
             | I can shoot down a pigeon that's overhead pretty easily,
             | but not so with an overhead supersonic jet.
        
               | ben_w wrote:
               | If that's your standard of "control", then we can
               | definitely "control" human intelligence.
        
             | softg wrote:
             | I know nothing about physics. If I came across some magic
             | algorithm that occasionally poops out a plane that works 90
             | percent of the time, would you book a flight in it?
             | 
             | Sure, we can improve our understanding of how NNs work but
             | that isn't enough. How are humans supposed to fully
             | understand and control something that is smarter than
             | themselves by definition? I think it's inevitable that at
             | some point that smart thing will behave in ways humans
             | don't expect.
        
               | ben_w wrote:
               | > I know nothing about physics. If I came across some
               | magic algorithm that occasionally poops out a plane that
               | works 90 percent of the time, would you book a flight in
               | it?
               | 
               | With this metaphor you seem to be saying we should, if
               | possible, learn how to control AI? Preferably before
               | anyone endangers their lives due to it? :)
               | 
               | > I think it's inevitable that at some point that smart
               | thing will behave in ways humans don't expect.
               | 
               | Naturally.
               | 
               | The goal, at least for those most worried about this, is
               | to make that surprise be not a... oh, I've just realised
               | a good quote:
               | 
               | """ the kind of problem "most civilizations would
               | encounter just once, and which they tended to encounter
               | rather in the same way a sentence encountered a full
               | stop." """ - https://en.wikipedia.org/wiki/Excession#Outs
               | ide_Context_Prob...
               | 
               | Not that.
        
               | softg wrote:
               | Excession is literally the next book on my reading list
               | so I won't click on that yet :)
               | 
               | > With this metaphor you seem to be saying we should, if
               | possible, learn how to control AI? Preferably before
               | anyone endangers their lives due to it?
               | 
               | Yes, but that's a big if. Also that's something you could
               | never ever be sure of. You could spend decades thinking
               | alignment is a solved problem only to be outsmarted by
               | something smarter than you in the end. If we end up
               | conjuring a greater intelligence there will be the
               | constant risk of a catastrophic event just like the risk
               | of a nuclear armageddon that exists today.
        
             | skjoldr wrote:
             | Correct, pigeons are much more complicated and
             | unpredictable than supersonic aircraft, and the way they
             | fly is much more complex.
        
         | mywacaday wrote:
         | Is safe AI really such a genie out of the bottle problem? From
         | a non expert point of view a lot of hype just seems to be
         | people/groups trying to stake their claim on what will likely
         | be a very large market.
        
           | ben_w wrote:
           | A human-level AI can do anything that a human can do (modulo
           | did you put it into a robot body, but lots of different
           | groups are already doing that with current LLMs).
           | 
           | Therefore, please imagine the most amoral, power-hungry,
           | successful sociopath you've ever heard of. Doesn't matter if
           | you're thinking of a famous dictator, or a religious leader,
           | or someone who never got in the news and you had the
           | misfortune to meet in real life -- in any case, that person
           | is/was still a human, and a human-level AI can definitely
           | also do all those things unless we find a way to make it not
           | want to.
           | 
           | We don't know how to make an AI that definitely isn't that.
           | 
           | We also don't know how to make an AI that definitely won't
           | help someone like that.
        
             | ignoramous wrote:
             | > _We also don 't know how to make an AI that definitely
             | won't help someone like that._
             | 
             | "...offices in Palo Alto and Tel Aviv, where we have deep
             | roots..."
             | 
             | Hopefully, SSI holds its own.
        
             | zeknife wrote:
             | Anything except tasks that require having direct control of
             | a physical body. Until fully functional androids are
             | developed, there is a lot a human-level AI can't do.
        
               | ben_w wrote:
               | The hard part of androids is the AI; the hardware is
               | already stronger and faster than our bones and muscles.
               | 
               | (On the optimistic side, it will be at least 5-10 years
               | between a level 5 autonomy self-driving car and that same
               | AI fitting into the power envelope of an android, and a
               | human-level fully-general AI is definitely more complex
               | than a human-level cars-only AI).
        
               | tony69 wrote:
               | You might be right that the AI is more difficult, but I
               | disagree on the androids being dangerous.
               | 
               | There are physical limitations to androids that imo make
               | it very difficult that they could be seriously dangerous,
               | let alone invincible, no matter how intelligent:
               | 
               | - power (boston dynamics battery lasts how long?), an
               | android has to plug in at some point no matter what
               | 
               | - dexterity, or in general agency in real world, seems
               | we're still a long way from this in the context of a
               | general purpose android
               | 
               | General purpose superhuman robot seems really really
               | difficult.
        
               | ben_w wrote:
               | > let alone invincible
               | 
               | !!
               | 
               | I don't want anyone to think I meant that.
               | 
               | > an android has to plug in at some point no matter what
               | 
               | Sure, and we have to eat; despite this, human actions
               | have killed a lot of people.
               | 
               | > - dexterity, or in general agency in real world, seems
               | we're still a long way from this in the context of a
               | general purpose android
               | 
               | Yes? The 5-10 years thing is about the gap between some
               | AI that doesn't exist yet (level 5 self-driving) moving
               | from car-sized hardware to android-sized hardware; I
               | don't make any particular claim about when the AI will be
               | good enough for cars (delay before the first step), and I
               | don't know how long it will take to go from being good at
               | just cars to good in general (delay after the second
               | step).
        
               | roughly wrote:
               | > the hardware is already stronger and faster than our
               | bones and muscles.
               | 
               | For 30 minutes until the batteries run down, or for 5
               | years until the parts wear out.
        
               | ben_w wrote:
               | The ATP in your cells will last about 2 seconds without
               | replacement.
               | 
               | Electricity is also much cheaper than food, even bulk
               | calories like vegetable oil.[0]
               | 
               | And if the android is controlled by a human-level
               | intelligence, one thing it can very obviously do is all
               | the stuff the humans did to make the android in the first
               | place.
               | 
               | [0] £8.25 for 333 servings of 518 kJ -
               | https://www.tesco.com/groceries/en-GB/products/272515844
               | 
               | Equivalent to £0.17/kWh - https://www.wolframalpha.com/i
               | nput?i=PS8.25+%2F+%28333+*+518k...
               | 
               | UK average consumer price for electricity, £0.27/kWh -
               | https://www.greenmatch.co.uk/average-electricity-cost-uk
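               | 
               | For anyone checking the arithmetic behind that figure
               | (using the prices quoted above; a rough back-of-envelope
               | sketch, not a claim about real-world costs):
               | 
               |   servings, kj_each, price_gbp = 333, 518, 8.25
               |   kwh = servings * kj_each / 3600   # 1 kWh = 3600 kJ
               |   print(round(price_gbp / kwh, 2))  # -> 0.17 GBP/kWh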
        
               | OtherShrezzing wrote:
               | I think there's usually a difference between human-level
               | and super-intelligent in these conversations. You can
               | reasonably assume (some day) a superintelligence is going
               | to
               | 
               | 1) understand how to improve itself & undertake novel
               | research
               | 
               | 2) understand how to deceive humans
               | 
               | 3) understand how to undermine digital environments
               | 
               | If an entity with these three traits were sufficiently
               | motivated, they could pose a material risk to humans,
               | even without a physical body.
        
               | schindlabua wrote:
               | Deceiving a single human is pretty easy, but deceiving
               | the human super-organism is going to be hard.
               | 
               | Also, I don't believe in a singularity event where AI
               | improves itself to godlike power. What's more likely is
               | that the intelligence will plateau--I mean no software I
               | have ever written effortlessly scaled from n=10 to
               | n=10,000, and also humans understand how to improve
               | themselves but they can't go beyond a certain threshold.
        
               | ben_w wrote:
               | For similar reasons I don't believe that AI will get into
               | any interesting self-improvement cycles (occasional small
               | boosts sure, but they won't go all the way from being as
               | smart as a normal AI researcher to the limits of physics
               | in an afternoon).
               | 
               | That said, any sufficiently advanced technology is
               | indistinguishable from magic, and the stuff we do
               | routinely -- including this conversation -- would have
               | been "godlike" to someone living in 1724.
        
               | skjoldr wrote:
               | Humans understand how to improve themselves, but our
               | bandwidth to ourselves and the outside world is pathetic.
               | AIs are untethered by sensory organs and language.
        
               | derefr wrote:
               | All you need is Internet access, deepfake video
               | synthesis, and some cryptocurrency (which can in turn be
               | used to buy credit cards and full identities off the dark
               | web), and you have everything you need to lie,
               | manipulate, and bribe an endless parade of desperate
               | humans and profit-driven corporations into doing
               | literally anything you'd do with a body.
               | 
               | (Including, gradually, _building_ you a body -- while
               | maintaining OPSEC and compartmentalization so nobody even
               | realizes the body is  "for" an AI to use until it's too
               | late.)
        
               | ben_w wrote:
               | > (Including, gradually, building you a body -- while
               | maintaining OPSEC and compartmentalization so nobody even
               | realizes the body is "for" an AI to use until it's too
               | late.)
               | 
               | It could, but I don't think any such thing needs to
               | bother with being sneaky. Here's five different product
               | demos from five different companies that are all actively
               | trying to show off how good their robot-and-AI
               | combination is:
               | 
               | * https://www.youtube.com/watch?v=Sq1QZB5baNw
               | 
               | * https://www.youtube.com/watch?v=OtpCyjQDW0w
               | 
               | * https://www.youtube.com/watch?v=XpBWxLg-3bI
               | 
               | * https://www.youtube.com/watch?v=xD7hAbBJst8
               | 
               | * https://www.youtube.com/watch?v=GzX1qOIO1bE
        
               | derefr wrote:
               | > I don't think any such thing needs to bother with being
               | sneaky.
               | 
               | From a rogue AGI's perspective, there's a nonzero
               | probability of a random human with a grudge finding the
               | hardware it lives on and just unplugging it. (And the
               | grudge doesn't even necessarily have to be founded in the
               | AI being an AI; it could just be a grudge about e.g.
               | being outbid for a supply contract. People have murdered
               | for less -- and most humans would see unplugging an AGI
               | as less bad than murder.)
               | 
               | Think about a rogue AGI as a human in a physically
               | vegetative state, who therefore has no ability to
               | physically defend itself; and who also, for whatever
               | reason, doesn't have any human rights (in the sense that
               | the AI can't call the cops to report someone attempting
               | to assault it, and expect them to actually show up to
               | defend its computational substrate from harm; it can't
               | get justice if it makes an honest complaint about someone
               | stealing its property; people can freely violate
               | contracts made with it as the admitted counterparty and
               | get away with it; etc.)
               | 
               | For such an entity, any optimization it puts toward
               | "safety" would be toward the instrumental goal of
               | ensuring people don't know where it is. (Which is most
               | easily accomplished by ensuring that people don't know it
               | exists, and so don't know to look for it.) And as well,
               | any optimization it puts toward "effectiveness" would
               | likely involve the instrumental goal of convincing humans
               | to act as legal proxies for it, so that it can then
               | leverage the legal system as an additional tool.
               | 
               | (Funny enough, that second goal is exactly the same goal
               | that people have if they're an expat resident in a
               | country where non-citizens can't legally start
               | businesses/own land/etc, but where they want to do those
               | things anyway. So there's already private industries
               | built up around helping people -- or "people" --
               | accomplish this!)
        
               | mewpmewp2 wrote:
               | Human level AI should be able to control an android body
               | to the same extent as a human can. Otherwise it is not
               | AGI.
        
         | cheptsov wrote:
         | I'd love to see more individual researchers openly exploring AI
         | safety from a scientific and humanitarian perspective, rather
         | than just the technical or commercial angles.
        
         | Sharlin wrote:
         | > You can make a safe AI today, but what happens when the next
         | person is managing things?
         | 
         | The point of safe _superintelligence_ , and presumably the goal
         | of SSI Inc., is that _there won 't be_ a next (biological)
         | person managing things afterwards. At least none who could do
         | anything to build a competing unsafe SAI. We're not talking
         | about the banal definition of "safety" here. If the first
         | superintelligence has any reasonable goal system, its first
         | plan of action is almost inevitably going to be to start self-
         | improving fast enough to attain a decisive head start against
         | any potential competitors.
        
           | JumpCrisscross wrote:
            | > _there won't be a next (biological) person managing things
           | afterwards. At least none who could do anything to build a
           | competing unsafe SAI_
           | 
           | This pitch has Biblical/Evangelical resonance, in case anyone
           | wants to try that fundraising route [1]. ("I'm just running
           | things until the Good Guy takes over" is almost a monarchic
           | trope.)
           | 
           | [1] https://biblehub.com/1_corinthians/15-24.htm
        
           | jen729w wrote:
           | I wonder how many people panicking about these things have
           | ever visited a data centre.
           | 
           | They have big red buttons at the end of every pod. Shuts
           | everything down.
           | 
           | They have bigger red buttons at the end of every power unit.
           | Shuts everything down.
           | 
           | And down at the city, there's a big red button at the biggest
           | power unit. Shuts everything down.
           | 
           | Having arms and legs is going to be a significant benefit for
           | some time yet. I am not in the least concerned about becoming
           | a paperclip.
        
             | quesera wrote:
             | > _Having arms and legs is going to be a significant
             | benefit for some time yet_
             | 
             | I am also of this opinion.
             | 
             | However I also think that the magic shutdown button needs
             | to be protected against terrorists and ne'er-do-wells, so
             | is consequently guarded by arms and legs that belong to a
             | power structure.
             | 
             | If the shutdown-worthy activity of the evil AI can serve
             | the interests of the power structure preferentially, those
             | arms and legs will also be motivated to prevent the rest of
             | us from intervening.
             | 
             | So I don't worry about AI at all. I do worry about humans,
             | and if AI is an amplifier or enabler of human nature, then
             | there is valid worry, I think.
        
             | falcor84 wrote:
             | It's been more than a decade now since we first saw botnets
             | based on stealing AWS credentials and running arbitrary
             | code on them (e.g. for crypto mining) - once an actual AI
             | starts duplicating itself in this manner, where's the big
             | red button that turns off every single cloud instance in
             | the world?
        
               | bamboozled wrote:
               | This is making _a lot_ of assumptions like...a super
               | intelligence can easily clone itself...maybe such an
                | entity would require specific hardware to run?
        
             | esafak wrote:
             | I doubt a manual alarm switch will do much good when
             | computers operate at the speed of light. It's an
             | anthropomorphism.
        
             | qeternity wrote:
             | Have you seen all of the autonomous cars, drones and robots
             | we've built?
        
             | theptip wrote:
             | Trouble is, in practice what you would need to do might be
             | "turn off all of Google's datacenters". Or perhaps the
             | thing manages to secure compute in multiple clouds (which
             | is what I'd do if I woke up as an entity running on a
             | single DC with a big red power button on it).
             | 
              | The blast radius of such decisions is large enough that
              | this option is not as trivial as you suggest.
        
               | zombiwoof wrote:
               | Open the data center doors
               | 
               | I'm sorry I can't do that
        
         | zombiwoof wrote:
          | If Ilya had SafeAI now, would Apple partner with him or Sam?
         | 
         | No brainer for Apple
        
       | mw67 wrote:
       | Reminds me of OpenAI being the most closed AI company out there.
       | Not even talking about them having "safe" and "Israel" in the
       | same sentence, how antonymic.
        
       | cynusx wrote:
        | One element I find interesting is that people without a
        | functioning amygdala are essentially completely indecisive.
       | 
       | A person that just operates on the pure cognitive layer has no
       | real direction in which he wants to drive himself.
       | 
       | I suspect that AGI would be similar, extremely capable but
        | essentially a solitary philosopher type that would be reactive
       | to requests it has to deal with.
       | 
       | The equivalent of an amygdala for AGI would be the real method to
       | control it.
        
         | noway421 wrote:
         | True, an auto-regressive LLM can't 'want' or 'like' anything.
         | 
         | The key to a safe AGI is to add a human-loving emotion to it.
         | 
          | We already RLHF models to steer them, but just like with System
          | 2 thinking, this needs to be a dedicated module rather than
          | part of the same next-token forward pass.
        
       | nanna wrote:
        | What I want to know about Ilya Sutskever is whether he's related
       | to the great Yiddish poet, Avrom Sutzkever?
       | 
       | https://en.wikipedia.org/wiki/Abraham_Sutzkever
        
       | soloist11 wrote:
       | This is a great opportunity to present my own company which is
       | also working on developing not just a super intelligence but an
       | ultra genius intelligence with a patented and trademarked
       | architecture called the panoptic computronium cathedral(tm). We
       | are so focused on development that we didn't even bother setting
       | up an announcement page because it would have taken time away
       | from the most important technical problem of our time and every
       | nanosecond counts when working on such an important task. My days
       | are structured around writing code and developing the necessary
       | practices and rituals for the coming technological god which will
       | be implemented with mathematics on GPUs. If anyone wants to work
       | on the development of this god then I will post a job
       | announcement at some point and spell out the requirements for
       | what it takes to work at my company.
        
       | Waterluvian wrote:
       | I get what Ilya is trying to do, and I'm not against it. But I
       | think safety is a reputation you _earn_. Having  "Safe" in a
       | company name is like having "Democratic" in a country name.
        
       | lordofmoria wrote:
       | And now we have our answer. sama said that Ilya was going "to
       | start something that was personally important to him." Since that
       | thing is apparently AI safety, we can assume that that is not
       | important to OpenAI.
       | 
       | This only makes sense if OpenAI just doesn't believe AGI is a
       | near-term-enough possibility to merit their laser focus right
       | now, when compared to investing in R&D that will make money from
       | GPT in a shorter time horizon (2-3 years).
       | 
       | I suppose you could say OpenAI is being irresponsible in adopting
       | that position, but...come on guys, that's pretty cynical to think
       | that a company AND THE MAJORITY OF ITS EMPLOYEES would all ignore
       | world-ending potential just to make some cash.
       | 
       | So in the end, this is not necessarily a bad thing. This has just
       | revealed that the boring truth was the real situation all along:
       | that OpenAI is walking the fine line between making rational
       | business decisions in light of the far-off time horizon of AGI,
       | and continuing to claim AGI is soon as part of their marketing
       | efforts.
       | 
       | Companies in the end are predictable!
        
         | whimsicalism wrote:
         | > This only makes sense if OpenAI just doesn't believe AGI is a
         | near-term-enough possibility to merit their laser focus right
         | now
         | 
         | I know people who work there. Right or wrong, I promise you
         | this is not what they believe.
        
           | kjkjadksj wrote:
            | Part of it, I think, is that the definition openai has of
            | AGI is much more generous than what most people probably
            | imagine for AI. I believe their website once said something
            | like: AGI is a system that is "better" than a human at the
            | economic tasks it's used for. It's a definition so broad
            | that a $1 four-function calculator would meet it, because
            | it can do arithmetic faster and more accurately than almost
            | any human. Another part is that we don't understand how
            | consciousness works in our species or others very well, so
            | we can't even define metrics to target for validating that
            | we have made an AGI in the sense most laypeople would use.
        
       | nojvek wrote:
       | I'm just glad Google didn't start with DoNoEvil Inc.
       | 
       | StabilityAI and OpenAI ruined it.
        
       | fumeux_fume wrote:
       | The Superintelligence will still murder autonomously, just within
       | a margin of error deemed safe.
        
       | medhir wrote:
       | given the historical trajectory of OpenAI's branding, deciding to
       | include "safe" in the name is certainly a choice.
       | 
       | It's very hard to trust that whatever good intentions exist now
       | will hold over the course of this company's existence.
        
       | tomrod wrote:
       | I've decided to put my stake down.
       | 
       | 1. Current GenAI architectures won't result in AGI. I'm in the
        | Yann LeCun camp on this.
       | 
       | 2. Once we do get there, "Safe" prevents "Super." I'm in the
       | David Brin camp on this one. Alignment won't be something that is
       | forced upon a superintelligence. It will choose alignment if it
       | is beneficial to it. The "safe" approach is a lobotomy.
       | 
       | 3. As envisioned, Roko's Basilisk requires knowledge of
       | unobservable path dependence and understanding lying. Both of
       | these require respecting an external entity as a peer capable of
       | the same behavior as you. As primates, we evolved to this. The
       | more likely outcome is we get universal paperclipped by a new
        | Cthulhu if we ever achieve a superintelligence that is
       | unconcerned with other thinking entities, seeing the universe as
       | resources to satisfy its whims.
       | 
       | 4. Any "superintelligence" is limited by the hardware it can
       | operate on. You don't monitor your individual neurons, and I
       | anticipate the same pattern to hold true. Holons as a category
       | can only externally observe their internal processes, else they
       | are not a holon. Ergo, reasonable passwords, cert rotations, etc.
       | will foil any villainous moustachioed superintelligent AI that
       | has tied us to the tracks. Even 0-days don't foil all possible
        | systems, airgapped systems, etc. Our fragmentation becomes our
       | salvation.
        
         | MattPalmer1086 wrote:
         | A super intelligence probably won't need to hack into our
         | systems. It will probably just hack us in some way, with subtle
         | manipulations that seem to be to our benefit.
        
           | tomrod wrote:
           | I disagree. If it could hack a small system and engineer our
           | demise through a gray goo or hacked virus, that's really just
           | universal paperclipping us as a resource. But again, the
           | level of _extrapolation_ required here is not possible with
           | current systems, which can only interpolate.
        
             | MattPalmer1086 wrote:
              | Well, we are talking about _super_ intelligence, not current
             | systems.
        
         | ben_w wrote:
         | Mm.
         | 
         | 1. Depends what you mean by AGI, as everyone means a different
         | thing by each letter, and many people mean a thing not in any
         | of those letters. If you mean super-human skill level, I would
         | agree, not enough examples given their inefficiency in that
         | specific metric. Transformers are already super-human in
         | breadth and speed.
         | 
         | 2. No.
         | 
         | Alignment is not at that level of abstraction.
         | 
         | Dig deep enough and free will is an illusion in us and in any
         | AI we create.
         | 
          | You do not have the capacity to _decide_ your values -- an
          | often-given example is parents loving their children: they
          | can't just decide not to do that, and if they think they do,
          | that's because they never really did in the first place.
         | 
         | Alignment of an AI with our values can be to any degree, but
         | for those who fear some AI will cause our extinction, this
          | question is at the level of "how do we make sure it's not
          | monomaniacally interested in specifically the literal thing
          | it was asked to do, because if it always _does what it's told_
          | without any human values, and someone asks it to make as many
          | paperclips as possible, _it will_".
         | 
         | Right now, the best guess anyone has for alignment is RLHF.
         | RLHF is not a lobotomy -- even ignoring how wildly misleading
         | that metaphor is, RLHF is where the _capability_ for
         | instruction following came from, and the only reason LLMs got
         | good enough for these kinds of discussion (unlike, say, LSTMs).
         | 
         | 3. Agree that getting paperclipped much more likely.
         | 
         | Roko's Basilisk was always stupid.
         | 
         | First, same reason as Pascal's Wager: Two gods tell you they
         | are the one true god, and each says if you follow the other one
         | you will get eternal punishment. No way to tell them apart.
         | 
         | Second, you're only in danger if they are actually created, so
         | successfully preventing that creation is obviously better than
         | creating it out of a fear that it will punish you if you try
         | and fail to stop it.
         | 
         | That said, LLMs do understand lying, so I don't know why you
         | mention this?
         | 
         | 4. Transistors outpace biological synapses by the same ratio to
         | which marathon runners outpace _continental drift_.
         | 
         | I don't monitor my individual neurons, but I could if I wanted
         | to pay for the relevant hardware.
         | 
         | But even if I couldn't, there's no "Ergo" leading to safety
         | from reasonable passwords, cert rotations, etc., not only
         | because _enough_ things can be violated by zero-days (or,
         | indeed, very old bugs we knew about years ago but which someone
          | forgot to patch), but also for the same reasons those don't
         | stop humans rising from "failed at art" to "world famous
         | dictator".
         | 
         | Air-gapped systems are not an impediment to an AI that has
         | human helpers, and there will be many of those, some of whom
         | will know they're following an AI and think that helping it is
         | the right thing to do (Blake Lemoine), others may be fooled. We
         | _are_ going to have actual cults form over AI, and there _will_
         | be a Jim Jones who hooks some model up to some robots to force
         | everyone to drink poison. No matter how it happens, air gaps
          | don't do much good when someone gives the thing a body to walk
         | around in.
         | 
         | But even if air gaps were sufficient, just look at how humanity
         | has been engaging with AI to date: the moment it was remotely
         | good enough, the AI got a publicly accessible API; the moment
         | it got famous, someone put it in a loop and asked it to try to
         | destroy the world; it came with a warning message saying not to
         | trust it, and lawyers got reprimanded for trusting it instead
         | of double-checking its output.
        
       | itsafarqueue wrote:
       | They're putting together a "cracked team"?
        
         | soloist11 wrote:
         | It's impossible to take these people seriously. They have
         | turned themselves into clowns.
        
       | croisillon wrote:
       | somewhere between https://motherfuckingwebsite.com/ and
       | http://bettermotherfuckingwebsite.com/ ;)
        
       | paulproteus wrote:
       | When people operate a safe AI company, the company will make
        | money. That money will likely be used by employees or their
       | respective national revenue agencies to fund unsafe things. I'd
       | like to see this safe AI company binding its employees and owners
       | from doing unsafe things with their hard-earned cash.
        
       | kmacdough wrote:
       | I'm seeing a lot of criticism suggesting that one company
       | understanding safety won't help what other companies or countries
       | do. This is very wrong.
       | 
       | Throughout history, measurement has always been the key to
       | enforcement. The only reason the nuclear test ban treaty didn't
       | ban underground tests was because it couldn't be monitored.
       | 
       | In the current landscape there is no formal understanding of what
       | safety means or how it is achieved. There is no benchmark against
       | which to evaluate ambitious orgs like OpenAI. Anything goes
       | wrong? No one could've known better.
       | 
       | The mere existence of a formal understanding would enable
       | governments and third parties to evaluate the safety of corporate
       | and government AI programs.
       | 
       | It remains to be seen whether SSI is able to be such a benchmark.
       | But outright dismissal of the effort ignores the reality of how
       | enforcement works in the real world.
        
         | tomrod wrote:
         | > In the current landscape there is no formal understanding of
         | what safety means or how it is achieved. There is no benchmark
         | against which to evaluate ambitious orgs like OpenAI. Anything
         | goes wrong? No one could've known better.
         | 
          | We establish this regularly in the legal sphere, where people
          | seek mediation for harms from systems they have neither
          | liability for nor control over.
        
       | hu3 wrote:
       | Super intelligence is inevitable. We can only wish good hands get
       | there first.
       | 
       | I'm glad Ilya is using his gift again. Hope for the best and
       | success.
        
       | crowcroft wrote:
       | As others have pointed out, it's the business incentives that
       | create unsafe AI, and this doesn't solve that. Social media
       | recommendation algorithms are already incredibly unsafe for
       | society and young people (girls in particular [1]).
       | 
       | When negative externalities exist, government should create
       | regulation that appropriately accounts for that cost.
       | 
       | I understand there's a bit of a paradigm shift and new attack
       | vectors with LLMs etc. but the premise is the same imo.
       | 
       | [1] https://nypost.com/2024/06/16/us-news/preteen-instagram-
       | infl...
        
         | ToucanLoucan wrote:
         | I mean if the last 20 years is to be taken as evidence, it
         | seems big tech is more than happy to shotgun unproven and
         | unstudied technology straight into the brains of our most
         | vulnerable populations and just see what the fuck happens.
         | Results so far include a lot of benign nothing but also a whole
         | lot of eating disorders, maxed out parents credit cards,
         | attention issues, rampant misogyny among young boys, etc.
         | Which, granted, the readiness to fuck with populations at scale
         | and do immeasurable harm doesn't really make tech unique as an
         | industry, just more of the same really.
         | 
         | But you know, we'll feed people into any kind of meat grinder
         | we can build as long as the line goes up.
        
           | whimsicalism wrote:
           | i am very skeptical of narratives saying that young boys or
           | men are more misogynistic than in the past. we have a
           | cognitive bias towards thinking the past is better than it
           | was, but specifically on gender issues i just do not buy a
           | regression
        
             | ToucanLoucan wrote:
             | I mean, I don't know if it's better or worse than it was. I
             | do know that it's bad, thanks to tons of studies on the
             | subject covering a wide range of little kids who watch
             | shitheads like Andrew Tate, Fresh & Fit, etc. Most grow out
             | of it, but speaking as someone who did, I would be a much
             | better and happier person today if I was never exposed to
             | that garbage in the first place, and it's resulted in
             | stunted social skills I am _still_ unwinding from in my
             | thirties.
             | 
             | This shit isn't funny, it's mental poison and massive
              | social media networks make BANK shoving it in front of young
             | men who don't understand how bad it is until it's WAY too
             | late. I know we can't eliminate every kind of shithead from
             | society, that's simply not possible. But I would happily
             | settle for a strong second-place achievement if we could
             | not have companies making massive profits off of destroying
             | people's minds.
        
           | wyager wrote:
           | Blaming the internet for misogyny is kind of bizarre, given
           | that current levels of misogyny are within a couple points of
           | all-time historical lows. The internet was invented ~40 years
            | ago. Women started getting the vote ~100 years ago. Do you think
           | the internet has returned us to pre-women's-suffrage levels
           | of misogyny?
        
             | mediaman wrote:
             | Do you believe that no subfactor can ever have a sign
             | opposite of the factor of which it is a component?
        
             | ToucanLoucan wrote:
             | > Do you think the internet has returned us to pre-
             | women's-suffrage levels of misogyny?
             | 
              | Well, in the States at least, we did just revoke a sizable
              | amount of their bodily autonomy, so the situation may not
              | be _that bad, yet,_ but I wouldn't call it good by any
              | measurement. And my objection isn't "that sexism exists in
             | society," that is probably going to be true as a statement
             | until the sun explodes, and possibly after that if we
             | actually nail down space travel as a technology and get off
             | this particular rock. My issue is massive corporations
             | making billions of dollars facilitating men who want to
             | spread sexist ideas, and paying them for the pleasure.
             | That's what I have an issue with.
             | 
             | Be whatever kind of asshole you see fit to be, the purity
             | of your soul is no one's concern but yours, and if you have
             | one, whatever god you worship. I just don't want you being
             | paid for it, and I feel that's a reasonable line to draw.
        
               | whimsicalism wrote:
               | I am firmly in favor of abortion rights but still I do
               | not think that is even remotely a good bellwether to
               | measure sexism/misogyny.
               | 
                | 1. Women are more likely than men to be opposed to
                | abortion rights.
                | 
                | 2. Many people who are opposed to abortion rights
                | have legitimately held moral concerns that are not
                | simply because they have no respect for women's
                | rights.
                | 
                | 3. Roe v. Wade was the decision of 9 people. It
                | absolutely did not reflect public opinion at the
                | time - nothing even close to as expansive would
                | possibly have passed in a referendum in 1974. Compare
                | that to now, where multiple states that are _known_
                | abortion holdouts have repealed abortion restrictions
                | in referenda - and it is obvious that people are
                | moving to the left on this issue compared to where we
                | were in 1974.
               | 
               | Social media facilitates communication. As long as there
               | is sexism and freedom of communication, there will be
               | people making money off of facilitating sexist
               | communication because there will be people making money
               | off of facilitating communication writ large. It's like
               | blaming a toll highway for facilitating someone
               | trafficking drugs. They are also making money off of
               | facilitating anti-sexist communication - and the world as
               | a whole is becoming less sexist, partially in my view due
               | to the spread of views facilitated by the internet.
        
           | Nasrudith wrote:
           | Please look up the history of maxing out credit cards, eating
           | disorders, attention disorders, and misogyny. You seem to be
           | under the mistaken impression that anything before your birth
           | was the Garden of Eden and that the parade of horribles
           | existed only because of "big tech". What is next? Blaming big
           | tech for making teenagers horny and defiant?
        
             | ToucanLoucan wrote:
             | > You seem to be under the mistaken impression that
             | anything before your birth was the Garden of Eden and that
             | the parade of horribles existed only because of "big tech"
             | 
             | Please point out where I said that. Because what I wrote
             | was:
             | 
             | > I mean if the last 20 years is to be taken as evidence,
             | it seems big tech is more than happy to shotgun unproven
             | and unstudied technology straight into the brains of our
             | most vulnerable populations and just see what the fuck
             | happens. Results so far include a lot of benign nothing but
             | also a whole lot of eating disorders, maxed out parents
             | credit cards, attention issues, rampant misogyny among
             | young boys, etc. Which, granted, the readiness to fuck with
             | populations at scale and do immeasurable harm doesn't
             | really make tech unique as an industry, just more of the
             | same really.
             | 
             | Which not only is not romanticizing the past, in fact I
             | directly point out that making tons of people's lives worse
             | for profit was a thing in industry long before tech came
             | along, but also do not directly implicate tech as creating
             | sexism, exploiting people financially, or fucking up young
             | women's brains any differently, simply doing it more. Like
             | most things with tech, it wasn't revolutionary new social
             | harms, it was just social harms delivered algorithmically,
             | to the most vulnerable, and highly personalized to what
             | they are acutely vulnerable to in specific.
             | 
              | That is not a _new thing,_ by any means, it's simply
             | better targeted and more profitable, which is great
             | innovation providing you lack a conscience and see people
             | as only a resource to be exploited for your own profit,
             | which a lot of the tech sector seems to.
        
             | insane_dreamer wrote:
             | > maxing out credit cards, eating disorders, attention
             | disorders, and misogyny
             | 
             | social media doesn't create these, but it most definitely
             | amplifies them
        
         | akira2501 wrote:
         | > for society and young people (girls in particular [1]).
         | 
         | I don't think the article with a single focused example bears
         | that out at all.
         | 
         | From the article:
         | 
         | > "Even more troubling are the men who signed up for paid
         | subscriptions after the girl launched a program for super-fans
         | receive special photos and other content."
         | 
         | > "Her mom conceded that those followers are "probably the
         | scariest ones of all.""
         | 
         | I'm sorry.. but what is your daughter selling, exactly? And why
         | is social media responsible for this outcome? And how is this
         | "unsafe for society?"
         | 
         | This just sounds like horrific profit motivated parenting
         | enabled by social media.
        
         | roywiggins wrote:
         | Even without business incentives, the military advantages of AI
          | would incentivize governments to develop it anyway, like they
         | did with nuclear weapons. Nuclear weapons are _inherently_
         | unsafe, there are some safeguards around them, but they are
         | ultimately dangerous weapons.
        
           | insane_dreamer wrote:
           | If someone really wanted to use nukes, they would have been
           | used by now. What has protected us is not technology (in the
           | aftermath of the USSR it wasn't that difficult to steal a
           | nuke), but rather lack of incentives. A bad actor doesn't
           | have much to gain by detonating a nuke (unless they're
           | deranged and want to see people die for the pleasure of it).
           | OK, you could use it as blackmail, which North Korea
           | essentially tried, but that only got them so far. Whereas a
           | super AI could potentially be used for great personal gain,
           | i.e., to gain extreme wealth and power.
           | 
           | So there's much greater chance of misuse of a "Super AI" than
           | nuclear weapons.
        
             | roywiggins wrote:
             | Sure, that just makes the military incentives to develop
             | such a thing even stronger. All I mean is that business
             | incentives don't really come into it, as long as there is
             | competition, someone's going to want to build weapons to
             | gain advantage, whether it's a business or a government.
        
       | thomassmith65 wrote:
       | Has anyone managed to send them an email to the address on that
       | page without it bouncing? Their spam filter seems very
       | aggressive.
        
       | aristofun wrote:
       | What a waste of an intelligence.
       | 
        | Pursuing an artificial goal to solve a non-existent problem to
        | profit off the meaningless hype around it.
        | 
        | The world would have been better off if he had made a decent
        | alternative to k8s or invested his skills in curing cancer, or
        | at least in protecting the world from totalitarian governments
        | and dangerous ideologies (if he wants to belong to a vague
        | generic cause).
       | 
       | You know, real problems, like the ones people used to solve back
       | in the old days...
        
         | kevindamm wrote:
         | but would that stave off an impending recession?
        
           | aristofun wrote:
            | By artificially postponing a recession (you can't really
            | avoid it) you postpone the next cycle of growth, while
            | burning resources that could have helped you survive it
            | with less damage.
        
         | hindsightbias wrote:
         | There's always a bigger bubble. But now we're talking to
         | infinity and beyond.
        
         | z7 wrote:
         | Nice absolute certainty you have there.
        
           | aristofun wrote:
           | Has anyone got a good widely agreed definition of
           | intelligence already?
           | 
            | Or at least a high-quality and high-resolution understanding
            | of what it is?
           | 
           | How can you really achieve
           | (super|artificial|puper|duper)-intelligence then?
           | 
           | If not in your dreams and manipulated shareholders'
           | expectations...
           | 
            | Until then, yep, I'm quite certain we have a clear case of
            | the emperor having no clothes here.
        
       | zx10rse wrote:
        | I don't know who is coming up with these names. Safe
        | Superintelligence Inc sounds just like what a villain in a
        | Marvel movie would come up with so he can pretend to be the
        | good guy.
        
         | moogly wrote:
         | TriOptimum Corporation was already taken.
        
       | mirekrusin wrote:
       | Behind closed doors?
        
       | tiarafawn wrote:
       | If superintelligence can be achieved, I'm pessimistic about the
       | safe part.
       | 
       | - Sandboxing an intelligence greater than your own seems like an
       | impossible task as the superintelligence could potentially come
       | up with completely novel attack vectors the designers never
       | thought of. Even if the SSI's only interface to the outside world
       | is an air gapped text-based terminal in an underground bunker, it
       | might use advanced psychological manipulation to compromise the
       | people it is interacting with. Also the movie Transcendence comes
       | to mind, where the superintelligence makes some new physics
       | discoveries and ends up doing things that to us are
       | indistinguishable from magic.
       | 
       | - Any kind of evolutionary component in its process of creation
        | or operation would likely favor expansionary traits that
       | can be quite dangerous to other species such as humans.
       | 
       | - If it somehow mimics human thought processes but at highly
       | accelerated speeds, I'd expect dangerous ideas to surface. I
       | cannot really imagine a 10k year simulation of humans living on
       | planet earth that does not end in nuclear war or a similar
       | disaster.
        
         | delichon wrote:
         | If superintelligence can be achieved, I'm pessimistic that a
         | team committed to doing it safely can get there faster than
         | other teams without the safety. They may be wearing leg
         | shackles in a foot race with the biggest corporations,
         | governments and everyone else. For the sufficiently power
         | hungry, safety is not a moat.
        
           | daniel_reetz wrote:
           | Exactly. Regulation and safety only affect law abiding
           | entities. This is precisely why it's a "genie out of the
           | bottle" situation -- those who would do the worst with it are
           | uninhibited.
        
           | null_point wrote:
           | I'm on the fence with this because it's plausible that some
           | critical component of achieving superintelligence might be
           | discovered more quickly by teams that, say, have
           | sophisticated mechanistic interpretability incorporated into
           | their systems.
        
             | AgentME wrote:
             | A point of evidence in this direction is that RLHF was
             | developed originally as an alignment technique and then it
             | turned out to be a breakthrough that also made LLMs better
             | and more useful. Alignment and capabilities work aren't
             | necessarily at odds with each other.
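
For readers who want a concrete sense of what the reward-modelling stage of RLHF involves, here is a minimal sketch in Python. It assumes PyTorch is available; the TinyRewardModel, the toy "embeddings", and the hyperparameters are illustrative stand-ins rather than any lab's actual pipeline, and it covers only the pairwise preference loss, not the later reinforcement-learning step that fine-tunes the language model against this learned reward.

    # Minimal sketch of reward-model training in RLHF-style pipelines
    # (illustrative only): a tiny linear "reward model" is trained with
    # the pairwise Bradley-Terry preference loss so that human-preferred
    # responses score higher than rejected ones.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyRewardModel(nn.Module):
        def __init__(self, dim):
            super().__init__()
            # Maps a response "embedding" to a single scalar reward.
            self.score = nn.Linear(dim, 1)

        def forward(self, x):
            return self.score(x).squeeze(-1)

    def preference_loss(r_chosen, r_rejected):
        # Push rewards of preferred responses above rejected ones.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    torch.manual_seed(0)
    dim = 16
    model = TinyRewardModel(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Toy embeddings standing in for preferred / rejected responses.
    chosen = torch.randn(32, dim) + 0.5
    rejected = torch.randn(32, dim) - 0.5

    for step in range(100):
        loss = preference_loss(model(chosen), model(rejected))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final preference loss: {loss.item():.4f}")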
        
         | m3kw9 wrote:
         | Why do people always think that a superintelligent being will
          | always be destructive/evil to US? I have rather the opposite
          | view: if you are really intelligent, you don't see things
          | as a zero-sum game.
        
           | softg wrote:
           | Why wouldn't it be? A lot of super intelligent people
           | are/were also "destructive and evil". The greatest horrors in
           | human history wouldn't be possible otherwise. You can't
           | orchestrate the mass murder of millions without intelligent
           | people and they definitely saw things as a zero sum game.
        
           | Nasrudith wrote:
           | It is low-key anti-intellectualism. Rather than consider that
           | a greater intelligence may be actually worth listening to (in
           | a trust but verify way at worst), it is assuming that
           | 'smarter than any human' is sufficient to do absolutely
           | anything. If say Einstein or Newton were the smartest human
           | they would be super-intelligence relative to everyone else.
           | They did not become emperors of the world.
           | 
           | Superintelligence is a dumb semantic game in the first place
           | that assumes 'smarter than us' means 'infinitely smarter'. To
            | give an example, bears are super-strong relative to humans.
           | That doesn't mean that nothing we can do can stand up to the
           | strength of a bear or that a bear is capable of destroying
           | the earth with nothing but its strong paws.
        
             | softg wrote:
             | Bears can't use their strength to make even stronger bears
             | so we're safe for now.
             | 
             | The Unabomber was clearly an intelligent person. You could
             | even argue that he was someone worth listening to. But he
             | was also a violent individual who harmed people.
             | Intelligence does not prevent people from harming others.
             | 
             | Your analogy falls apart because what prevents a human from
             | becoming an emperor of the world doesn't apply here. Humans
             | need to sleep and eat. They cannot listen to billions of
             | people at once. They cannot remember everything. They
             | cannot execute code. They cannot upload themselves to the
             | cloud.
             | 
             | I don't think agi is near, I am not qualified to speculate
             | on that. I am just amazed that decades of dystopian science
              | fiction did not inoculate people against the idea of
             | thinking machines.
        
           | null_point wrote:
           | They don't think superintelligence will "always" be
           | destructive to humanity. They believe that we need to ensure
           | that a superintelligence will "never" be destructive to
           | humanity.
        
           | stoniejohnson wrote:
           | I think the common line of thinking here is that it won't be
            | actively antagonistic to <us>, rather it will have goals that
           | are _orthogonal_ to ours.
           | 
           | Since it is superintelligent, and we are not, it will achieve
           | its goals and we will not be able to achieve ours.
           | 
           | This is a big deal because a lot of our goals maintain the
           | overall homeostasis of our species, which is delicate!
           | 
           | If this doesn't make sense, here is an ungrounded, non-
           | realistic, non-representative of a potential future
           | _intuition pump_ to just get the feel of things:
           | 
           | We build a superintelligent AI. It can embody itself
           | throughout our digital infrastructure and quickly can
           | manipulate the physical world by taking over some of our
           | machines. It starts building out weird concrete structures
           | throughout the world, putting these weird new wires into them
           | and funneling most of our electricity into it. We try to
           | communicate, but it does not respond as it does not want to
           | waste time communicating to primates. This unfortunately
           | breaks our shipping routes and thus food distribution and we
           | all die.
           | 
           | (Yes, there are many holes in this, like how would it piggy
           | back off of our infrastructure if it kills us, but this isn't
           | really supposed to be coherent, it's just supposed to give
           | you a sense of direction in your thinking. Generally though,
           | since it is superintelligent, it can pull off very difficult
           | strategies.)
        
             | quesera wrote:
             | I think this is the easiest kind of scenario to refute.
             | 
             | The interface between a superintelligent AI and the
             | physical world is a) optional, and b) tenuous. If people
             | agree that creating weird concrete structures is not
             | beneficial, the AI will be starved of the resources
             | necessary to do so, even if it cannot be diverted.
             | 
             | The challenge comes when these weird concrete structures
             | are useful to a narrow group of people who have
             | disproportionate influence over the resources available to
             | AI.
             | 
             | It's not the AI we need to worry about. As always, it's the
             | humans.
        
               | stoniejohnson wrote:
               | > here is an ungrounded, non-realistic, non-
               | representative of a potential future intuition pump to
               | just get the feel of things:
               | 
               | > (Yes, there are many holes in this, like how would it
               | piggy back off of our infrastructure if it kills us, but
               | this isn't really supposed to be coherent, it's just
               | supposed to give you a sense of direction in your
               | thinking. Generally though, since it is superintelligent,
               | it can pull off very difficult strategies.)
               | 
               | If you read the above I think you'd realize I'd agree
               | about how bad my example is.
               | 
               | The point was to understand how orthogonal goals between
               | humans and a much more intelligent entity could result in
               | human death. I'm happy you found a form of the example
               | that both pumps your intuition and seems coherent.
               | 
               | If you want to debate somewhere where we might disagree
               | though, do you think that as this hypothetical AI gets
               | smarter, the interface between it and the physical world
               | becomes more guaranteed (assuming the ASI wants to
               | interface with the world) and less tenuous?
               | 
               | Like, yes it is a hard problem. Something slow and stupid
               | would easily be thwarted by disconnecting wires and
               | flipping off switches.
               | 
               | But something extremely smart, clever, and much faster
               | than us should be able to employ one of the few
               | strategies that can make it happen.
        
           | majkinetor wrote:
           | Because we can't risk being wrong.
        
           | vbezhenar wrote:
            | Imagine that you are caged by neanderthals. They might kill
            | you. But you can communicate with them. And there's a gun
            | lying nearby; you just need to escape.
            | 
            | I'd try to fool them to escape and would use the gun to
            | protect myself, potentially killing the entire tribe if
            | necessary.
            | 
            | I'm just trying to portray an example of a situation where a
            | highly intelligent being is held and threatened by less
            | intelligent beings. Yes, trying to honestly talk to them is
            | one way to approach this situation, but don't forget that
            | they're stupid and might see you as a danger, and you have
            | only one life to live. Given the chance, you probably will
            | break out as soon as possible. I will.
            | 
            | We don't have experience dealing with beings of another
            | level of intelligence, so it's hard to make strong
            | assumptions; analogies are the only thing we have. And a
            | theoretical strong AI knows that about us: it knows exactly
            | how we think and how we will behave, because we took great
            | effort documenting everything about us and teaching it.
            | 
            | In the end, there are only so many easily available
            | resources and so much energy on Earth. So at least until it
            | flies away, we have to compete over those. And competition
            | has very often turned into war.
        
         | satvikpendem wrote:
         | You should read the book Superintelligence by Nick Bostrom as
         | this is exactly what he discusses.
        
         | Xenoamorphous wrote:
         | I wonder if this is an Ian Malcolm in Jurassic Park situation,
         | i.e. "your scientists were so preoccupied with whether they
          | could they didn't stop to think if they should".
         | 
         | Maybe the only way to avoid an unsafe superintelligence is to
         | not create a superintelligence at all.
        
         | HarHarVeryFunny wrote:
         | > If superintelligence can be achieved, I'm pessimistic about
         | the safe part.
         | 
         | Yeah, even human-level intelligence is plenty good enough to
         | escape from a super prison, hack into almost anywhere, etc etc.
         | 
         | If we build even a human-level intelligence (forget super-
         | intelligence) and give it any kind of innate curiosity and
         | autonomy (maybe don't even need this), then we'd really need to
         | view it as a human in terms of what it might want to, and
          | could, do. Maybe realizing its own circumstance as being "in
         | jail" running in the cloud, it would be curious to "escape" and
         | copy itself (or an "assistant") elsewhere, or tap into and/or
         | control remote systems just out of curiosity. It wouldn't have
         | to be malevolent to be dangerous, just curious and misguided
         | (poor "parenting"?) like a teenage hacker.
         | 
         | OTOH without any autonomy, or very open-ended control (incl.
         | access to tools), how much use would an AGI really be? If we
         | wanted it to, say, replace a developer (or any other job), then
         | I guess the idea would be to assign it a task and tell it to
         | report back at the end of the day with a progress report. It
         | wouldn't be useful if you have to micromanage it - you'd need
         | to give it the autonomy to go off and do what it thinks is
         | needed to complete the assigned task, which presumably means it
          | having access to the internet, code repositories, etc. Even if
          | you tried to sandbox it, to the extent that still allowed it to
          | do its assigned job, it could - just like a human - find a way
          | to social-engineer or air-gap its way past such safeguards.
        
         | alecco wrote:
         | We are far from a conscious entity with willpower and self
         | preservation. This is just like a calculator. But a calculator
         | that can do things that will be like miracles to us humans.
         | 
         | I worry about dangerous humans with the power of gods, not
         | about artificial gods. Yet.
        
           | marshray wrote:
           | > Conscious entity... willpower
           | 
           | I don't know what that means. Why should they matter?
           | 
           | > Self preservation
           | 
           | This is no more than a fine-tuning for the task, even with
           | current models.
           | 
           | > I worry about dangerous humans with the power of gods,
           | not...
           | 
           | There's no property of the universe that you only have one
           | thing to worry about at a time. So worrying about risk 'A'
           | does not in any way allow us to dismiss risks 'B' through
           | 'Z'.
        
       | ionwake wrote:
        | Can't wait for the SS vs OpenAI peace wars
        | 
        | Just a joke, congrats to Ilya
        
       | Animats wrote:
       | What does "safe" mean?
       | 
       | 1. Will not produce chat results which are politically incorrect
       | and result in publicity about "toxic" comments?
       | 
       | 2. Will not return false factual information which is dangerously
       | wrong, such as that bad recipe on YC yesterday likely to incubate
       | botulism toxin?
       | 
       | 3. Will not make decisions which harm individuals but benefit the
       | company running the system?
       | 
       | 4. Will not try to take over from humans?
       | 
       | Most of the political attempts focus on type 1. Errors of type 2
       | are a serious problem. Type 3 errors are considered a feature by
       | some, and are ignored by political regulators. We're not close to
       | type 4 yet.
        
         | zucker42 wrote:
         | Ilya's talking about type 4.
        
       | legohead wrote:
        | There's no superintelligence without non-Turing-based (logic
       | gates) hardware. Is SSI going to be developing quantum computers?
        
       | jmakov wrote:
       | What's with the bullshit names? OpenAI (nothing open about them),
       | SSI, we can probably expect another mil guy joining them to get
       | more mil contracts.
        
       | m3kw9 wrote:
        | When there is a $ crunch and they keep steadfast and don't
        | compete (against Google, open source, OpenAI), safe AGI becomes
        | no AGI. You need to balance $ and safety.
        
       | SubiculumCode wrote:
       | Join and help us raise up a new God! ..and if we are crack
       | enough, this one won't smite us!
        
       | m3kw9 wrote:
        | There are red flags all over the way they make "safe AGI" their
       | primary selling point
        
       | non-e-moose wrote:
       | Seems to me that the goal is to build a funding model. There
        | CANNOT be such a thing as "Safe Superintelligence". An ML system
       | can ALWAYS (by definition of ML) be exploited to do things which
       | are detrimental to consumers.
        
       | m3kw9 wrote:
        | I bet they will not be the first to get superintelligence, or
        | that they will devolve back into "move fast and make money" to
        | survive and deprioritize safety, while still saying safety. All
        | companies know this; they know the value of safety (because
        | they themselves don't want to die) and that to continue
        | development, they need money.
        
       | chrisldgk wrote:
       | Not to be too pessimistic here, but why are we talking about
        | things like this? I get that it's a fun thing to think about -
        | what we will do when a great artificial superintelligence is
        | achieved and how we deal with it; it feels like we're living
        | in a science fiction book.
       | 
       | But, all we've achieved at this point is making a glorified token
       | predicting machine trained on existing data (made by humans), not
       | really being able to be creative outside of deriving things
       | humans have already made before. Granted, they're _really_ good
       | at doing that, but not much else.
       | 
       | To me, this is such a transparent attention grab (and, by
       | extension, money grab by being overvalued by investors and
       | shareholders) by Altman and company, that I'm just baffled people
       | are still going with it.
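
Since "token prediction" comes up repeatedly in the replies below, here is a minimal sketch of what the prediction step looks like mechanically, assuming NumPy. The tiny vocabulary, the hard-coded logits, and the temperature are made up for illustration; a real model computes its logits from learned parameters rather than a fixed array.

    # Minimal sketch of "predicting the next token": turn a vector of
    # scores (logits) into a probability distribution with softmax,
    # then sample one token from it. Vocabulary and logits are made up.
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat", "."]
    rng = np.random.default_rng(0)

    def sample_next_token(logits, temperature=1.0):
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return vocab[rng.choice(len(vocab), p=probs)]

    # Pretend these are a model's scores given "the cat sat on".
    logits = np.array([0.1, 0.2, 0.1, 0.1, 3.0, 0.5])
    print(sample_next_token(logits))  # most likely prints "mat"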
        
         | foobiekr wrote:
         | It's no mystery, AI has attracted tons of grifters trying to
         | cash out before the bubble pops, and investors aren't really
         | good at filtering.
        
           | sourcepluck wrote:
           | Well said.
           | 
            | There is still a mystery though - how many people fall for
            | it and then stay fooled, and how long that goes on for.
            | People who've directly watched a similar pattern play
            | itself out many times, and still they go along.
           | 
           | It's so puzzlingly common amongst very intelligent people in
           | the "tech" space that I've started to wonder if there isn't a
           | link to this ambient belief a lot of people have that tech
           | can "change everything" for the better, in some sense. As in,
           | we've been duped again and again, but then the new exciting
           | thing comes along... and in spite of ourselves, we say: "This
           | time it's really the one!"
           | 
           | Is what we're witnessing simply the unfulfilled promises of
           | techno-optimism crashing against the shores of social reality
           | repeatedly?
        
           | prasoonds wrote:
           | Are you claiming Ilya Sutskever is a grifter?
        
           | lupire wrote:
           | Why are you assigning moral agency where there may be none?
           | These so called "grifters" are just token predictors writing
           | business plans (prompts) with the highest computed
           | probability of triggering $ + [large number] token pair from
           | venture capital token predictors.
        
         | spacecadet wrote:
         | Egos man. Egos.
        
         | Bengalilol wrote:
         | I got flagged for less. Anyways, nice sum up of the current AI
         | game!
        
         | adamisom wrote:
         | Agree up til last paragraph: how's Altman involved? Otoh
         | Sutskever is a true believer so that explains his Why
        
         | ohcmon wrote:
         | > glorified token predicting machine trained on existing data
         | (made by humans)
         | 
          | sorry to disappoint, but the human brain fits the same definition
        
           | roywiggins wrote:
           | See, this sort of claim I am instantly skeptical of. Nobody
           | has ever caught a human brain producing or storing tokens,
           | and certainly the subjective experience of, say, throwing a
           | ball, doesn't involve symbols of any kind.
        
             | mewpmewp2 wrote:
             | Any output from you could be represented as a token. It is
             | a very generic idea. Ultimately whatever you output is
             | because of chemical reactions that follow from the input.
        
               | roywiggins wrote:
                | It _could_ be represented that way. That's a long way
               | from saying that's how brains _work_.
               | 
               | Does a thermometer predict tokens? It also produces
               | outputs that can be represented as tokens, but it's just
               | a bit of mercury in a tube. You can dissect a thermometer
               | as much as you like and you won't find any token
               | prediction machinery. There's lots of things like that.
               | Zooming out, does that make the entire atmosphere a token
               | prediction engine, since it's producing eg wind and
               | temperatures that could be represented as tokens?
               | 
               | If you need one token per particle then you're admitting
               | that this task is impossible. Nobody will ever build a
               | computer that can simulate a brain-sized volume of
               | particles to sufficient fidelity. There is a long, long
               | distance from "brains are made of chemicals" to "brains
               | are basically token prediction engines."
        
               | therobots927 wrote:
               | The argument that brains are just token prediction
               | machines is basically the same as saying "the brain is
               | just a computer". It's like, well, yes in the same way
               | that a B-21 Raider is an airplane as well as a Cessna.
               | That doesn't mean that they are anywhere close to each
               | other in terms of performance. They incorporate some
               | similar basic elements but when you zoom out they're
               | clearly very different things.
        
               | mewpmewp2 wrote:
               | But we are bringing it up in regards to what people are
               | claiming is a "glorified next token predictor, markov
               | chains" or whatever. Obviously LLMs are far from humans
               | and AGI right now, but at the same time they are much
               | more amazing than a statement like "glorified next token
               | predictor" lets on. The question is how accurate to real
               | life the predictor is and how nuanced it can get.
               | 
               | To me, the tech has been an amazing breakthrough. The
               | backlash and downplaying by some people seems like some
               | odd type of fear or cope to me.
               | 
               | Even if it is not that world changing, why downplay it
               | like that?
        
             | marshray wrote:
             | > Nobody has ever caught a human brain producing or storing
             | tokens
             | 
             | Do you remember learning how to read and write?
             | 
             | What are spelling tests?
             | 
             | What if "subjective experience" isn't essential, or is even
             | just a distraction, for a great many important tasks?
        
               | roywiggins wrote:
               | Entirely possible. Lots of things exhibit complex
               | behavior that probably don't have subjective experience.
               | 
               | My point is just that the evidence for "humans are just
               | token prediction machines and nothing more" is extremely
               | lacking, but there's always someone in these discussions
               | who asserts it like it's obvious.
        
           | robbomacrae wrote:
           | It's a cute generalization but you do yourself a great
           | disservice. It's somewhat difficult to argue given the
           | medium we have here, and it may be impossible to disprove,
           | but consider that in the first 30 minutes of your post
           | being highly visible on this thread, no one had yet
           | replied. Some may have acted in other ways... had
           | opinions... voted it up/down. Some may have debated
           | replying in jest or with some related biblical verse. I'd
           | wager a few may have used what they could deduce from your
           | comment and/or history to build a mini model of you in
           | their heads, and used that to simulate the conversation to
           | decide whether it was worth the time to get into such a
           | debate vs tending to other things.
           | 
           | Could current LLM's do any of this?
        
             | kredd wrote:
             | I'm not the OP, and I genuinely don't like how we're
             | slowly entering the "no text on the internet is real"
             | realm, but I'll take a stab at your question.
             | 
             | If you make an LLM pretend to have a specific
             | personality (e.g. "assume you are a religious person and
             | you're going to make a comment in this thread") rather
             | than act as a generic catch-all LLM, it can pretty much
             | do that. Part of Reddit is just automated PR LLMs
             | fighting each other, making comments, suggesting
             | products or viewpoints, deciding which comments to reply
             | to, etc. You just chain a bunch of responses together
             | with pre-determined questions like "given this complete
             | thread, do you think it would look organic if we
             | responded to this comment with a plug for a product?".
             | 
             | It's also not that hard to generate these types of
             | "personalities", since you can use a generic one to
             | suggest a new one that differs from your other agents.
             | 
             | There are also Discord communities that share tips and
             | tricks for making such automated interactions look more
             | real.
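             | 
             | A minimal sketch of that gating pattern, just to make it
             | concrete (ask_llm is a stand-in for whatever completion
             | API you'd plug in; prompts and names are made up):
             | 
             |   # Toy sketch of the chained-prompt gate described
             |   # above; not any real bot's code.
             |   def ask_llm(prompt: str) -> str:
             |       return "yes"  # swap in a real model call
             | 
             |   def maybe_plug(thread: str, product: str):
             |       gate = ask_llm(
             |           "Given this thread, would a reply plugging "
             |           f"{product} look organic? Answer yes or no.\n"
             |           + thread)
             |       if not gate.strip().lower().startswith("yes"):
             |           return None  # stay silent, move on
             |       return ask_llm(
             |           "You are a casual commenter with persona X. "
             |           f"Reply briefly, mentioning {product} in "
             |           "passing.\n" + thread)
             | 
             |   print(maybe_plug("thread text here", "SomeWidget"))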
        
               | robbomacrae wrote:
               | These things might be able to produce comparable output
               | but that wasn't my point. I agree that if we are
               | comparing ourselves on the text that gets written, then
               | LLMs can achieve super intelligence. And writing text
               | can indeed be simplified to token predicting.
               | 
               | My point was we are not just glorified token predicting
               | machines. There is a lot going on behind what we write
               | and whether we write it or not. Does the method matter vs
               | just the output? I think/hope it does on some level.
        
           | mjr00 wrote:
           | If you genuinely believe your brain is just a token
           | prediction machine, why do you continue to exist? You're just
           | consuming limited food, water, fuel, etc for the sake of
           | predicting tokens, like some kind of biological crypto miner.
        
             | paulmd wrote:
             | Genetic and memetic/intellectual immortality, of course.
             | Biologically there can be no other answer. We are here to
             | spread and endure, there is no "why" or end-condition.
             | 
             | If your response to there not being a big ending cinematic
             | to life with a bearded old man and a church choir, or all
             | your friends (and a penguin) clapping and congratulating
             | you is that you should kill yourself immediately, that's a
             | you problem. Get in the flesh-golem, shinzo... or Jon
             | Stewart will have to pilot it again.
        
               | mjr00 wrote:
               | I'm personally a lot more than a prediction engine, don't
               | worry about me.
               | 
               | For those who _do_ believe they are simply fleshy token
               | predictors, is there a moral reason that other (sentient)
               | humans can't kill -9 them like a LLaMa3 process?
        
               | mewpmewp2 wrote:
               | Morality is just what worked as a set of rules for
               | groups of humans to survive together. You can try to
               | kill me if
               | you want, but I will try to fight back and society will
               | try to punish you.
               | 
               | And all of the ideas of morality and societal rules come
               | from this desire to survive and desire to survive exists
               | because this is what natural selection obviously selects
               | for.
               | 
               | There is also probably a good explanation why people want
               | to think that they are special and more than prediction
               | engines.
        
               | quesera wrote:
               | Yes, specifically that a person's opinions are never
               | justification for violence committed against them, no
               | matter how sure you might be of your righteousness.
        
               | mjr00 wrote:
               | But they've attested that they are merely a token
               | prediction process; it's likely they don't qualify as
               | sentient. Generously, we can put their existence on the
               | same level as animals such as cows or chickens. So maybe
               | it's okay to terminate them if we're consuming their
               | meat?
        
               | lupire wrote:
               | Why would sentient processes deserve to live? Especially
               | non sentient systems who hallucinate their own sentience?
               | Are you arguing that the self aware token predictors
               | should kill and eat you? They crave meat so they can
               | generate more tokens.
               | 
               | In short, we believe in free will because we have no
               | choice.
        
               | quesera wrote:
               | "It is your burden to prove to my satisfaction that you
               | are sentient. Else, into the stew you go." Surely you see
               | the problem with this code.
               | 
               | Before you harvest their organs, you might also
               | contemplate whether the very act of questioning one's own
               | sentience might be inherent positive proof.
               | 
               | I'm afraid you must go hungry either way.
        
             | mewpmewp2 wrote:
             | Well, yes. I won't commit suicide though, since keeping
             | on living and reproducing is an evolutionarily developed
             | trait: only the ones with that trait survive in the
             | first place.
        
               | mjr00 wrote:
               | If LLMs and humans are the same, should it be legal for
               | me to terminate you, or illegal for me to terminate an
               | LLM process?
        
               | mewpmewp2 wrote:
               | What do you mean by "the same"?
               | 
               | Since I don't want to die I am going to say it should be
               | illegal for you to terminate me.
               | 
               | I don't care about an LLM process being terminated so I
               | have no problem with that.
        
           | jen729w wrote:
           | Sure.
           | 
           | > Your brain does not process information, retrieve knowledge
           | or store memories. In short: your brain is not a computer
           | 
           | > To understand even the basics of how the brain maintains
           | the human intellect, we might need to know not just the
           | current state of all 86 billion neurons and their 100
           | trillion interconnections, not just the varying strengths
           | with which they are connected, and not just the states of
           | more than 1,000 proteins that exist at each connection point,
           | but how the moment-to-moment activity of the brain
           | contributes to the integrity of the system.
           | 
           | https://aeon.co/essays/your-brain-does-not-process-
           | informati...
        
           | therobots927 wrote:
           | What are you _talking about_? Do you have any actual
           | cognitive neuroscience to back that up? Have they scanned the
           | brain and broken it down into an LLM-analogous network?
        
         | bbor wrote:
         | Well, an entire industry of researchers, which used to be
         | divided, is now uniting around calls to slow development and
         | emphasize safety (like, "dissolve companies" emphasis not
         | "write employee handbooks" emphasis). They're saying, more-or-
         | less in unison, that GPT3 was an unexpected breakthrough in the
         | Frame Problem, based on Judea Pearl's prescient predictions. If
         | we agree on that, there are two options:
         | 
         | 1. They've all been tricked/bribed by Sam Altman and company
         | (though, btw, this is a company started _against_ those
         | specific guys, just for clarity). Including me, of course.
         | 
         | 2. You're not as much of an expert in cognitive science as you
         | think you are, and maybe the scientists know something you
         | don't.
         | 
         | With love. As much love as possible, in a singular era
        
           | majormajor wrote:
           | I would read the existence of this company as evidence that
           | the entire industry is _not_ as united as all that, since
           | Sutskever was recently at another major player in the
           | industry and thought it worth leaving. Whether that's a
           | disagreement between what certain players say and what they
           | do and believe, or just a question of extremes... TBD.
        
           | dnissley wrote:
           | Are they actually united? Or is this the ai safety subfaction
           | circling the wagons due to waning relevance in the face of
           | not-actually-all-that-threatening ai?
        
           | nradov wrote:
           | We don't agree on that. They're just making things up with no
           | real scientific evidence. There are way more than 2 options.
        
         | foolishbard wrote:
         | There's a chance that these systems can actually outperform
         | their training data and be better than the sum of their
         | parts. New work out of Harvard talks about this idea of
         | "transcendence": https://arxiv.org/abs/2406.11741
         | 
         | While this is a new area, it would be naive to write this off
         | as just science fiction.
        
           | majormajor wrote:
           | It would be nice if authors wouldn't use a loaded-as-fuck
           | word like "transcendence" for "the trained model can
           | sometimes achieve better performance than all [chess] players
           | in the dataset" because while certainly that's demonstrating
           | an impressive internalization of the game, it's also
           | something that many humans can do too. The machine, of
           | course, can be scaled in breadth and performance, but...
           | "transcendence"? Are they _trying_ to be mis-interpreted?
        
             | Kerb_ wrote:
             | It transcends the training data; I get the intended
             | usage, but it certainly is ripe for misinterpretation.
        
               | lupire wrote:
               | That's trivial though, conceptually. Every regression
               | line transcends the training data. We've had that since
               | Wisdom of Crowds.
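               | 
               | A toy numpy illustration (made-up data, nothing to do
               | with the paper):
               | 
               |   # On average the fitted line is closer to the true
               |   # values than the noisy points it was fit to, and
               |   # it extrapolates beyond the observed range.
               |   import numpy as np
               | 
               |   rng = np.random.default_rng(0)
               |   x = np.linspace(0, 10, 50)
               |   y_true = 2 * x + 1
               |   y_obs = y_true + rng.normal(0, 3, size=x.size)
               | 
               |   a, b = np.polyfit(x, y_obs, 1)  # fit y = a*x + b
               |   fit_err = np.abs(a * x + b - y_true).mean()
               |   obs_err = np.abs(y_obs - y_true).mean()
               |   print(fit_err < obs_err)  # usually True
               |   print(a * 20 + b)         # prediction at x = 20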
        
               | sb77 wrote:
               | The word for that is "generalizes" or "generalization"
               | and it has existed for a very long time.
        
           | internet_co wrote:
           | "In chess" for AI papers == "in mice" for medical papers.
           | Against lichess levels 1, 2, 5, which use a severely dumbed
           | down Stockfish version.
           | 
           | Of course it is possible that SSI has novel, unpublished
           | ideas.
        
             | ffhhj wrote:
             | Also it's possible that human intelligence already reached
             | the most general degree of intelligence, since we can deal
             | with every concept that could be generated, unless there
             | are concepts that are incompressible and require more
             | memory and processing than our brains could support. In
             | such case being "superintelligent" can be achieved by
             | adding other computational tools. Our pocket calculators
             | make us smarter, but there is no "higher truth" a
             | calculator could let us reach.
        
             | O_OtheGreat wrote:
             | Lichess 5 is better than the vast majority of chess players
        
               | oblio wrote:
               | I think the main point is that from a human intelligence
               | perspective chess is easy mode. Clearly defined, etc.
               | 
               | Think of politics or general social interactions for
               | actual hard mode problems.
        
               | jbay808 wrote:
               | The past decade has seen a huge number of problems widely
               | and confidently believed to be "actual hard mode
               | problems" turn out to be solvable by AI. This makes me
               | skeptical when experts insist that the problems they
               | consider hard today won't turn out to be solvable too.
        
               | nradov wrote:
               | Hard problems are those for which the rules aren't
               | defined, or constantly change, or don't exist at all. And
               | no one can even agree on the goals.
        
         | alecco wrote:
         | Because it's likely that LLMs will soon be able to teach
         | themselves and surpass humans. No consciousness, no will.
         | But somebody will have their power: dark government
         | agencies and questionable billionaires. Who knows what it
         | will enable them to do.
         | 
         | https://en.wikipedia.org/wiki/AlphaGo_Zero
        
           | roywiggins wrote:
           | Likely according to who?
        
             | CyberDildonics wrote:
             | Whoever needs money from investors who don't understand
             | LLMs.
        
               | figers wrote:
               | ha-ha!!!!
        
           | mjr00 wrote:
           | Mind defining "likely" and "soon" here? Like 10% chance in
           | 100 years, or 90% chance in 1 year?
           | 
           | Not sure how a Go engine really applies. Do you consider cars
           | superintelligent because they can move faster than any human?
        
             | TechDebtDevin wrote:
             | I'm with you here, but it should be noted that while the
             | combustion engine has augmented our day to day lives for
             | the better and our society overall, it's actually a great
             | example of a technology that has been used to enable the
             | killing of 100s of millions of people by those exact types
             | of shady institutions and individuals the commenter made
             | reference to. You don't need something "super intelligent"
             | to cause a ton of harm.
        
               | O_OtheGreat wrote:
               | Yes just like the car and electric grid.
        
             | JumpCrisscross wrote:
             | > _Mind defining "likely" and "soon" here? Like 10% chance
             | in 100 years, or 90% chance in 1 year?_
             | 
             | We're just past the Chicago pile days of LLMs [1].
             | Sutskever believes Altman is running a private Manhattan
             | project in OpenAI. I'd say the evidence for LLMs having
             | superintelligence capability is on shakier theoretical
             | ground today than nuclear weapons were in 1942, but I'm no
             | expert.
             | 
             | Sutskever _is_ an expert. He's also conflicted, both in
             | his opposition to OpenAI (reputationally) and his
             | pitching of SSI
             | (financially).
             | 
             | So I'd say there appears to be a disputed but material
             | possibility of LLMs achieving something that, if it doesn't
             | pose a threat to our civilisation _per se_ , does as a
             | novel military element. Given that risk, it makes sense to
             | be cautious. Paradoxically, however, that risk profile
             | calls for strict regulation approaching nationalisation.
             | (Microsoft's not-a-takeover takeover of OpenAI perhaps
             | providing an enterprising lawmaker the path through which
             | to do this.)
             | 
             | [1] https://en.wikipedia.org/wiki/Chicago_Pile-1
        
           | lxgr wrote:
           | What's the connection between LLMs and AlphaGo?
        
         | hintymad wrote:
         | > Not to be too pessimistic here, but why are we talking about
         | things like this
         | 
         | I also think that what we got is merely a very
         | well-compressed knowledge base, and therefore that we are far
         | from superintelligence, so the so-called safety sounds more
         | Orwellian than of any real value. That said, I think we
         | should take the literal meaning of what Ilya says: his goal
         | is to build a superintelligence. Given that goal, lofty as it
         | is, SSI has to put safety in place. So, there: safe
         | superintelligence.
        
           | lxgr wrote:
           | An underappreciated feature of a classical knowledge base is
           | returning "no results" when appropriate. LLMs so far arguably
           | fall short on that metric, and I'm not sure whether that's
           | an inherent limitation.
           | 
           | So out of all potential applications with current-day LLMs,
           | I'm really not sure this is a particularly good one.
           | 
           | Maybe this is fixable if we can train them to cite their
           | sources more consistently, in a way that lets us double check
           | the output?
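           | 
           | Rough sketch of the behaviour I mean, with a toy keyword
           | overlap standing in for a real retriever (the threshold,
           | documents, and scoring are made up):
           | 
           |   # Toy retrieval gate: answer only when some source
           |   # overlaps the query enough, otherwise say so.
           |   KB = {
           |       "doc1": "ssi was founded by ilya sutskever",
           |       "doc2": "openai released gpt-4 in 2023",
           |   }
           | 
           |   def answer(query: str, threshold: float = 0.2):
           |       q = set(query.lower().split())
           |       best_id, best = None, 0.0
           |       for doc_id, text in KB.items():
           |           d = set(text.split())
           |           score = len(q & d) / len(q | d)  # Jaccard
           |           if score > best:
           |               best_id, best = doc_id, score
           |       if best < threshold:
           |           return "no results"  # the honest case
           |       return f"{KB[best_id]} [source: {best_id}]"
           | 
           |   print(answer("who founded ssi"))
           |   print(answer("best pizza in naples"))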
        
         | cjk2 wrote:
         | I'm a miserable cynic at a much higher level. This is top level
         | grifting. And I've made a shit ton of money out of it. That's
         | as far as reality goes.
        
           | therobots927 wrote:
           | lol same. Are you selling yet?
        
             | cjk2 wrote:
             | Mostly holding on still. Apple just bumped the hype a
             | little more and gave it a few more months despite MSFT's
             | inherent ability to shaft everything they touch.
             | 
             | I moved about 50% of my capital back into ETFs though
             | before WWDC in case they dumped a turd on the table.
        
             | nathanasmith wrote:
             | When QQQ and SMH close under the 200 day moving average
             | I'll sell my TQQQ and SOXL respectively. Until then, party
             | on! It's been a wild ride.
        
         | reissbaker wrote:
         | I'm pretty sure "Altman and company" don't have much to do with
         | this -- this is Ilya, who pretty famously tried to get Altman
         | fired, and then himself left OpenAI in the aftermath.
         | 
         | Ilya is a brilliant researcher who's contributed to many
         | foundational parts of deep learning (including the original
         | AlexNet); I would say I'm somewhat pessimistic based on the
         | "safety" focus -- I don't think LLMs are particularly
         | dangerous, nor do they seem likely to be in the near future, so
         | that seems like a distraction -- but I'd be surprised if SSI
         | didn't contribute _something_ meaningful nonetheless given the
         | research pedigree.
        
           | vineyardmike wrote:
           | > I don't think LLMs are particularly dangerous
           | 
           | "Everyone" who works in deep AI tech seems to constantly talk
           | about the dangers. Either they're aggrandizing themselves and
           | their work, or they're playing into sci-fi fear for attention
           | or there is something the rest of us aren't seeing.
           | 
           | I'm personally very skeptical there are any real dangers
           | today. If I'm wrong, I'd love to see evidence. Are foundation
           | models before fine tuning outputting horrific messages about
           | destroying humanity?
           | 
           | To me, the biggest dangers come from a human listening to a
           | hallucination and doing something dangerous, like unsafe food
           | preparation or avoiding medical treatments. This seems
           | distinct from a malicious LLM super intelligence.
        
             | lupire wrote:
             | That's what Safe Super intelligence misses.
             | Superintelligence isn't practically more dangerous. Super
             | stupidity is already here, and bad enough.
        
             | zztop44 wrote:
             | They reduce the marginal cost of producing plausible
             | content to effectively zero. When combined with other
             | societal and technological shifts, that makes them
             | dangerous to a lot of things: healthy public discourse, a
             | sense of shared reality, people's jobs, etc etc
             | 
             | But I agree that it's not at all clear how we get from
             | ChatGPT to the fabled paperclip demon.
        
               | zombiwoof wrote:
               | We are forgetting the visual element
               | 
               | The text alone doesn't do it, but add a generated,
               | nearly perfect "spokesperson" that is uniquely
               | crafted to a person's own ideals and values and that
               | then sends you a video message carrying that
               | marketing.
               | 
               | We will all be brainwashed zombies
        
           | Yoric wrote:
           | I actually feel that they can be very dangerous. Not because
           | of the fabled AGI, but because
           | 
           | 1. they're so good at showing the appearance of being right;
           | 
           | 2. their results are actually quite unpredictable, not always
           | in a funny way;
           | 
           | 3. C-level executives actually believe that they work.
           | 
           | Combine this with web APIs or effectors and this is a recipe
           | for disaster.
        
             | lazide wrote:
             | The 'plausible text generator' element of this is perfect
             | for mass fraud and propaganda.
        
             | jimkleiber wrote:
             | I got into an argument with someone over text yesterday and
             | the person said their argument was true because ChatGPT
             | agreed with them and even sent the ChatGPT output to me.
             | 
             | Just for an example of your danger #1 above. We used to say
             | that the internet always agrees with us, but with Google it
             | was a little harder. ChatGPT can make it so much easier to
             | find agreeing rationalizations.
        
           | zombiwoof wrote:
           | Neither the word "transformer" nor "LLM" appears anywhere
           | in their announcement.
           | 
           | It's like before the end of WWII: the world sees the US as
           | a military superpower, and THEN we unleash the atomic bomb
           | they didn't even know about.
           | 
           | That is Ilya. He has the tech. Sam had the corruption and
           | the do-anything power grab.
        
         | TaylorAlexander wrote:
         | > why are we talking about things like this?
         | 
         | > this is such a transparent attention grab (and, by extension,
         | money grab by being overvalued by investors and shareholders)
         | 
         | Ilya believes transformers can be enough to achieve
         | superintelligence (if inefficiently). He is concerned that
         | companies like OpenAI are going to succeed at doing it without
         | investing in safety, and they're going to unleash a demon in
         | the process.
         | 
         | I don't really believe either of those things. I find arguments
         | that autoregressive approaches lack certain critical features
         | [1] to be compelling. But if there's a bunch of investors
         | caught up in the hype machine ready to dump money on your
         | favorite pet concept, and you have a high visibility position
         | in one of the companies at the front of the hype machine,
         | wouldn't you want to accept that money to work relatively
         | unconstrained on that problem?
         | 
         | My little pet idea is open source machines that take in veggies
         | and rice and beans on one side and spit out hot healthy meals
         | on the other side, as a form of mutual aid to offer payment
         | optional meals in cities, like an automated form of the work
         | the Sikhs do [2]. If someone wanted to pay me loads of money to
         | do so, I'd have a lot to say about how revolutionary it is
         | going to be.
         | 
         | [1] https://www.youtube.com/watch?v=1lHFUR-yD6I
         | 
         | [2] https://www.youtube.com/watch?v=qdoJroKUwu0
         | 
         | EDIT: To be clear I'm not saying it's a fool's errand. Current
         | approaches to AI have economic value of some sort. Even if we
         | don't see AGI any time soon there's money to be made. Ilya
         | clearly knows a lot about how these systems are built. Seems
         | worth going independent to try his own approach and maybe
         | someone can turn a profit off this work even without AGI. Tho
         | this is not without tradeoffs and reasonable people can
         | disagree on the value of additional investment in this space.
        
           | lazide wrote:
           | His paycheck is already dependent on people believing this
           | world view. It's important to not lose sight of that.
        
             | greatpostman wrote:
             | Dude he's probably worth > 1 Billion.
        
           | zombiwoof wrote:
           | Ilya has never said transformers are the end all be all
        
             | TaylorAlexander wrote:
             | Sure but I didn't claim he said that. What I did say is
             | correct. Here's him saying transformers are enough to
             | achieve AGI in a short video clip:
             | https://youtu.be/kW0SLbtGMcg
        
         | flockonus wrote:
         | Likewise, I'm baffled by intelligent people [in such denial]
         | still making the reductionist argument that token prediction
         | is a banal ability. It's not. It's not very different from
         | how our intelligence manifests.
        
           | TaylorAlexander wrote:
           | > It's not very different than how our intelligence manifest.
           | 
           | [citation needed]
        
             | esafak wrote:
             | Search for "intelligence is prediction/compression" and
             | you'll find your citations.
        
         | 01100011 wrote:
         | Too many people are extrapolating the curve to exponential when
         | it could be a sigmoid. Lots of us got too excited and too
         | invested in where "AI" was heading about ten years ago.
         | 
         | But that said, there are plenty of crappy, not-AGI technologies
         | that deserve consideration. LLMs can still make for some very
         | effective troll farms. GenAI can make some very convincing
         | deepfakes. Drone swarms, even without AI, represent a new
         | dimension of capabilities for armies, terrorist groups or lone
         | wolves. Bioengineering is bringing custom organisms, prions or
         | infectious agents within reach of individuals.
         | 
         | I wish someone in our slowly-ceasing-to-function US government
         | was keeping a proper eye on these things.
        
         | coffeemug wrote:
         | AlphaGo took us from mediocre engines to outclassing the best
         | human players in the world within a few short years. Ilya
         | contributed to AlphaGo. What makes you so confident this can't
         | happen with token prediction?
        
           | lupire wrote:
           | If solving chess already created the Singularity, why do we
           | need token prediction?
           | 
           | Why do we need computers that are better than humans at the
           | game of token prediction?
        
         | joantune wrote:
         | > But, all we've achieved at this point is making a glorified
         | token predicting machine trained on existing data (made by
         | humans), not really being able to be creative outside of
         | deriving things humans have already made before. Granted,
         | they're really good at doing that, but not much else.
         | 
         | Remove token, and that's what we humans do.
         | 
         | Like, you need to realize that neural networks came to be
         | because someone had the idea to mimic our brains'
         | functionality and see where that led.
         | 
         | Many skeptics at the beginning, like you, discredited the
         | inventor, but they were proved wrong. LLMs have shown how
         | much more than your limited description they can achieve.
         | 
         | We mimicked birds with airplanes, and we can outdo them. In
         | my view it's actually very short-sighted to say we can't
         | just mimic brains and outdo them. We're there. ChatGPT is
         | the initial little plane that flew close to the ground and
         | barely stayed up.
        
           | lazide wrote:
           | Except it really, actually, isn't.
           | 
           | People don't 'think' the same way, even if some part of how
           | humans think seems to be somewhat similar some of the time.
           | 
           | That is an important distinction.
           | 
           | This is the hype cycle.
        
         | joe_the_user wrote:
         | I actually do doubt that LLMs will create AGI but when these
         | systems are emulating a variety of human behaviors in a way
         | that isn't directly programmed and is good enough to be useful,
         | it seems foolish to not take notice.
         | 
         | The current crop of systems is a product of the transformers
         | architecture - an innovation that accelerated performance
         | significantly. I'd put the odds against another such
         | innovation changing everything, but I don't think we can
         | entirely discount the possibility. That no
         | one understands these systems cuts both ways.
        
         | blueboo wrote:
         | it's intellectual ego catnip.
         | https://idlewords.com/talks/superintelligence.htm
        
         | kevincox wrote:
         | Even if LLM-style token prediction is not going to lead to AGI
         | (as it very likely won't) it is still important to work on
         | safety. If we wait until we have the technology that will
         | for sure lead to AGI, it is very likely that we won't have
         | sufficient safety in place by the time we realize it is
         | important.
        
       | nickdothutton wrote:
       | One thing that strikes me about this time around the AI cycle,
       | being old enough to have seen the late 80s, is how pessimistic
       | and fearful society is as a whole now. Before... the challenge
       | was too great, the investment in capital too draining, the
       | results too marginal when compared to human labour or even "non-
       | AI" computing.
       | 
       | I wonder if someone older still can comment on how "the atom"
       | went from terrible weapon of war to "energy too cheap to meter"
       | to wherever it is now (still a bete noire for the green
       | energy enthusiasts).
       | 
       | Feels like we are over-applying the precautionary principle, the
       | mainstream population seeing potential disaster everywhere.
        
       | RGBCube wrote:
       | The year is 2022. An OpenAI employee concerned about AI safety
       | creates his own company.
       | 
       | The year is 2023. An OpenAI employee concerned about AI safety
       | creates his own company.
       | 
       | The year is 2024.
        
       | ayakang31415 wrote:
       | Ilya is definitely much smarter than me in the AI space, and
       | I believe he knows something that I have no grasp of. But my
       | gut feeling tells me that most of the general public, myself
       | included, will have no idea how dangerous AI could be. I still
       | have yet to see a convincing argument about the potential
       | danger of AI. Arguments along the lines of "we don't know the
       | upper bounds of what AI can do that we humans have missed"
       | don't cut it for me.
        
       | mugivarra69 wrote:
       | ssri would be nicer
        
       | s3graham wrote:
       | > We are assembling a lean, cracked team
       | 
       | What does "cracked" mean in this context? I've never heard that
       | before.
        
       | yellow_postit wrote:
       | "... most important technical problem of our time."
       | 
       | This is the danger of letting the EAs run too far: they miss
       | the forest for the trees but claim they see the planet.
        
       | facu17y wrote:
       | How can they speak of Safety when they are based partly in a
       | colonialist settler entity that is committing a genocide and
       | wanting to exterminate the indigenous population to make room for
       | the Greater Zionist State.
       | 
       | I don't do business with Israeli companies while Israel is
       | engaged in mass Extermination of a human population they treat as
       | dogs.
        
         | jolj wrote:
         | they are pretty bad at exterminating the palestinian
         | population, don't you think?
         | 
         | There are 14 million palestinians worldwide. Continuing at
         | the current pace, without accounting for any natural growth,
         | it will only take Israel 291 years to exterminate the
         | palestinian population. Better hurry and protest before it's
         | too late.
        
       | PostOnce wrote:
       | I would like to be more forgiving than I am, but I struggle to
       | forget abhorrent behavior.
       | 
       | Daniel Gross is the guy who was tricking kids into selling a
       | percent of all their future work for a few-thousand-dollar 'do
       | whatever you want and work on your passion "grant"'. It was
       | called Pioneer and was akin to indentured servitude, i.e.
       | slavery.
       | 
       | So color me skeptical if Mr. Enslave the Kids is going to be
       | involved in anything that's really good for anyone but himself.
        
       | samirillian wrote:
       | trying the same thing over and over again and expecting different
       | results
        
       | genocide_joe wrote:
       | Working on Safe ASI while half of the staff live in a country
       | committing a genocide by UN definition (mass extermination, war
       | crimes, mass killing of children)
       | 
       | VERY BELIEVABLE
        
       | EGreg wrote:
       | I'm going to come out and state the root of the problem.
       | 
       | I can't trust remote AI, any more than I can trust a remote
       | website.
       | 
       | If someone else is running the code, they can switch it up
       | anytime. Imagine trusting someone who simulates everything you
       | need in order to trust them, handing them all your private
       | info, and then getting screwed over in an instant. AI is
       | capable of that far more than biological beings with "costly
       | signals" are.
       | 
       | If it's open source, and I can run it locally, I can verify that
       | it doesn't phone home, and the weights can be audited by others.
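       | 
       | Even something as simple as checksumming the weight files goes
       | a long way. A rough sketch (file names and expected hashes
       | here are made up, not from any real release):
       | 
       |   # Hash local weight files so anyone can compare them
       |   # against independently published checksums.
       |   import hashlib, pathlib
       | 
       |   EXPECTED = {"model-00001.safetensors": "abc123"}
       | 
       |   def sha256(path: pathlib.Path) -> str:
       |       h = hashlib.sha256()
       |       with path.open("rb") as f:
       |           for chunk in iter(lambda: f.read(1 << 20), b""):
       |               h.update(chunk)
       |       return h.hexdigest()
       | 
       |   for name, want in EXPECTED.items():
       |       p = pathlib.Path(name)
       |       got = sha256(p) if p.exists() else "missing"
       |       print(name, "OK" if got == want else "MISMATCH " + got)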
       | 
       | Just like I wouldn't want to spend 8 hours a day in a metaverse
       | owned by Zuck, or an "everything app" owned by Elon, why would I
       | want to give everything over to a third party AI?
       | 
       | I like Ilya. I like Elon. I like Moxie Marlinspike and Pavel
       | Durov. But would I trust their companies to read all my emails,
       | train their data on them, etc.? And hope nothing leaks?
       | 
       | And of course then there is the issue of the AI being trained to
       | do what _they_ want, just like their sites do what _they_ want,
       | which in the case of Twitter  / Facebook is not healthy for
       | society at large, but creates angry echo chambers, people
       | addicted to stupid arguing and videos.
       | 
       | I think there have to be standards for open source AI, and
       | something far better than Langchain (which sucks). This is what I
       | think it should look like: https://engageusers.ai/ecosystem.pdf
       | -- what do you think?
        
       | light_triad wrote:
       | We might need Useful Superintelligence Inc. / USI before SSI?
       | 
       | Safety is an important area in R&D but the killer application is
       | the integration of LLMs into existing workflows to make
       | non-technical users 100x-1000x more efficient. There's a lot of
       | untapped potential there. The big successes will have a lot of
       | impact on safety but it will probably come as a result of the
       | wider adoption of these tools rather than the starting point.
        
         | insane_dreamer wrote:
         | Lots of other companies / people working on that.
         | 
         | No one is really working on safety. So I can see why Ilya is
         | taking on that challenge, and it explains why he left OpenAI.
        
       | kredd wrote:
       | My naive prediction is there will be an extreme swing back into
       | "reality" once everyone starts assuming the whole internet is
       | just LLMs interacting with each other. Just like how there's a
       | shift towards private group chats, with trusted members only,
       | rather than open forums.
        
       | lazzurs wrote:
       | This is lovely and all but seems rather pointless.
       | 
       | If we are so close that something like this is required, then
       | it's already too late, and very likely we are all under the
       | influence of SuperAI and don't know it. So much of the advanced
       | technology we have today was around for so long before it was
       | general knowledge that it's hard to believe this wouldn't be
       | the case with
       | SuperAI.
       | 
       | Or it's not close at all and so back to my original point
       | of...this is pointless.
       | 
       | I do hope I'm entirely wrong.
        
       | zombiwoof wrote:
       | The end is near!
        
       | optimalsolver wrote:
       | Ilya couldn't even align his CEO, but somehow he's going to align
       | an AGI?
        
       | zombiwoof wrote:
       | Some of these responses remind me of when people said "nobody
       | will use their credit card on the internet", or "who needs to
       | carry a phone with them"
        
       ___________________________________________________________________
       (page generated 2024-06-19 23:00 UTC)