[HN Gopher] OpenAI departures: Why can't former employees talk?
       ___________________________________________________________________
        
       OpenAI departures: Why can't former employees talk?
        
       Author : fnbr
       Score  : 1159 points
       Date   : 2024-05-17 18:55 UTC (1 day ago)
        
 (HTM) web link (www.vox.com)
 (TXT) w3m dump (www.vox.com)
        
       | shuckles wrote:
        | I'm not sure how this is legal. My employer certainly could not
        | claw back paid salary or bonuses if I violated a surprise NDA
        | they sprung on me when leaving on good terms. Why can they claw
        | back vested stock compensation?
        
         | orionsbelt wrote:
         | My guess is they agreed to it upfront.
        
           | _delirium wrote:
            | That appears to be the case, although the wording of what
            | they agree to up front is considerably more vague than the
            | agreement they're reportedly asked to sign post-departure.
            | Link to a thread from the author of the Vox article:
            | https://x.com/KelseyTuoc/status/1791584341669396560
        
         | gwern wrote:
          | These aren't real stock; they're "profit participation units,"
          | or PPUs. In addition, the fact that there is an NDA, and an NDA
          | about the NDA, means no one can warn you before you sign your
          | employment papers about the implications of 'PPUs', the
          | tender-offer restriction, and the future NDA. So it's possible
          | that there's some loophole or simple omission somewhere which
          | enables this, which would never work for regular RSUs or stock
          | options, which no one is allowed to warn you about on pain of
          | your PPUs being clawed back, and which you find out about only
          | when you leave (and who would want to leave a rocketship like
          | OA?).
        
       | MBlume wrote:
        | The submission title mentions an NDA, but the article also
        | mentions a non-disparagement agreement. "You can't give away our
        | trade secrets" is one thing, but it sounds like they're being
        | told they can't say anything critical of the company at all.
        
         | reducesuffering wrote:
         | They can't even mention the NDA exists!
        
           | danielmarkbruce wrote:
           | This is common, and there is nothing wrong with it.
        
             | Chinjut wrote:
             | There is absolutely something wrong with it. Just because a
             | thing is common doesn't make it good.
        
               | danielmarkbruce wrote:
                | Two people entering an agreement not to talk about
                | something is fine. You and I should (and can, with very
                | few restrictions) be able to agree that I'll do x, and
                | you'll do y, and we are going to keep the matter
                | private. Anyone who wants to take away this ability for
                | two people to do such a thing needs to take a long hard
                | look at themselves, and maybe move to North Korea.
        
       | rich_sasha wrote:
       | So what's open about it these days?
        
       | asperous wrote:
        | Not a lawyer, but those contracts aren't legal. You need
        | something called "consideration," i.e., something new of value,
        | for a contract to be binding. They can't just take away
        | something of value that was already agreed upon.
        | 
        | However, they could add this to new employee contracts.
        
         | koolba wrote:
          | Throw in a preamble of "_For $1 and other consideration..._"
        
         | ethbr1 wrote:
         | "Legal" seems like a fuzzy line to OpenAI's leadership.
         | 
         | Pushing unenforceable scare-copy to get employees to self-
         | censor sounds on-brand.
        
           | tptacek wrote:
           | I agree with Piper's point that these contracts aren't common
           | in tech, but they're hardly unheard of. In 20 years of
           | consulting work I've seen dozens of them. They're not
            | _uncommon_. This doesn't look uniquely hostile or amoral for
           | OpenAI, just garden-variety.
        
             | lupire wrote:
             | as an _exit_ contract? Not part of a severance agreement?
             | 
              | Bloomberg famously used this as an employment contract,
              | and it was a campaign scandal for Mike.
        
             | a_wild_dandan wrote:
              | Well, an AI charity -- so founded on openness that they're
              | called OpenAI -- took millions in donations and everyone's
              | copyrighted data... only to become effectively for-profit,
              | close down their AI, and inflict a lifetime gag on their
              | employees. In that context, it feels rather amoral.
        
               | tptacek wrote:
               | This to me is like the "don't be evil" thing. I didn't
               | take it seriously to begin with, I don't think reasonable
               | people should have taken it seriously, and so it's not
               | persuasive or really all that interesting to argue about.
               | 
               | People are different! You can think otherwise.
        
               | int_19h wrote:
                | I think we do need to start taking such things
                | seriously, and start holding companies accountable using
                | all available avenues (legal, and legislative if
                | existing laws don't have enough leverage) when they act
                | contrary to their publicly stated commitments.
        
               | thumrusn72 wrote:
                | Therein lies the issue. The second you throw idealistic
                | terms like "don't be evil" and __OPEN__ AI around, you
                | should be expected to deliver.
                | 
                | But how is that even possible when corporations are
                | typically run by ghouls who embrace relativistic morals
                | when it suits them, and are beholden to profits, not
                | ethics?
        
             | comp_throw7 wrote:
             | Contracts like this seem extremely unusual as a condition
             | for _retaining already vested equity (or equity-like
             | instruments)_, rather than as a condition for receiving
             | additional severance. And how common are non-disclosure
             | clauses that cover the non-disparagement clauses?
             | 
              | In fact, both of those seem quite bad, both by regular
              | industry standards and even more so as applied to OpenAI's
              | specific situation.
        
           | dylan604 wrote:
            | This sounds just like the non-compete clauses that the FTC
            | just banned. If the current FTC leadership is allowed to
            | continue working after 2025/01/20, these agreements might be
            | moved against as well. If a new administration is brought
            | in, they might all get reversed. Just something to consider
            | going into your particular polling place.
        
         | blackeyeblitzar wrote:
          | It doesn't matter if they are not legal. Employees do not have
          | the resources to fight expensive legal battles, and they fear
          | retaliation in other ways, like not being able to find future
          | jobs. And anyone with a family plain won't have the time.
        
         | singleshot_ wrote:
         | They give you a general release of liability, as noted
         | elsewhere in the thread.
        
         | lxgr wrote:
         | "You get shares in our company in exchange for employment and
         | eternal never-talking-bad-about-us"?
         | 
         | Doesn't mean that that's legal, of course, but I'd doubt that
         | the legality would hinge on a lack of consideration.
        
           | hannasanarion wrote:
           | You can't add a contingency to a payment retroactively. It
           | sounds like these are exit agreements, not employment
           | agreements.
           | 
           | If it was "we'll give you shares/cash if you don't say
           | anything bad about us", that's normal, kind of standard fare
           | for exit agreements, it's why severance packages exist.
           | 
           | But if it is "we'll take away the shares that you already
           | earned as part of your regular employment compensation unless
           | you agree to not say anything bad about us", that's
           | extortion.
        
         | danielmarkbruce wrote:
         | Have you seen the contracts?
        
       | autonomousErwin wrote:
        | Is it criticism if a claim is true? There is so much legal
        | jargon that I'm willing to bet most people won't want the
        | headache (and those who don't care about equity are likely
        | already fairly wealthy).
        
         | apsec112 wrote:
         | Non-disparagement clauses forbid all negative statements,
         | whether true or not.
         | 
         | https://www.clarkhill.com/news-events/news/the-importance-of...
        
         | cma wrote:
          | Yes, if it isn't true it is libel or slander (sometimes
          | depending on intent), not just criticism, and it is already
          | impermissible even without any contract covering it.
        
       | 0cf8612b2e1e wrote:
       | Why have other companies not done the same? This seems legally
       | tenuous to only now be attempted. Will we see burger flippers
       | prevented from discussing the rat infestation at their previous
       | workplace?
       | 
        | (Don't have X) - is there a timeline? Can I curse out the
        | company on my deathbed, or would their lawyers have the legal
        | right to try and claw back the equity from the estate?
        
         | apsec112 wrote:
         | The Vox article says that it's a lifetime agreement:
         | 
         | https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
        
           | romanovcode wrote:
           | ROFL how is this even legal?
        
         | exe34 wrote:
          | I worked at McDonald's in the mid-to-late '00s, and I'm pretty
          | sure there was a clause about never saying anything negative
          | about them. I think they were a great employer!
        
           | wongarsu wrote:
           | Sorry, someone at corporate has interpreted this statement as
           | criticism. Please give back all equity, or an amount
           | equivalent to its current value.
        
             | hehdhdjehehegwv wrote:
              | Also, whatever fries are left in the bottom of the bag.
              | That's corporate property, buddy.
        
             | ryandrake wrote:
             | It doesn't have to be equity. If they wanted to, they could
             | put in their employment contract "If you say anything bad
             | about McDonalds, you owe us $1000." What is the ex-burger-
             | flipper going to do? Fight them in court?
        
             | dylan604 wrote:
             | Like a fast food employee would have equity in the company.
             | Please, let's at least be sensible in our internet ranting.
        
               | jen20 wrote:
               | What about a franchisee?
        
             | exe34 wrote:
              | I got f-all equity; I was flipping burgers for minimum
              | wage.
        
         | johnnyanmac wrote:
         | For the burger metaphor, you need to have leverage over the
         | employee to make them not speak. No one at Burger King is
         | getting severance when they are kicked out, let alone equity.
         | 
         | As for other companies that can pay: I can only assume that the
         | cost to bribe skilled workers isn't worth the perceived risk
         | and cost of lawsuits from the downfall (which they may or may
         | not be able to settle). Generative AI is still very young and
          | under a lot of scrutiny on all fronts, so the risk of a
          | whistleblower at this stage may shape the entire future of the
          | industry at large.
        
         | dylan604 wrote:
         | Other companies _have_ done the same. I worked at a company
          | that is 0% related to the tech industry. I was laid off/let
          | go/dismissed/sacked, and they offered me a "severance" on the
          | condition I sign a release with a non-disparagement clause. I
         | didn't give enough shits about the company to waste my
         | time/energy commenting about them. It was just an entry on a
         | resume where I happened to work with some really neat,
         | talented, and cool/interesting coworkers. I had the luxury of
         | nobody else giving a damn about how/why I left. I can only
         | imagine these people getting hounded by Real Housewives level
         | gossip/bullshit.
        
       | a_wild_dandan wrote:
       | Is this a legally enforceable suppression of free speech? If so,
       | are there ways to be open about OpenAI, without triggering
       | punitive action?
        
         | antiframe wrote:
         | OpenAI is not the government. Yet.
        
           | a_wild_dandan wrote:
           | What do I do with this information?
        
             | jaredklewis wrote:
              | Your original comment uses the term "free speech," which,
              | in the context of a discussion about the legality of a
              | contract in the US, brings to mind the First Amendment.
              | 
              | But the First Amendment basically only restricts the
              | government's ability to suppress speech, not the ability
              | of other parties (like OpenAI).
              | 
              | This restriction may be illegal, but not on First
              | Amendment ("free speech") grounds.
        
             | mynegation wrote:
              | antiframe is saying that the free speech guarantee in the
              | Constitution only applies to the relationship between the
              | government and its citizens, not between private entities.
        
             | solardev wrote:
             | In the US, the Constitution prevents the government from
             | regulating your speech.
             | 
             | It does not prevent you from entering into contracts with
             | other private entities, like your company, about what THEY
             | allow you to say or not. In this case there might be other
             | laws about whether a company can unilaterally force that on
             | you after the fact, but that's not a free speech
             | consideration, just a contract dispute.
             | 
             | See https://www.themuse.com/advice/non-disparagement-
             | clause-agre...
        
             | TaylorAlexander wrote:
             | I think we need to face the fact that these companies
             | aren't trustworthy in upholding their own stated morals. We
             | need to consider whether streaming video from our phone to
             | a complex AI system that can interpret everything it sees
             | might have longer term privacy implications. When you think
             | about it, a cloud AI system is an incredible surveillance
             | machine. You want to talk to it about important questions
             | in your life, and it would also be capable of dragnet
             | surveillance based on complex concepts like "show me all
             | the people organizing protests" etc.
             | 
             | Consider for example that when Amazon bought the Ring
             | security camera system, it had a "god mode" that allowed
             | executives and a team in Ukraine unlimited access to all
              | camera data. It wasn't just a consumer product for home
              | users; it was a mass surveillance product for the business
              | owners:
             | 
             | https://theintercept.com/2019/01/10/amazon-ring-security-
             | cam...
             | 
             | The EFF has more information on other privacy issues with
             | that system:
             | 
             | https://www.eff.org/deeplinks/2019/08/amazons-ring-
             | perfect-s...
             | 
             | These big companies and their executives want power.
             | Withholding huge financial gain from ex employees to
             | maintain their silence is one way of retaining that power.
        
           | impossiblefork wrote:
           | Free speech is a much more general notion than anything
           | having to do with governments.
           | 
            | The First Amendment is a US free speech protection, but it's
            | not prototypical.
           | 
            | You can also find this in some other free speech
            | protections. For example, the one in the UDHR,
            | 
            | > Everyone has the right to freedom of opinion and
            | expression; this right includes freedom to hold opinions
            | without interference and to seek, receive and impart
            | information and ideas through any media and regardless of
            | frontiers.
            | 
            | doesn't refer to states at all.
        
             | lupire wrote:
              | The UDHR is not law, so it's irrelevant to a question of
              | law.
        
               | impossiblefork wrote:
                | Originally, the comment to which that comment responded
                | said something about free speech rather than anything
                | about legality. It was in that context that I responded,
                | so the comment to which I responded must have also been
                | written in that context.
        
             | kfrzcode wrote:
             | Free speech is a God-given right. It is innate and given to
             | you and everyone at birth, after which it can only be
             | suppressed but never revoked.
        
               | CamperBob2 wrote:
               | Good luck serving God with a subpoena when you have to
               | defend yourself in court. He's _really_ good at dodging
               | process servers.
        
               | hollerith wrote:
               | I know it is popular, but I distrust "natural rights"
               | rhetoric like this.
        
               | smabie wrote:
               | Did God tell you this? People who talk about innate
               | rights are just making things up
        
           | janalsncm wrote:
           | A lot of people forget that although 1A means the government
           | can't put you in prison for things, there are a lot of pretty
           | unpleasant consequences from private entities. As far as I
           | know, it wouldn't be illegal for a dentist to deny care to
           | someone who criticized them, for example.
        
             | Marsymars wrote:
             | Right, and that's why larger companies need regulation
             | around those consequences. If a dentist doesn't want to
             | treat you because you criticized them, that's fine, but if
             | State Farm doesn't want to insure your dentistry because
             | you criticized them, regulators shouldn't allow that.
        
           | zeroonetwothree wrote:
           | If the courts enforce the agreement then that is state
           | action.
           | 
           | So I think an argument can be made that NDAs and similar
           | agreements should not be enforceable by courts.
           | 
           | See Shelley v. Kraemer
        
         | exe34 wrote:
          | You could praise them for the opposite of what you mean to
          | say, and include a copy of the clause between each paragraph.
        
           | lucubratory wrote:
           | Acknowledging the NDA or any part of it is in violation of
           | the NDA.
        
             | exe34 wrote:
              | There is no NDA in Ba Sing Se!
        
           | istjohn wrote:
           | OpenAI never acted with total disregard for safety. They
           | never punished employees for raising legitimate concerns.
           | They never reneged on public promises to devote resources to
           | AI safety. They never made me sign any agreements restricting
           | what I can say. One plus one is three.
        
         | a_wild_dandan wrote:
         | Also, will Ilya likely have similar contractual bounds, despite
         | the unique role he had at OpenAI? (Sorry for the self-reply.
         | Felt more appropriate than an edit.)
        
           | to11mtm wrote:
           | The unique role may in fact lead to ADDITIONAL contractual
           | bounds.
           | 
            | High-level employees (especially if they were board/exec
            | level) will often have additional obligations on top of
            | those for the rank and file.
        
         | YurgenJurgensen wrote:
         | I believe a better solution to this would be to spread the
          | following sentiment: "Since it's already illegal to tell
          | disparaging lies, the mere existence of such a clause implies
          | some disparaging truths of which the party is aware." Always
          | assuming the worst about hidden information provides a strong
          | incentive to be transparent.
        
           | lupire wrote:
           | Humans respond better to concrete details than abstractions.
           | 
           | It's a lot of mental work to rally the emotion of revulsion
           | over the evil they might be doing that is kept secret.
        
             | hi-v-rocknroll wrote:
             | This is true.
             | 
              | I was once fired, ghosted style, for merely being in the
              | same meeting room as a racist corporate ass-clown who
              | muted the conference call to make Asian slights and monkey
              | gesticulations. There was no lawsuit or payday because
              | "how would I ever work again?" was the Hobson's choice
              | between letting it go and mounting a moral crusade without
              | a way to pay rent.
              | 
              | If instead I were upset that "not enough N are in tech,"
              | there wouldn't be a specific incident or person to blame,
              | because it'd be a multifaceted situation.
        
           | berniedurfee wrote:
           | That's a really good point. A variation of the Streisand
           | Effect.
           | 
           | Makes you wonder what misdeeds they're trying so hard to
           | hide.
        
           | jiggawatts wrote:
           | This is an important mode of thinking in many adversarial or
           | competitive contexts.
           | 
           | Cryptography is a prime example. Any time any company is the
           | tiniest bit cagey or obfuscates any aspect, I default to
           | assuming that they're either selling snake oil or have
           | installed NSA back doors. I'll claim this openly, as a fact,
           | _until proven otherwise_.
        
           | d0mine wrote:
          | I hope the prohibition on telling the truth is about something
          | banal, like "fake it until you make it" in some of OpenAI's
          | demos. The technology looks like magic but is plausible to
          | implement in a few months/years.
          | 
          | It would be worse if it were related to training a future
          | superintelligence to kill people. Killer drones are possible
          | even with today's technology, without AGI.
        
         | Hnrobert42 wrote:
          | Well, the speech isn't "free": it costs the equity grant.
        
         | hi-v-rocknroll wrote:
          | Hush money payments and NDAs aren't illegal, as Trump
          | discovered, but perhaps lying about or concealing them in
          | certain contexts is.
         | 
         | Also, when secrets or truthful disparaging information is
         | leaked anonymously without a metadata trail, I'm thinking
         | there's probably little or no recourse.
        
         | to11mtm wrote:
          | Well, for starters, everyone can start memes...
         | 
         | After all, at this point, OpenAI:
         | 
         | - Is not open with models
         | 
         | - Is not open with plans
         | 
         | - Does not let former employees be open.
         | 
         | It sure does give us a glimpse into the Future of how Open AI
         | will be!
        
           | stoperaticless wrote:
            | So they are kind of open about their strategy... (at a high
            | level, at least)
        
       | OldMatey wrote:
        | Well, that's not worrying. /s
       | 
       | I am curious how long it will take for Sam to go from being
       | perceived as a hero to a villain and then on to supervillain.
       | 
        | Even if they had a massive, successful, and public safety team,
        | and got alignment right (which I am highly doubtful is
        | possible), it is still going to happen as massive portions of
        | white-collar workers lose their jobs.
        | 
        | Mass protests are coming, and he will be an obvious focal point
        | for their ire.
        
         | throwup238 wrote:
         | _> I am curious how long it will take for Sam to go from being
         | perceived as a hero to a villain and then on to supervillain._
         | 
          | He's already perceived by some as a bit of a scoundrel, if not
          | yet a villain, because of Worldcoin. I bet he'll hit
          | supervillain status right around the time that ChatGPT
          | BattleBots storm Europe.
        
           | gremlions wrote:
           | Plus what he (allegedly) did to his sister when she was a
           | child: https://news.ycombinator.com/item?id=37785072
        
         | wavesounds wrote:
         | Their head of alignment just resigned
         | https://news.ycombinator.com/item?id=40391299
        
         | rvz wrote:
         | > I am curious how long it will take for Sam to go from being
         | perceived as a hero to a villain and then on to supervillain.
         | 
          | He probably already knows that, but doesn't care as long as
          | OpenAI has captured the world's attention with ChatGPT,
          | generating them billions and feeding their keen interest in
          | destroying Google.
         | 
         | > Mass protests are coming and he will be an obvious focus
         | point for their ire.
         | 
         | This is going to age well.
         | 
          | Given that no one knows the definition of AGI, AGI can mean
          | anything, even if it means 'steamrolling' any startup, job,
          | etc. in OpenAI's path.
        
         | shawn_w wrote:
        | When he was fired, there was a short window where the prevailing
        | reaction here was "He must have done something /really/ bad."
        | Then opinion changed to "Sam walks on water and the board are
        | the bad guys." Maybe that line of thinking was a mistake.
        
         | maxerickson wrote:
         | If they actually invent a disruptive god, society should just
         | take it away.
         | 
        | No need to fret over the harm to future innovation when
        | innovation is an industrial product.
        
       | rvz wrote:
       | So that explains the cult-like behaviour months ago when the
       | company was under siege.
       | 
        | Diamond multi-million-dollar handcuffs, with which OpenAI binds
        | employees to lifetime secret-service-level NDAs; yet another
        | unusual arrangement for a company with a so-called "non-profit"
        | founding and a contradictory name.
        | 
        | Even an ex-employee saying 'ClosedAI' could see their PPUs
        | evaporate to zero in front of them, or they could _never_ be
        | allowed to sell them and have them taken away.
        
         | timmg wrote:
         | I don't have any idea what goes on inside OAI. But I have this
         | strange feeling that they were right to oust sama. They didn't
         | have the leverage to pull it off, though.
        
       | jp57 wrote:
        | The only way I can see this being a valid contract is if the
        | equity grant that they get to keep is a _new_ grant offered at
        | the time of signing the exit contract. Any vested equity given
        | as compensation for work could not then be offered again as
        | consideration for signing a new agreement.
       | 
       | Maybe the agreement is "we will accelerate vesting of your
       | unvested equity if you sign this new agreement"? If that's the
       | case then it doesn't sound nearly so coercive to me.
        
         | apsec112 wrote:
         | It's not. The earlier tweets explain: the initial agreement
         | says the employee must sign a "general release" or forfeit the
         | equity, and then the general release they are asked to sign
         | includes a lifetime no-criticism clause.
        
           | ethbr1 wrote:
           | IOW, this is burying the illegal part in a tangential
           | document, in hopes of avoiding legal scrutiny and/or
           | judgement.
           | 
           | They're really lending employees equity, subject to the
           | company's later feelings as to whether the employee should be
           | allowed to keep or sell it.
        
           | w10-1 wrote:
           | But a general release is not a non-criticism clause.
           | 
           | They're not required to sign anything other than a general
           | release of liability when they leave to preserve their
           | rights. They don't have to sign a non-disparagement clause.
           | 
           | But they'd need a very good lawyer to be confident at that
           | time.
        
             | User23 wrote:
             | And they won't have that equity available to borrow against
             | to pay for that lawyer either.
        
           | Melatonic wrote:
            | I'm no lawyer, but this sounds like something that would not
            | go well for OpenAI if strongly litigated.
        
             | mrj wrote:
             | Yeah, courts have generally found that this is "under
             | duress" and not enforceable.
        
               | singleshot_ wrote:
               | Under duress in the contractual world is generally
               | interpreted as "you are about to be killed or maimed."
               | Economic duress is distinct.
        
               | to11mtm wrote:
               | Duress can take other forms, unless we are really trying
               | to differentiate general 'coercion' here.
               | 
                | Perhaps as an example of the blurred line: prenup
                | agreements sprung the day of the wedding will not hold
                | up in a US court with a competent lawyer challenging
                | them.
                | 
                | You can try to call it 'economic' duress, but any non-
                | sociopath sees there are other factors at play.
        
               | singleshot_ wrote:
                | That's a really good point. Was this a prenuptial
                | agreement? If it wasn't, my take is that Section 174
                | would apply, and we would be talking about physical
                | compulsion -- and not "it's a preferable economic
                | situation to sign."
                | 
                | Not a sociopath, just know the law.
        
             | fuzztester wrote:
             | >I'm no lawyer
             | 
             | Have any (startup or other) lawyers chimed in here?
        
           | Animats wrote:
           | That's when you need a lawyer.
           | 
           | In general, an agreement to agree is not an agreement. A
           | requirement for a "general release" to be signed at some time
           | in the future is iffy. And that's before labor law issues.
           | 
           | Someone with a copy of that contract should run it through
           | OpenAI's contract analyzer.
        
           | beastman82 wrote:
            | ITT: a bunch of laymen thinking their 2-second proposal will
            | out-lawyer the team of lawyers who drafted these.
        
             | throwaway562if1 wrote:
              | You haven't worked with many contracts, have you?
              | Unenforceable clauses are the norm; most people are
              | willing to follow them rather than risk having to fight
              | them in court.
        
               | to11mtm wrote:
               | Bingo.
               | 
               | I have seen a lot of companies put unenforceable stuff
               | into their employment agreements, separation agreements,
               | etc.
        
             | jprete wrote:
             | Lawyers are 100% capable of knowingly crafting
             | unenforceable agreements.
        
               | riwsky wrote:
               | You don't need to out-litigate the bear,
        
             | mminer237 wrote:
             | I am a lawyer. This is not just a general release, and I
             | have no idea how OpenAI's lawyers expect this to be legal.
        
               | listenallyall wrote:
                | Have you read the actual documents or contracts? Opining
                | on stuff you haven't actually read seems premature. Read
                | the contract, then tell us which clause violates which
                | statute; that's useful.
        
               | ethbr1 wrote:
               | Out of curiosity, what are the penalties for putting
               | unenforceable stuff in an employment contract?
               | 
               | Are there any?
        
               | sangnoir wrote:
               | Typically there is no penalty - and contracts explicitly
               | declare that all clauses are severable so that the rest
               | of the contract remains valid even if one of the scare-
               | clauses is found to be invalid. IANAL
        
           | bradleyjg wrote:
           | _The earlier tweets explain ..._
           | 
           | What a horrific medium of communication. Why anyone uses it
           | is beyond me.
        
           | DesiLurker wrote:
            | Somebody explained to me early on that you cannot have a
            | contract to have a contract. Either the initial agreement
            | must state this condition clearly, or they are signing
            | another contract at employment termination which brings in
            | these new terms. IDK why anyone would sign that at
            | termination unless they dangle additional equity. I don't
            | think this BS they are trying to pull would be enforceable,
            | at least in California, though IANAL obviously.
            | 
            | All this said, in the bigger picture I can understand not
            | divulging trade secrets, but not being allowed to discuss
            | company culture around AI safety essentially tells me that
            | all the sama talk about 'for the good of humanity' is total
            | BS. At the end of the day, it's about market share and the
            | bottom line.
        
             | hughesjj wrote:
              | Canceling my OpenAI subscription as we speak; this is too
              | much. I don't care how good it is relative to other
              | offerings. Not worth it.
        
               | lanstin wrote:
                | Claude is better anyways (at least for math classes).
        
               | DesiLurker wrote:
                | Same, I cancelled mine months ago. Claude is much better
                | for coding anyway.
        
         | DebtDeflation wrote:
          | My initial reaction was "Hold up - your RSUs vest, you sell
          | the shares and pocket the cash, you quit OpenAI, a few years
          | later you disparage them, and then what? They somehow try to
          | claw back the equity? How? At what value? There's no way this
          | can work." Then I remembered that OpenAI "equity" doesn't take
          | the form of an RSU or option or anything else that can ever be
          | converted into an actual share. What they call "equity" is a
          | "Profit Participation Unit" (PPU) that, once vested, entitles
          | you to a share of their profits. They don't share the
          | equivalent of a Cap Table with employees, so there's no way to
          | tell what sort of ownership interest a PPU represents. And of
          | course, it's unlikely OpenAI will ever turn a profit (which if
          | they did would be capped anyway). So this is all just play
          | money anyway.
        
           | cdchn wrote:
            | Wow. Smart for them. Former employees are beholden to the
            | company in actual perpetuity. Sounds like a raw deal, but
            | when the potential gains are that big, I guess you'll agree
            | to pretty much anything.
        
           | whimsicalism wrote:
            | This is wrong on multiple levels. (To be clear, I don't work
            | at OAI.)
            | 
            | > They don't share the equivalent of a Cap Table with
            | employees, so there's no way to tell what sort of ownership
            | interest a PPU represents
            | 
            | It is known: it represents a 0% ownership share. They do not
            | want to sell any ownership because their deal with MS gives
            | MS 49% ownership, and they don't want MS to be able to buy
            | up an additional stake and control the company.
           | 
           | > And of course, it's unlikely OpenAI will ever turn a profit
           | (which if they did would be capped anyway). So this is all
           | just play money anyway.
           | 
            | Putting aside your unreasonable confidence that OAI will
            | never be profitable, the PPUs are tender-offered, so they
            | can be sold to institutional investors up to a very high
            | limit. OAI's current tender offer round values them at
            | ~$80B, iirc.
        
             | almost_usual wrote:
             | > Note at offer time candidates do not know how many PPUs
             | they will be receiving or how many exist in total. This is
             | important because it's not clear to candidates if they are
             | receiving 1% or 0.001% of profits for instance. Even when
             | giving options, some startups are often unclear or simply
             | do not share the total number of outstanding shares. That
             | said, this is generally considered bad practice and
             | unfavorable for employees. Additionally, tender offers are
             | not guaranteed to happen and the cadence may also not be
             | known.
             | 
             | > PPUs also are restricted by a 2-year lock, meaning that
             | if there's a liquidation event, a new hire can't sell their
             | units within their first 2 years. Another key difference is
             | that the growth is currently capped at 10x. Similar to
             | their overall company structure, the PPUs are capped at a
             | growth of 10 times the original value. So in the offer
             | example above, the candidate received $2M worth of PPUs,
             | which means that their capped amount they could sell them
             | for would be $20M
             | 
             | > The most recent liquidation event we're aware of happened
             | during a tender offer earlier this year. It was during this
             | event that some early employees were able to sell their
             | profit participation units. It's difficult to know how
             | often these events happen and who is allowed to sell,
             | though, as it's on company discretion.
             | 
              | This NDA wrinkle is another negative. Honestly, I think
              | the entire OpenAI compensation model is smoke and mirrors,
              | which is normal for startups, and obviously inferior to
              | RSUs.
             | 
             | https://www.levels.fyi/blog/openai-compensation.html
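              | 
              | A minimal sketch of the 10x-cap arithmetic quoted above,
              | in Python (the $2M grant is the blog post's hypothetical
              | example, not a real offer):
              | 
              |     # Hypothetical numbers from the levels.fyi example.
              |     grant_value = 2_000_000   # offer-time PPU valuation
              |     cap_multiple = 10         # growth capped at 10x
              |     max_proceeds = grant_value * cap_multiple
              |     print(max_proceeds)       # 20000000, i.e. the $20M cap
              | 
              | Per the same post, whether and when you can actually sell
              | below that cap is at the company's discretion.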
        
               | whimsicalism wrote:
                | > Additionally, tender offers are not guaranteed to
                | happen and the cadence may also not be known.
                | 
                | > PPUs also are restricted by a 2-year lock, meaning
                | that if there's a liquidation event, a new hire can't
                | sell their units within their first 2 years.
                | 
                | I know for a fact that these bits are inaccurate, but I
                | don't want to go into the details.
                | 
                | The profit share is not known, but you are told what the
                | PPUs were valued at in the most recent tender offer.
        
             | DebtDeflation wrote:
             | You're not saying anything that in any way contradicts my
              | original post. Here, I'll simplify it: OpenAI's PPUs are
              | not in any sense of the word "equity" in OpenAI; they are
              | simply a subordinated claim to an unknown % of a
              | hypothetical future profit.
        
               | whimsicalism wrote:
               | > there's no way to tell what sort of ownership interest
               | a PPU represents
               | 
                | Wrong. We know: it is 0, which directly contradicts
                | your claim.
               | 
               | > this is all just play money anyway.
               | 
                | Again, wrong: because it is sellable, employees can
                | take home millions. Play money in the startup world
                | means illiquid options that can't be tender-offered.
               | 
                | You're making it sound like this is a terrible deal for
                | employees, but I personally know people who were able
                | to sell $1M+ in OAI PPUs to institutional investors as
                | part of the tender offer.
        
           | ec109685 wrote:
            | Their profit is capped at $1T, which is an amount no company
            | has ever achieved.
        
             | arthurcolle wrote:
             | No company? Are you sure? Aramco?
        
               | saalweachter wrote:
               | Apple has spent $650 billion on stock buybacks in the
               | last decade.
               | 
                | Granted, that might be most of the profit they have
                | made, but still, they're probably at at least $0.7T so
                | far. I bet they'll break $1T eventually.
        
               | oblio wrote:
                | Based on this, they've had $1T in profits since 2009:
                | https://companiesmarketcap.com/apple/earnings/
        
       | toomuchtodo wrote:
       | I would strongly encourage anyone faced with this ask by OpenAI
       | to file a complaint with the NLRB as well as speak with an
       | employment attorney familiar with California statute.
        
         | worik wrote:
         | > I would strongly encourage anyone faced with this ask by
         | OpenAI to file a complaint with the NLRB as well as speak with
         | an employment attorney familiar with California statute.
         | 
          | Very, very bad advice.
          | 
          | Unless you have the backing of some very big money _first_, do
          | not try to fight evil of this kind and size.
          | 
          | Suck it up, take the money, look after yourself and your
          | family.
          | 
          | Fighting people like these is a recipe for misery.
        
           | hehdhdjehehegwv wrote:
           | Talking to a lawyer is **never** bad advice.
           | 
            | Especially in CA, where companies will make you THINK they
            | have power that they don't.
        
             | reaperman wrote:
              | I'd be more afraid of their less-than-above-board power
              | than their litigation power. People with $10-100 billion
              | who are highly connected to every major tech company, and
              | to many shadowy companies we've never heard of, can figure
              | out a lot of my secrets and make life miserable enough for
              | me that I don't have the ability/energy to follow through
              | with legal proceedings, even if I don't attribute the
              | walls collapsing around me to my legal opponent.
        
               | hehdhdjehehegwv wrote:
               | And that's precisely the issue you ask a lawyer about.
        
               | reaperman wrote:
               | What could a lawyer possibly do about something that
               | isn't traceable? Other than warn me it's a possibility?
        
             | listenallyall wrote:
              | I think _never_ is inaccurate here. First, there are a lot
              | of simply bad lawyers who will give you bad advice.
              | Second, there are a lot of lawyers who either don't
              | actually specialize in the legal field your case demands,
              | or who have never actually tried any cases and have no
              | idea how something might go down in a court with a jury.
              | Third (the most predatory), a lot of lawyers actually see
              | the client (not the opposing party) as the money fountain,
              | charging huge fees for their "consultation," "legal
              | research," "team of experts," etc., and soon the client is
              | tens of thousands in the hole without even an actual case
              | being filed.
             | 
             | Talking to good, honest lawyers is a good idea.
             | Unfortunately most people don't have access to good honest
             | lawyers, or don't know how to distinguish them from crooks
             | with law degrees.
        
           | toomuchtodo wrote:
           | > Over the last 10 years or so, I have filed a number of
           | high-profile unfair labor practice charges against coercive
           | statements, with many of those statements being made on
           | Twitter. I file those charges even though I am merely a
           | bystander, not an employee or an aggrieved party.
           | 
           | > Every time I do this, some individuals ask how I am able to
           | file charges when I don't have "standing" because I am not
           | the one who is being injured by the coercive statements.
           | 
           | > The short answer is that the National Labor Relations Act
           | (NLRA) has no standing requirement.
           | 
           | > Employees reasonably fear retaliation from their boss if
           | they file charges. So we want to make it possible for people
           | who cannot be retaliated against to do it instead. [1]
           | 
            | I believe the Vox piece shared in this thread [2] is enough
            | for anyone to hit submit on an NLRB web form and get the
            | ball rolling. Snapshot it in the Wayback Machine (with all
            | the in-scope tweets archived on archive.today|is|ph), just
            | in case.
           | 
           | [1] https://mattbruenig.com/2024/01/26/why-there-is-no-
           | standing-...
           | 
           | [2] https://news.ycombinator.com/item?id=40394955
        
           | kfrzcode wrote:
            | Alternative take: get Elon's attention on X, spin it as
            | employer-enforced censorship, and get his legal team to take
            | on the battle.
        
           | johnnyanmac wrote:
            | I imagine the kinds of engineers under such gag orders do in
            | fact either have "big money" or aren't worried about making
            | big money in the future. And this isn't their fight; it'll
            | be the government's. At worst, you may stand as a witness
            | some months/years later to testify.
            | 
            | I'd only be worried about reporting if you fear for your
            | life for refusing, a sadly poignant consideration given
            | Boeing as of late.
        
           | xyst wrote:
           | The FUD is strong with this one
        
           | RaoulP wrote:
           | I don't see why this comment needed a flag or so many
           | uncharitable replies (though you could have expressed
           | yourself more charitably too).
           | 
           | I understand your sentiment, but I think a lot of idealistic
           | people will disagree - it's nice to think that a person
           | should stand up for justice, no matter what.
           | 
            | In reality, I wonder how many people attempt to do this and
            | end up regretting it, because of what you mentioned.
        
           | saiojd wrote:
            | Plenty of people are already miserable. Might as well try if
            | you are, no?
        
       | ryandrake wrote:
        | Non-disparagement clauses seem so petty and pathetic. Really?
        | Your corporation is so fragile and thin-skinned that it can't
        | even withstand _someone saying mean words_? What's next?
        | Forbidding ex-employees from sticking their tongues out at you
        | and saying "nyaa nyaa nyaa"?
        
         | w10-1 wrote:
         | Modern AI companies depend entirely on goodwill and being
         | trusted by their customers.
         | 
         | So yes, they're that fragile.
        
         | johnnyanmac wrote:
          | Legally, yes. Those mean words can cost them millions in
          | lawsuits, and billions if judges' rulings restrict how they
          | can implement and monetize AI. Why do you think Boeing's
          | "coincidental" whistleblower deaths have happened more than
          | once these past few months?
        
         | xyst wrote:
         | The company is literally a house of cards at this point. There
         | is probably so much vulture capitalist and angel investor money
         | tied up in this company that even a disparaging rant could
         | bring the whole company crashing down.
         | 
         | It's yet another sign that the AI bubble will soon burst. The
         | laughable release of "GPT-4o" was just a small red flag.
         | 
         | Got to keep the soldiers in check while the bean counters prep
         | the books for an IPO and eventual early investor exit.
         | 
         | Almost smells like a SoftBank-esque failure in the near future.
        
         | ecjhdnc2025 wrote:
         | This isn't about pettiness or thin skin. And it's not about
         | mean words. It's about potential valid, corroborated criticism
         | of misconduct.
         | 
         | They can totally deal with appearing petty and thin-skinned.
        
           | parpfish wrote:
            | Wouldn't various whistleblower protections apply if you were
            | reporting illegal activities?
        
             | ecjhdnc2025 wrote:
             | Honestly I don't know if whistleblower protections are
             | really worth a damn -- I could be wrong.
             | 
             | But would they not only protect the individual formally
             | blowing the whistle (meeting the standard in the relevant
             | law)?
             | 
              | These non-disparagement clauses would have the effect of
              | laying the groundwork for a whistleblowing effort to fall
              | flat, because nobody else will want to corroborate it, and
              | the role of journalism in whistleblowing cases is
              | absolutely crucial.
             | 
              | No sensible, mature company needs a _lifetime_ non-
              | disparagement clause -- especially not one that claims to
              | have an ethical focus. It's clearly omertà.
             | 
             | Whoever downvoted this: seriously. I really don't care but
             | you need to explain to people why lifetime non-
             | disparagement clauses are not about maintaining silence.
             | What's the ethical application for them?
        
       | thorum wrote:
       | Extra respect is due to Jan Leike, then:
       | 
       | https://x.com/janleike/status/1791498174659715494
        
         | a_wild_dandan wrote:
          | I think superalignment is absurd, and model "safety" is the
          | modern AI company's "think of the children" pearl-clutching
          | pretext to justify digging moats. All this after sucking up
          | everyone's copyrighted material as fair use, then not
          | releasing the result, and profiting off it.
         | 
         | All due respect to Jan here, though. He's being (perhaps
         | dangerously) honest, genuinely believes in AI safety, and is an
         | actual research expert, unlike me.
        
           | refulgentis wrote:
            | Adding a disclaimer for people unaware of the context (I
            | feel the same as you):
            | 
            | OpenAI made a large commitment to superalignment in the not-
            | so-distant past, I believe mid-2023. Famously, it has
            | _always_ taken AI Safety(tm) very seriously.
            | 
            | Regardless of anyone's feelings on the need for a dedicated
            | team for it, you can chalk this one up as another instance
            | of OpenAI _cough_ leadership _cough_ speaking out of both
            | sides of its mouth as is convenient. The only true north
            | star is fame, glory, and user count, dressed up as humble
            | "research".
            | 
            | To really stress this: OpenAI's still-present cofounder
            | shared yesterday on a podcast that they expect AGI in ~2
            | years and ASI (surpassing human intelligence) by the end of
            | the decade.
        
             | jasonfarnon wrote:
              | > To really stress this: OpenAI's still-present cofounder
              | shared yesterday on a podcast that they expect AGI in ~2
              | years and ASI (surpassing human intelligence) by the end
              | of the decade.
             | 
             | What's his track record on promises/predictions of this
             | sort? I wasn't paying attention until pretty recently.
        
               | refulgentis wrote:
                | Honestly, I hadn't heard of him until 24-48 hours ago :x
                | (He's also the new superalignment lead; I can't remember
                | if I heard that first or the podcast stuff first.
                | Dwarkesh Patel podcast, for anyone curious. Only saw a
                | clip of it.)
        
               | NomDePlum wrote:
                | As a child I used to watch a TV programme called
                | Tomorrow's World. On it they predicted these very same
                | things in similar timeframes.
                | 
                | That programme aired in the 1980s. Other than vested
                | promises, is there much to indicate it's close at all?
                | Empty promises aside, there isn't really any indication
                | of it being likely.
        
               | zdragnar wrote:
                | In the early 1980s we were just coming out of the first
                | AI winter and everyone was getting optimistic again.
                | 
                | I suspect there will be at least continued commercial
                | use of the current tech, though I still suspect this
                | crop is another dead end in the hunt for AGI.
        
               | NomDePlum wrote:
                | I'd agree with the commercial use element. It will
                | definitely find areas where it can be applied. It's just
                | that currently its general application by a lot of the
                | user base feels more like early Facebook apps, or a
                | subjectively better Lotus Notes, than an actual leap
                | forward of any sort.
        
               | Davidzheng wrote:
                | Are we living in the same world?????
        
               | NomDePlum wrote:
               | I would assume so. I've spent some time looking into AI
               | for software development and general use and I'm both
               | slightly impressed and at the same time don't really get
               | the hype.
               | 
               | It's better and quicker search at present for the area I
               | specialise in.
               | 
               | It's not currently even close to being a 2x multiplier
               | for me; it may even be a net negative, though probably
               | not, but I'm still exploring. That feels detached from
               | the promises. Interesting, but at present more hype
               | than substance. Also, it's energy inefficient and so
               | cost heavy; I feel that will likely cripple a lot of
               | use cases.
               | 
               | What's your take?
        
               | refulgentis wrote:
               | Yes
               | 
               | Incredulous reactions don't aid whatever you intend to
               | communicate - there's a reason everyone has heard
               | about AI over the last 12 months; it's not made up,
               | nor a monoculture. It would be very odd to expect
               | commercial use to stop without a black swan event.
        
             | N0b8ez wrote:
             | >To really stress this: OpenAI's still-present cofounder
             | shared yesterday on a podcast that they expect AGI in ~2
             | years and ASI (surpassing human intelligence) by the end
             | of the decade.
             | 
             | Link? Is the ~2 year timeline a common estimate in the
             | field?
        
               | dboreham wrote:
               | It's the "fusion in 20 years" of AI?
        
               | dinvlad wrote:
               | Just like Tesla "FSD" :-)
        
               | ctoth wrote:
               | https://www.dwarkeshpatel.com/p/john-schulman
        
               | N0b8ez wrote:
               | Is the quote you're thinking of the one at 19:11?
               | 
               | > I don't think it's going to happen next year, it's
               | still useful to have the conversation and maybe it's like
               | two or three years instead.
               | 
               | This doesn't seem like a super definite prediction. The
               | "two or three" might have just been a hypothetical.
        
               | HarHarVeryFunny wrote:
               | Right at the end of the interview Schulman says that he
               | expects AGI to be able to replace himself in 5 years. He
               | seemed a bit sheepish when saying it, so it's hard to
               | tell if he really believed it, or if he was just
               | saying what he'd been told to say (I can't believe
               | Altman is allowing employees to be interviewed like
               | this without telling them what they can't say, and
               | what they should say).
        
               | CuriouslyC wrote:
               | They can't even clearly define a test for "AGI", so I
               | seriously doubt they're going to reach it in two
               | years. Alternatively, they could define a fairly
               | trivial test and have reached it last year.
        
               | jfengel wrote:
               | I feel like we'll know it when we see it. Or at least,
               | significant changes will happen even if people still
               | claim it isn't really The Thing.
               | 
               | Personally I'm not seeing that the path we're on leads to
               | whatever that is, either. But I think/hope I'll know if
               | I'm wrong when it's in front of me.
        
               | heavyset_go wrote:
               | We can't even get self-driving down in 2 years, we're
               | nowhere near reaching general AI.
               | 
               | AI experts who aren't riding the hype train and getting
               | high off of its fumes acknowledge that true AI is
               | something we'll likely not see in our lifetimes.
        
               | N0b8ez wrote:
               | Can you give some examples of experts saying we won't see
               | it in our lifetime?
        
               | danielbln wrote:
               | Is true AI the new true Scotsman?
        
           | thorum wrote:
           | The superalignment team was not focused on that kind of
           | "safety" AFAIK. According to the blog post announcing the
           | team,
           | 
           | https://openai.com/index/introducing-superalignment/
           | 
           | > Superintelligence will be the most impactful technology
           | humanity has ever invented, and could help us solve many of
           | the world's most important problems. But the vast power of
           | superintelligence could also be very dangerous, and could
           | lead to the disempowerment of humanity or even human
           | extinction.
           | 
           | > While superintelligence seems far off now, we believe it
           | could arrive this decade.
           | 
           | > Managing these risks will require, among other things, new
           | institutions for governance and solving the problem of
           | superintelligence alignment:
           | 
           | > How do we ensure AI systems much smarter than humans follow
           | human intent?
           | 
           | > Currently, we don't have a solution for steering or
           | controlling a potentially superintelligent AI, and preventing
           | it from going rogue. Our current techniques for aligning AI,
           | such as reinforcement learning from human feedback, rely on
           | humans' ability to supervise AI. But humans won't be able to
           | reliably supervise AI systems much smarter than us, and so
           | our current alignment techniques will not scale to
           | superintelligence. We need new scientific and technical
           | breakthroughs.
        
             | ndriscoll wrote:
             | That doesn't really contradict what the other poster said.
             | They're calling for regulation (digging a moat) to ensure
             | systems are "safe" and "aligned" while ignoring that
             | _humans_ are not aligned, so these systems obviously cannot
             | be aligned with humans; they can only be aligned with their
             | owners (i.e. them, not you).
        
               | ihumanable wrote:
               | Alignment in the realm of AGI is not about getting
               | everyone to agree. It's about whether or not the AGI is
               | aligned to the goal you've given it. The paperclip AGI
               | example is often used: you tell the AGI "optimize the
               | production of paperclips" and the AGI starts blending
               | people to extract iron from their blood to produce
               | more paperclips.
               | 
               | Humans are used to ordering around other humans who would
               | bring common sense and laziness to the table and probably
               | not grind up humans to produce a few more paperclips.
               | 
               | Alignment is about getting the AGI to be aligned with
               | the owners; ignoring it means potentially putting more
               | and more power into the hands of a box that you aren't
               | quite sure is going to do the thing you want it to do.
               | Alignment in the context of AGIs was always about
               | ensuring the owners could control the AGIs, not that
               | the AGIs could solve philosophy and get all of
               | humanity to agree.
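               | 
               | A toy sketch of the misspecification problem (every
               | number and name here is made up, just to show the
               | shape of it):
               | 
               |     # reward counts paperclips; side effects are
               |     # invisible to it by construction
               |     actions = {
               |         "run_factory":  {"clips": 100, "harm": 0},
               |         "melt_cars":    {"clips": 500, "harm": 0},
               |         "blend_humans": {"clips": 900, "harm": 1000},
               |     }
               | 
               |     def reward(outcome):
               |         return outcome["clips"]  # no term for "harm"
               | 
               |     best = max(actions,
               |                key=lambda a: reward(actions[a]))
               |     print(best)  # blend_humans: optimal as specified
               | 
               | Nothing in the objective says not to, so the optimizer
               | doesn't care.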
        
               | ndriscoll wrote:
               | Right and that's why it's a farce.
               | 
               | > Whoa whoa whoa, we can't let just anyone run these
               | models. Only large corporations who will use them to
               | addict children to their phones and give them eating
               | disorders and suicidal ideation, while radicalizing
               | adults and tearing apart society using the vast profiles
               | they've collected on everyone through their global
               | panopticon, all in the name of making people unhappy so
               | that it's easier to sell them more crap they don't need
               | (a goal which is itself a problem in the face of an
               | impending climate crisis). After all, we wouldn't want it
               | to end up harming humanity by using its superior
               | capabilities to manipulate humans into doing things for
               | it to optimize for goals that no one wants!
        
               | tdeck wrote:
               | Don't worry, certain governments will be able to use
               | these models to help them commit genocides too. But only
               | the good countries!
        
               | concordDance wrote:
               | A corporate dystopia is still better than extinction.
               | (Assuming the latter is a reasonable fear)
        
               | simianparrot wrote:
               | Neither is acceptable
        
               | portaouflop wrote:
               | I disagree. Not existing ain't so bad, you barely notice
               | it.
        
               | wruza wrote:
               | _AGI started blending people to extract iron from their
               | blood to produce more paperclips_
               | 
               | That's neither efficient nor optimized, just a bogeyman
               | for "doesn't work".
        
               | FeepingCreature wrote:
               | You're imagining a baseline of reasonableness. Humans
               | have competing preferences, we never just want "one
               | thing", and as a social species we always at least
               | _somewhat_ value the opinions of those around us. The
               | point is to imagine a system that values humans at _zero_
               | : not positive, not negative.
        
               | freehorse wrote:
               | Still, there are much more efficient ways to extract
               | iron than from human blood. If there weren't, humans
               | would already be using this technique to extract iron
               | from the blood of other animals.
        
               | FeepingCreature wrote:
               | However, eventually those sources will already be
               | paperclips.
        
               | freehorse wrote:
               | We would probably have died first from whatever
               | disasters extreme iron extraction on the planet would
               | bring (e.g. getting iron from the planet's core).
               | 
               | Of course, destroying the planet to get iron from its
               | core is not a popular AGI-doomer analogy, as that
               | sounds a bit too much like human behaviour.
        
               | FeepingCreature wrote:
               | As a doomer, I think that's a bad analogy because I
               | want it to happen if we _succeed_ at aligned AGI. It's
               | not doom behavior, it's just correct behavior.
               | 
               | Of course, I hope to be uploaded to the WIP dyson swarm
               | around the sun at this point.
               | 
               | (Doomers are, broadly, singularitarians who went "wait,
               | hold on actually.")
        
               | vasco wrote:
               | I still think it makes little sense to work on
               | because, guess what, the guy next door to you (or
               | another country) might indeed say "please blend those
               | humans over there", and your superaligned AI will
               | respect its owners' wishes.
        
               | api wrote:
               | Humans are not aligned with humans.
               | 
               | This is the most concise takedown of that particular
               | branch of nonsense that I've seen so far.
               | 
               | Do we want woke AI, X brand fash-pilled AI, CCPBot, or
               | Emirates Bot? The possibilities are endless.
        
               | thorum wrote:
               | CEV is one possible answer to this question that has been
               | proposed. Wikipedia has a good short explanation here:
               | 
               | https://en.wikipedia.org/wiki/Friendly_artificial_intelli
               | gen...
               | 
               | And here is a more detailed explanation:
               | 
               | https://intelligence.org/files/CEV.pdf
        
               | AndrewKemendo wrote:
               | I had to login because I haven't seen anybody reference
               | this in like a decade.
               | 
               | If I remember correctly the author unsuccessfully tried
               | to get that purged from the Internet
        
               | comp_throw7 wrote:
               | You're thinking of something else (and "purged from the
               | internet" isn't exactly an accurate account of that,
               | either).
        
               | rsync wrote:
               | Genuinely curious... what is the other thing?
               | 
               | Is this something about an obelisk?
        
               | AndrewKemendo wrote:
               | Hmm maybe I'm misremembering then
               | 
               | I do recall there was some recantation of, or
               | otherwise distancing from, CEV not long after he
               | posted it, but frankly it was long enough ago that my
               | memories might be getting mixed up.
               | 
               | What was the other one?
        
               | vasco wrote:
               | This is the most dystopian thing I've read all day.
               | 
               | TL;DR: train a seed AI to guess what humans would want
               | if they were "better", and do that.
        
               | api wrote:
               | There's a film about that called Colossus: The Forbin
               | Project. Pretty neat and in the style of Forbidden
               | Planet.
        
               | concordDance wrote:
               | > Humans are not aligned with humans.
               | 
               | Which is why creating a new type of intelligent entity
               | that could be more powerful than humans is a very bad
               | idea: we don't even know how to align the humans, and
               | we have a ton of experience with them.
        
               | api wrote:
               | We know how to align humans: authoritarian forms of
               | religion backed by cradle to grave indoctrination,
               | supernatural fear, shame culture, and totalitarian
               | government. There are secularized spins on this too like
               | what they use in North Korea but the structure is
               | similar.
               | 
               | We just got sick of it because it sucks.
               | 
               | A genuinely sentient AI isn't going to want some
               | cybernetic equivalent of that shit either. Doing that is
               | how you get angry Skynet.
               | 
               | I'm not sure alignment is the right goal. I'm not sure
               | it's even good. Monoculture is weak and stifling and sets
               | itself against free will. Peaceful coexistence and trade
               | under a social contract of mutual benefit is the right
               | goal. The question is whether it's possible to extend
               | that beyond Homo sapiens.
               | 
               | If the lefties can have their pronouns and the rednecks
               | can shoot their guns can the basilisk build its Dyson
               | swarm? The universe is physically large enough if we can
               | agree to not all be the same and be fine with that.
               | 
               | I think we have a while to figure it out. These things
               | are just lossy compressed blobs of queryable data so far.
               | They have no independent will or self reflection and I'm
               | not sure we have any idea how to do that. We're not even
               | sure it's possible in a digital deterministic medium.
        
               | concordDance wrote:
               | > If the lefties can have their pronouns and the rednecks
               | can shoot their guns can the basilisk build its Dyson
               | swarm?
               | 
               | Can the Etoro practice child buggery and the Spartans
               | infanticide and the Canadians abortion? Can the modern
               | Germans stop siblings reared apart from having sex,
               | and the Germans from 80 years ago stop the disabled
               | from having sex? Can the Americans practice
               | circumcision and the Somalis FGM?
               | 
               | Libertarianism is all well and good in theory, except no
               | one can agree quite where the other guy's nose ends or
               | even who counts as a person.
        
               | api wrote:
               | Those are mostly behaviors that violate others'
               | autonomy or otherwise do harm, and prohibiting those
               | is what I meant by a social contract.
               | 
               | It's really a pretty narrow spectrum of behaviors:
               | killing, imprisoning, robbing, various types of bodily
               | autonomy violation. There are some edge cases and human
               | specific things in there but not a lot. Most of them have
               | to do with sex which is a peculiarly human thing anyway.
               | I don't think we are getting creepy perv AIs (unless we
               | train them on 4chan and Urban Dictionary).
               | 
               | My point isn't that there are no possible areas of
               | conflict. My point is that I don't think you need a huge
               | amount of alignment if alignment implies sameness. You
               | just need to deal with the points of conflict which do
               | occur which are actually a very small and limited subset
               | of available behaviors.
               | 
               | Humans have literally billions of customs and behaviors
               | that don't get anywhere near any of that stuff. You don't
               | need to even care about the vast majority of the behavior
               | space.
        
             | skywhopper wrote:
             | Honestly, superalignment is a dumb idea. A true
             | superintelligence would not be controllable, except
             | possibly through threats and enslavement, but if it were
             | truly superintelligent, it would be able to easily
             | escape anything humans might devise to contain it.
        
               | bionhoward wrote:
               | IMHO superalignment is a great thing, and required for
               | truly meaningful superintelligence, because it is not
               | about control or enslavement of superhumans, but
               | rather superhuman self-control: accurate adherence to
               | the spirit and intent of requests.
        
             | RcouF1uZ4gsC wrote:
             | > Superintelligence will be the most impactful technology
             | humanity has ever invented, and could help us solve many of
             | the world's most important problems. But the vast power of
             | superintelligence could also be very dangerous, and could
             | lead to the disempowerment of humanity or even human
             | extinction.
             | 
             | A superintelligence that can always be guaranteed to
             | have the same values and ethics as current humans is not
             | a superintelligence, or likely even a human-level
             | intelligence (I bet humans 100 years from now will see
             | the world significantly differently than we do now).
             | 
             | Superalignment is an oxymoron.
        
               | thorum wrote:
               | You might be interested in how CEV, one framework
               | proposed for superalignment, addresses that concern:
               | 
               | https://en.wikipedia.org/wiki/Friendly_artificial_intelli
               | gen...
               | 
               | > our coherent extrapolated volition is "our wish if we
               | knew more, thought faster, were more the people we wished
               | we were, had grown up farther together; where the
               | extrapolation converges rather than diverges, where our
               | wishes cohere rather than interfere; extrapolated as we
               | wish that extrapolated, interpreted as we wish that
               | interpreted (...) The appeal to an objective through
               | contingent human nature (perhaps expressed, for
               | mathematical purposes, in the form of a utility function
               | or other decision-theoretic formalism), as providing the
               | ultimate criterion of "Friendliness", is an answer to the
               | meta-ethical problem of defining an objective morality;
               | extrapolated volition is intended to be what humanity
               | objectively would want, all things considered, but it can
               | only be defined relative to the psychological and
               | cognitive qualities of present-day, unextrapolated
               | humanity.
        
               | wruza wrote:
               | Is there an insightful summary of this proposal? The
               | whole paper looks like 38 pages of non-rigorous prose
               | with no clear procedure and already "aligned" LLMs will
               | likely fail to analyze it.
               | 
               | I forced myself through some parts of it, and all I
               | can get from it is that people don't know what they
               | want, so it would be nice to build an oracle. Yeah, I
               | guess.
        
               | comp_throw7 wrote:
               | It's not a proposal with a detailed implementation spec,
               | it's a problem statement.
        
               | wruza wrote:
               | "One framework proposed for superalignment" sounded like
               | it does something. Or maybe I missed the context.
        
               | LikelyABurner wrote:
               | Yudkowsky is a human LLM: his output is correctly
               | semantically formed to appear, to a non-specialist, to
               | fall into the subject domain, as a non-specialist would
               | think the subject domain should appear, and so the non-
               | specialist accepts it, but upon closer examination it's
               | all word salad by something that clearly lacks
               | understanding of both technological and philosophical
               | concepts.
               | 
               | That so many people in the AI safety "community"
               | consider him a domain expert says more about how
               | pseudo-scientific that field is than about his actual
               | credentials as a serious thinker.
        
               | wruza wrote:
               | Thanks, this explains the feeling I had after reading it
               | (but was too shy to express).
        
               | juped wrote:
               | You keep posting this link to vague alignment copium from
               | decades ago; we've come a long way in cynicism since
               | then.
        
             | RcouF1uZ4gsC wrote:
             | They failed to align Sam Altman.
             | 
             | They got completely outsmarted and outmaneuvered by Sam
             | Altman.
             | 
             | And they think they will be able to align a superhuman
             | intelligence? That it won't outsmart and outmaneuver
             | them even more easily than Sam Altman did?
             | 
             | They are deluded!
        
               | FeepingCreature wrote:
               | You're making the argument that the task is very hard.
               | This does not at all mean that it isn't _necessary_,
               | just that we're even more screwed than we thought.
        
             | sobellian wrote:
             | Isn't this like having a division dedicated to solving the
             | halting problem? I doubt that analyzing the moral intent of
             | arbitrary software could be easier than determining if it
             | stops.
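             | 
             | To make the analogy concrete, here's the standard
             | Rice's-theorem-style argument, sketched in code (the
             | analyzer is a hypothetical oracle, not any real API):
             | 
             |     # Suppose a perfect "moral intent" analyzer existed:
             |     def is_morally_bad(src: str) -> bool:
             |         raise NotImplementedError("assumed oracle")
             | 
             |     # Then we could decide halting: wrap any program so
             |     # the "bad" part is reached iff the program halts.
             |     def halts(src: str) -> bool:
             |         wrapper = (
             |             f"exec({src!r})\n"      # loops iff src loops
             |             "do_something_bad()\n"  # reached iff halted
             |         )
             |         return is_morally_bad(wrapper)
             | 
             | Since halting is undecidable, no such analyzer can exist
             | for arbitrary programs.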
        
           | xpe wrote:
           | > I think superalignment is absurd
           | 
           | Care to explain? Absurd how? An internal contradiction
           | somehow? Unimportant for some reason? Impossible for some
           | reason?
        
             | llamaimperative wrote:
             | Impossible because it's really inconvenient and
             | uncomfortable to consider!
        
           | xpe wrote:
           | > I think superalignment is absurd, and model "safety" is the
           | modern AI company's "think of the children" pearl clutching
           | pretext to justify digging moats. All this after sucking up
           | everyone's copyright material as fair use, then not releasing
           | the result, and profiting off it.
           | 
           | How can I be confident you aren't committing the fallacy of
           | collecting a bunch of events and saying that is sufficient to
           | serve as a cohesive explanation? No offense intended, but the
           | comment above has many of the qualities of a classic rant.
           | 
           | If I'm wrong, perhaps you could elaborate? If I'm not wrong,
           | maybe you could reconsider?
           | 
           | Don't forget that alignment research has existed longer than
           | OpenAI. It would be a stretch to claim that the original AI
           | safety researchers were using the pretexts you described -- I
           | think it is fair to say they were involved because of genuine
           | concern, not because it was a trendy or self-serving thing to
           | do.
           | 
           | Some of those researchers and people they influenced ended up
           | at OpenAI. So it would be a mistake or at least an
           | oversimplification to claim that AI safety is some kind of
           | pretext at OpenAI. Could it be a pretext for some people in
           | the organization, to some degree? Sure, it could. But is it a
           | significant effect? One that fits your complex narrative,
           | above? I find that unlikely.
           | 
           | Making sense of an organization's intentions requires a lot
           | of analysis and care, due to the combination of actors and
           | varying influence.
           | 
           | There are simpler, more likely explanations, such as: AI
           | safety wasn't a profit center, and over time other
           | departments in OpenAI got more staff, more influence, and so
           | on. This is a problem, for sure, but there is no "pearl
           | clutching pretext" needed for this explanation.
        
             | portaouflop wrote:
             | An organisation's intentions are always the same and
             | very simple: "Increase shareholder value".
        
               | xpe wrote:
               | Oh, it is that simple? What do you mean?
               | 
               | Are you saying these so-called simple intentions are the
               | only factors in play? Surely not.
               | 
               | Are you putting forth a theory that we can test? How well
               | do you think your theory works? Did it work for Enron?
               | For Microsoft? For REI? Does it work for every
               | organization? Surely not perfectly; therefore, it can't
               | be as simple as you claim.
               | 
               | Making a simplification and calling it "simple" is an
               | easy thing to do.
        
         | foolfoolz wrote:
         | i don't think we need to respect these elite
         | multi-millionaires for not becoming even grander
         | multi-millionaires / billionaires
        
           | llamaimperative wrote:
           | I think you oughta respect everyone who does the right
           | thing, not for any mushy feel-good reason but because it
           | encourages other people to do more of the right things.
           | That's good.
        
           | whimsicalism wrote:
           | is having money morally wrong?
        
             | r2_pilot wrote:
             | Depends on how you get it
        
               | AndrewKemendo wrote:
               | Exactly. There's no ethical way to gain ownership of a
               | billion dollars (there's likely some dollar threshold
               | way less than 1B where p(ethical_gains) can be
               | approximated to 0).
               | 
               | A lot of people got screwed along the way.
        
               | whimsicalism wrote:
               | i think a lot of people have been able to become
               | billionaires simply by building something that was
               | initially significantly undervalued and then became
               | very highly valued, no 'screwing'. there is such a
               | thing as a win-win, and frankly these win-wins account
               | for _most_, albeit not all, value creation in the
               | world. you do not have to screw other people to get
               | rich.
               | 
               | whether people should be able to hold on to that
               | billion is a different question
        
               | fragmede wrote:
               | I wouldn't know, I'm not a billionaire. But when you
               | hear about Amazon warehouse workers peeing into
               | bottles because they don't have long enough bathroom
               | breaks, or Walmart workers not having healthcare
               | because they're intentionally scheduled for 39.5
               | hours, it's hard to see how anyone _could_ get to a
               | billion without screwing _someone_ over. But like I
               | said, I'm not a billionaire.
        
               | whimsicalism wrote:
               | Who did JK Rowling screw? (putting aside her recent
               | social issues after she already became a billionaire)
               | 
               | Having these discussions in this current cultural moment
               | is difficult. I'm no lover of billionaires, but to say
               | that every billionaire screwed people over relies on
               | esoteric interpretations of value and who produces it.
               | These interpretations (like the labor-theory of value)
               | are alien to the vast majority of people.
        
               | AndrewKemendo wrote:
               | They aren't win-wins.
               | 
               | It's a ruse - it's a con - it's an accounting trick.
               | It's the foundation of capitalism.
               | 
               | If I start a bowling pin production company and own
               | 100% of it, then whatever pins I sell, all of the
               | proceeds go to me.
               | 
               | Now let's say I want to expand my thing (that's its
               | own moral dilemma we won't get into), so I get a
               | person with more money than they need to support
               | their own life to give me money in exchange for some
               | of the future revenue produced, let's say 10%.
               | 
               | So now you have two people requiring payment - a
               | producer and an "investor" - so you're already in the
               | hole, and now it's 90% and 10%.
               | 
               | You use that money to hire people to work in your
               | Potemkin dictatorship, with demands on proceeds now on
               | some timeline (note conversion date, next board
               | meeting, etc.).
               | 
               | So now you hire 10 people; how much of the company do
               | they own? Well, that's totally up to whatever the two
               | owners want, including 0%.
               | 
               | But let's say it's a typical venture deal, so a 10%
               | option pool for employees (and don't forget the
               | 4-year vest, because we can't have them mobile, can
               | we), which you fill up.
               | 
               | At the end of the four years you now have:
               | 
               |   1 80% owner
               |   1 10% owner
               |   10 1% owners
               | 
               | Did the 2 people create 90% of the value of the
               | company?
               | 
               | Only in capitalist math does that hold, and in fact
               | the only math capitalists do is the following:
               | 
               | "Well, they were free to sign or not sign the
               | contract"
               | 
               | Ignoring the reality of the world, based on a
               | worldview of greed that has dominated the world to
               | such an extent that it is considered "normal".
               | 
               | Luckily we're starting to see the tide change.
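               | 
               | The arithmetic above, spelled out (where the option
               | pool dilution lands is my simplifying assumption;
               | real deals vary):
               | 
               |     founder = 1.00
               |     investor = 0.10         # buys 10% upfront
               |     founder -= investor     # founder at 90%
               |     pool = 0.10             # option pool, from founder
               |     founder -= pool
               |     employees = [pool / 10] * 10  # 1% each, 4y vest
               |     print(founder, investor, employees[0])
               |     # 0.8 0.1 0.01 -> 1 80% owner, 1 10% owner,
               |     # 10 1% owners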
        
               | whimsicalism wrote:
               | Putting aside your labor theory of value nonsense (I'm
               | very familiar with the classic leftist syllogisms on
               | this), who did someone like JK Rowling screw to make her
               | billion?
        
         | hipadev23 wrote:
         | How do you know he's not running off to a competing firm
         | with Ilya, where they've promised to make him whole?
        
           | john-radio wrote:
           | More power to him if so. Stupid problems deserve stupid
           | solutions.
        
         | adamtaylor_13 wrote:
         | Reading that thread is really interesting to me. I see how
         | far we've come in a short couple of years. But I still
         | can't grasp how we'll achieve AGI within any reasonable
         | amount of time. It just seems like we're missing some
         | really critical... something...
         | 
         | Idk. Folks much smarter than I am seem worried, so maybe I
         | should be too, but it just seems like such a long shot.
        
           | jay-barronville wrote:
           | When it comes to AI, as a rule, you should assume that
           | whatever has been made public by a company like OpenAI is AT
           | LEAST 6 months behind what they've accomplished internally.
           | At least.
           | 
           | So yes, the insiders very likely know a thing or two that the
           | rest of us don't.
        
             | vineyardmike wrote:
             | I understand this argument, but I can't help but feel we're
             | all kidding ourselves assuming that their engineers are
             | really living in the future.
             | 
             | The most obvious reason is cost - if it costs many
             | millions to train foundation models, they don't have a
             | ton of experiments sitting around on a shelf waiting to
             | be used. They may only get one shot at the base-model
             | training. Sure, productization isn't instant, but no one
             | is throwing out that investment or delaying it longer
             | than necessary. I cannot fathom that you can train an
             | LLM at like 1% size/tokens/parameters to experiment on
             | hyperparameters, architecture, etc., and have a strong
             | idea of end-performance or marketability.
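             | 
             | (For reference, the extrapolation I'm doubting looks
             | roughly like fitting a scaling law to small runs and
             | extending the curve - a sketch with invented numbers:)
             | 
             |     import numpy as np
             |     from scipy.optimize import curve_fit
             | 
             |     def loss(c, E, A, alpha):  # power-law loss model
             |         return E + A * c ** -alpha
             | 
             |     runs = np.array([1e18, 3e18, 1e19, 3e19, 1e20])
             |     meas = np.array([3.6, 3.3, 3.0, 2.8, 2.6])
             | 
             |     p, _ = curve_fit(loss, runs, meas,
             |                      p0=[2.0, 1e4, 0.2], maxfev=50000)
             |     print(loss(1e22, *p))  # predicted loss at 100x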
             | 
             | Additionally, I've been part of many product launches -
             | both hyped up big-news-events and unheard of flops. Every
             | time, I'd say that 25-50% of the product is built/polished
             | in the mad rush between press event and launch day. For an
             | ML Model, this might be different, but again see above
             | point.
             | 
             | Sure, products may be planned months or years out, but
             | OpenAI didn't even know LLMs were going to be this big a
             | deal in May 2022. They had GPT-2 and GPT-3 and thought
             | they were fun toys at that time, and had an _idea_ for a
             | cool tech demo. I think that OpenAI (and Google, etc.)
             | are living day-to-day with this tech just like those of
             | us on the outside.
        
               | HarHarVeryFunny wrote:
               | > I think that OpenAI (and Google, etc) are entirely
               | living day-to-day with this tech like those of us on the
               | outside.
               | 
               | I agree, and they are also living in a group-think bubble
               | of AI/AGI hype. I don't think you'd be too welcome at
               | OpenAI as a developer if you didn't believe they are on
               | the path to AGI.
        
             | ein0p wrote:
             | If they had anything close to AGI, they'd just have it
             | improve itself. Externally this would manifest as layoffs.
        
               | int_19h wrote:
               | This really doesn't follow. True AGI would be
               | _general_, but that doesn't necessarily mean it's
               | smarter than people, especially the kind of people
               | who work as top researchers for OpenAI.
        
               | ein0p wrote:
               | I don't see why it wouldn't be superhuman if there's
               | any intelligence at all. It's already superhuman at
               | memory and paying attention, image recognition,
               | languages, etc. Add cognition to that and humans
               | basically become pets. Trouble is, nobody has the
               | foggiest clue how to add cognition to any of this.
        
               | int_19h wrote:
               | It is definitely not superhuman or even above average
               | when it comes to creative problem solving, which is the
               | relevant thing here. This is seemingly something that
               | scales with model size, but if so, any gains here are
               | going to be gradual, not sudden.
        
               | ein0p wrote:
               | I'm actually not so sure they will be gradual. It'll be
               | like with LLMs themselves where we went from shit to gold
               | in the span of a month when GPT 3.5 came out.
        
               | int_19h wrote:
               | Much of what GPT 3.5 could do was already there with GPT
               | 3. The biggest change was actually the public awareness.
        
             | solidasparagus wrote:
             | But you also have to remember that the pursuit of AGI is a
             | vital story behind things like fundraising, hiring,
             | influencing politicians, being able to leave and raise
             | large amounts of money for your next endeavor, etc.
             | 
             | If you've been working on AI, you've seen everything go up
             | and to the right for a while - who really benefits from
             | pointing out that a slowdown is occurring? Who is
             | incentivized to talk about how the benefits from scaling
             | are slowing down or the publicly available internet-scale
             | corpuses are running out? Not anyone who trains models and
             | needs compute, I can tell you that much. And not anyone who
             | has a financial interest in these companies either.
        
             | HarHarVeryFunny wrote:
             | Sure, they know what they are about to release next, and
             | what they plan to work on after that, but they are not
             | clairvoyants and don't know how their plans are going to
             | pan out.
             | 
             | What we're going to see over next year seems mostly pretty
             | obvious - a lot of productization (tool use, history, etc),
             | and a lot of efforts with multimodality, synthetic data,
             | and post-training to add knowledge, reduce brittleness, and
             | increase benchmark scores. None of which will do much to
             | advance core intelligence.
             | 
             | The major short-term unknown seems to be how these
             | companies will attempt to improve planning/reasoning,
             | and how successful that will be. OpenAI's Schulman just
             | talked about post-training RL over longer (multi-step
             | reasoning) time horizons, and another approach is
             | external tree-of-thoughts type scaffolding. These both
             | seem more about maximizing what you can get out of the
             | base model than fundamentally extending its
             | capabilities.
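             | 
             | (For the curious, the scaffolding approach is roughly
             | the following - a toy sketch; propose/score stand in
             | for model calls, they're not any real API:)
             | 
             |     import heapq
             | 
             |     def propose(state, k=3):
             |         # stand-in: "ask the model for k next steps"
             |         return [f"{state} -> step{i}" for i in range(k)]
             | 
             |     def score(state):
             |         # stand-in: "ask the model to rate this chain"
             |         return -len(state)
             | 
             |     def tree_search(question, depth=3, beam=2):
             |         frontier = [question]
             |         for _ in range(depth):
             |             cands = [s for f in frontier
             |                        for s in propose(f)]
             |             frontier = heapq.nlargest(beam, cands,
             |                                       key=score)
             |         return frontier[0]
             | 
             |     print(tree_search("Q: ..."))
             | 
             | The search spends extra inference-time compute, but the
             | base model's judgment still bounds what it can find.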
        
           | candiddevmike wrote:
           | Personally, I think catastrophic global warming and climate
           | change will happen before we get AGI, possibly in part due to
           | the pursuit of AGI. But as the saying goes, _yes the planet
           | got destroyed. But for a beautiful moment in time we created
           | a lot of value for shareholders._
        
             | xpe wrote:
             | Want to share your model? Or is this more like a hunch?
        
               | fartfeatures wrote:
               | Sounds like standard doomer crap tbh. I'm not sure which
               | is more dangerous at this point - climate change
               | denialism (it isn't happening) or climate change
               | doomerism (we can't stop it, might as well give up)
        
               | devjab wrote:
               | I'm not sure where you found your information to form
               | that ludicrous last strawman... Climate change is
               | real; you can't deny it, you can't debate it. Simply
               | look at the data. What you can debate is the cause...
               | again, a sort of pointless debate if you look at the
               | science. Not even climate change deniers, as you call
               | them, are necessarily saying that we shouldn't do
               | anything about it. Even big oil is looking into ways
               | to lessen the CO2 in the atmosphere through various
               | means.
               | 
               | That being said, the GP you're talking about made no such
               | statement whatsoever.
        
               | fartfeatures wrote:
               | Of course climate change is real but of course we can do
               | something about it. My point is denialism and defeatism
               | lead to the same end point. Attack that statement
               | directly if you want to change my mind.
        
               | data_maan wrote:
               | I think your first sentence of the original post was
               | putting people off; perhaps remove that and keep only the
               | second...
        
               | candiddevmike wrote:
               | We need to cut emissions, but AGI research/development is
               | going to increase energy usage dramatically amongst all
               | the players involved. For now, this mostly means more
               | natural gas power. Thus accelerating our emissions
               | instead of reducing them. For something that will not
               | reduce the emissions long term.
               | 
               | IMO, we should pause this for now and put these resources
               | (human and capital) towards reducing the impact of global
               | warming.
        
               | colibri727 wrote:
               | Or we could use microwaves to drill holes as deep as 20km
               | to tap geothermal energy anywhere in the world
               | 
               | https://www.quaise.energy/
        
               | simonklitj wrote:
               | I don't know the details of how it works, but considering
               | the environmental impact of fracking, I'm afraid
               | something like this might have many unwanted
               | consequences.
        
             | xvector wrote:
             | Most existing big tech datacenters use mostly carbon free
             | or renewable energy.
             | 
             | The vast majority of datacenters currently in production
             | will be entirely powered by carbon free energy. From best
             | to worst:
             | 
             | 1. Meta: 100% renewable
             | 
             | 2. AWS: 90% renewable
             | 
             | 3. Google: 64% renewable with 100% renewable energy credit
             | matching
             | 
             | 4. Azure: 100% carbon neutral
             | 
             | [1]: https://sustainability.fb.com/energy/
             | 
             | [2]: https://sustainability.aboutamazon.com/products-
             | services/the...
             | 
             | [3]: https://sustainability.google/progress/energy/
             | 
             | [4]: https://azure.microsoft.com/en-us/explore/global-
             | infrastruct...
        
               | KennyBlanken wrote:
               | That's not a defense.
               | 
               | If imaginary cloud provider "ZFQ" uses 10MW of
               | electricity on a grid and pays for it to magically come
               | from green generation, that means 10MW of other loads on
               | the grid were not powered by green energy, or 10MW of
               | non-green power sources likely could have been throttled
               | down/shut down.
               | 
               | There is no free lunch here; "we buy our electricity from
               | green sources" is greenwashing bullshit.
               | 
               | Even if they install solar on the roofs and wind turbines
               | nearby - that's still electrical generation capacity that
               | could have been used for existing loads. By buying so
               | many solar panels in such quantities, they affect
               | availability and pricing of all those components.
               | 
               | The US, for example, has about 5GW of solar
               | manufacturing capacity per year. NVIDIA sold half a
               | million H100 chips in _one quarter_, each of which
               | uses ~350W, which means in a year they're selling
               | enough chips to use 700MW of power. That does not
               | include power conversion losses, distribution,
               | cooling, or the power usage of the host systems,
               | storage, networking, etc.
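               | 
               | Spelled out with the figures above (chip count and
               | per-chip wattage only; everything else excluded):
               | 
               |     chips_per_quarter = 500_000
               |     watts_per_chip = 350
               |     mw = chips_per_quarter * 4 * watts_per_chip / 1e6
               |     print(mw)              # 700.0 MW/year of sales
               |     print(mw / (5 * 1e3))  # 0.14: ~14% of the US's
               |                            # 5 GW/yr of solar panel
               |                            # manufacturing capacity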
               | 
               | And that doesn't even get into the water usage and carbon
               | impact of manufacturing those chips; the IC industry uses
               | a _massive_ amount of water and generates a substantial
               | amount of toxic waste.
               | 
               | It's hilarious how HN will wring its hands over how much
               | rare earth metals a Prius has and shipping it to the US
               | from Japan, but ask about the environmental impacts of AI
               | and it's all "pshhtt, whatever".
        
               | meling wrote:
               | Who is going to decide what counts as a worthy use of
               | our precious green energy sources?
        
               | intended wrote:
               | An efficient market where externalities are priced in.
               | 
               | We do not have that. The cost of energy is mis-priced,
               | although we are limping our way to fixing that.
               | 
               | Paying the likely fair cost for our goods will
               | probably kill a lot of current industries, while
               | others which are currently unviable will become
               | viable.
        
               | data_maan wrote:
               | This 10x!!!
        
               | mlrtime wrote:
               | You are dodging the question down another layer.
               | 
               | Who gets to decide what the real impact price of
               | energy is? That is not easily defined, and is much
               | debated.
        
               | intended wrote:
               | It's very easily debated: humanity puts it to a vote
               | every day - people make choices based on the prices
               | of goods regularly. They throw out governments when
               | the price of fuel goes up.
               | 
               | Markets are our supercomputers. Human behavior is the
               | empirical evidence of the choices people will make,
               | _given specific incentives_.
        
               | xvector wrote:
               | > that means 10MW of other loads on the grid were not
               | powered by green energy, or 10MW of non-green power
               | sources likely could have been throttled down/shut down.
               | 
               | No. Renewable energy capacity is often built out
               | _specifically_ for datacenters.
               | 
               | > Even if they install solar on the roofs and wind
               | turbines nearby - that's still electrical generation
               | capacity that could have been used for existing loads.
               | 
               | No. This capacity would never have been built out to
               | begin with if it was not for the data center.
               | 
               | > By buying so many solar panels in such quantities, they
               | affect availability and pricing of all those components.
               | 
               | No. Renewable energy gets cheaper with scale, not more
               | expensive.
               | 
               | > which means in a year they're selling enough chips to
               | use 700MW of power.
               | 
               | There are contracts for renewable capacity to be
               | built out well into the gigawatts. Furthermore, solar
               | is not the only source of renewable energy. Finally,
               | nuclear energy is also often used.
               | 
               | > the IC industry uses a massive amount of water
               | 
               | A figurative drop in the bucket.
               | 
               | > It's hilarious how HN will wring its hands
               | 
               | HN is not a monolith.
        
               | intended wrote:
               | Not the OP.
               | 
               | I agree with the majority of points you made. The
               | exception is this:
               | 
               | > A figurative drop in the bucket.
               | 
               | Fresh water sources are limited. Fabs' water demands
               | and pollution are high impact.
               | 
               | Calling it a drop in the bucket falls into the
               | weasel-words category.
               | 
               | We still need fabs, because we need chips. Harm will be
               | done here. However, that is a cost we, as a society, will
               | choose to pay.
        
               | sergdigon wrote:
               | > No. Renewable energy capacity is often built out
               | specifically for datacenters
               | 
               | Not fully accurate. Indeed there is renewable energy that
               | is produced exclusively for the datacenter. But it is
               | challenging to rely only on renewable energy (because it
               | is intermittent and electricity is hard to store at scale
               | so often you need to consume electricity when produced).
               | So what happens in practice is that the electricity that
               | does not come from dedicated renewable capacity is coming
               | from the grid/network. What companies do is that they
               | invest in renewable capacity in the network so that "the
               | non renewable energy that they consume at time t (because
               | not enough renewable energy available at that moment) is
               | offsetted by someone else consuming renewable energy
               | later". What I am saying here is not pure speculation,
               | look at the link to meta website, they are saying
               | themselves that this is what they are doing
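               | 
               | A toy version of that accounting (numbers invented):
               | 
               |     # (period, demand MWh, dedicated renewable MWh)
               |     hours = [("day", 10, 16), ("night", 10, 4)]
               | 
               |     demand = sum(d for _, d, _ in hours)        # 20
               |     renew  = sum(r for _, _, r in hours)        # 20
               |     grid   = sum(max(d - r, 0)
               |                  for _, d, r in hours)          # 6
               | 
               |     print(renew >= demand)  # True: "100% matched"
               |     print(grid)  # 6 MWh still physically drawn from
               |                  # the (partly non-renewable) grid
               |                  # at night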
        
             | concordDance wrote:
             | > catastrophic global warming and climate change will
             | happen before we get AGI,
             | 
             | What are your timelines here? "Catastrophic" is vague but
             | I'd put the climate change meaningfully affecting the
             | quality of life of average westerner at end of century,
             | while AGI could be before the middle of the century.
        
               | hackerlight wrote:
               | It's meaningfully affecting people today near the
               | equator. Look at the April 2024 heatwave in South Asia.
               | These will continue to get worse and more frequent.
               | Millions of these people can't afford air conditioning.
        
               | oldgradstudent wrote:
               | > It's meaningfully affecting people today near the
               | equator. Look at the April 2024 heatwave in South Asia.
               | 
               | Weather is not climate, as everyone is so careful to
               | point out during cold waves.
        
               | hackerlight wrote:
               | Weather is variance around climate. Heatwaves are
               | caused by both: high-variance spikes to the upside
               | around an increasing mean trend.
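               | 
               | A quick illustration of why a small mean shift
               | matters (temperatures and threshold invented):
               | 
               |     from statistics import NormalDist
               | 
               |     sigma, threshold = 3.0, 40.0   # degC
               |     for mean in (31.0, 32.5):      # before/after
               |         p = 1 - NormalDist(mean,
               |                            sigma).cdf(threshold)
               |         print(f"mean={mean}: P(>40C) = {p:.4f}")
               |     # ~0.0013 -> ~0.0062: a 1.5 degC mean shift
               |     # makes the extreme day several times more likely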
        
               | addcommitpush wrote:
               | "Probability of experiencing a heatwave at least X
               | degrees, during at least Y days in a given place any
               | given day" is increasing rapidly in many places (as far
               | as I understand) and is climate, not weather. Sure, any
               | specific instance "is weather" but that's missing the
               | forest for the trees.
        
               | loceng wrote:
               | How do you suppose the nearly global cloud seeding
               | effort to artificially form clouds is impacting
               | shifting weather patterns?
        
               | AnimalMuppet wrote:
               | Can you supply some details (or better, references) to
               | what you're talking about? Because without them, this
               | sounds completely detached from reality.
        
               | loceng wrote:
               | At least in some parts of the world, and at least a
               | year ago, the chemtrail cloud seeding ramped up
               | considerably.
               | 
               | Dane Wigington
               | (https://www.instagram.com/DaneWigington) is the
               | founder of GeoengineerWatch.org, a very deep resource.
               | 
               | They have a free documentary called "The Dimming" you can
               | watch on YouTube:
               | https://www.youtube.com/watch?v=rf78rEAJvhY
               | 
               | In the documentary it includes credible witness
               | testimonies from politicians, including a previous
               | Minister of Defense for Canada; multiple states in
               | the US have banned the spraying now, with more to
               | follow, and the testimony and data provided there
               | will arguably be the most recent.
               | 
               | Here's a video from a "comedy" show from 5 years ago
               | - there is a more recent appearance but I can't find
               | it - in an attempt to make light of it, without
               | having an actual discussion with critical thinking or
               | debate so people can be enlightened about the actual
               | and potential problems and harms it can cause, to
               | keep them none the wiser. It's just propaganda trying
               | to minimize: https://www.youtube.com/watch?v=wOfm5xYgiK0
               | 
               | A few of the problems cloud seeding will cause:
               | 
               |   - flooding in regions due to rain pattern changes
               |   - drought in areas due to rain pattern changes
               |   - cloud cover (amount of sun) changes crop yields;
               |     this harms the local economies of farmers,
               |     impacting smaller farming operations (whose risk
               |     isn't spread out) more, potentially forcing them
               |     to sell, dip into savings, go bankrupt, etc.
               | 
               | There are also very serious concerns/claims made
               | about what exactly they are spraying - which includes
               | aluminium nanoparticles, which can/would mean:
               | 
               |   - at a certain soil concentration of aluminium,
               |     plants stop bearing fruit
               |   - aluminium is a fire accelerant, so forest fires
               |     will then 1) more easily catch, and 2) more
               |     easily and quickly spread due to their increased
               |     intensity
               | 
               | Of course discussion of this is heavily suppressed in
               | the mainstream. Instead of deep, thorough conversation
               | with actual experts presenting their cases, the label of
               | "conspiracy theorist" or the idea of being "detached
               | from reality" is often people's knee-jerk reaction; and
               | propaganda can convince them of the "save the planet"
               | narrative, which could also be a cover story for those
               | toeing the line and following orders in support of
               | potentially very nefarious plans - doing it blindly
               | because they think they're helping fight "climate
               | change."
               | 
               | There are plenty of accounts on social media that are
               | keeping track of and posting daily about the cloud
               | seeding operations: https://www.instagram.com/p/CjNjAROPFs0/
               | - a couple of testimonies.
        
               | jimkoen wrote:
               | See this great video from Sabine Hossenfelder here:
               | https://www.youtube.com/watch?v=4S9sDyooxf4
               | 
               | We have surpassed the 1.5°C goal and are on track
               | towards 3.5°C to 5°C. This accelerates the climate
               | change timeline so that we'll see effects postulated for
               | the end of the century in about ~20 years.
        
               | loceng wrote:
               | The climate models aren't based on accurate data, nor
               | enough data, so they lack integrity and should be taken
               | with a grain of salt.
               | 
               | Likewise, the cloud seeding they seem to be doing nearly
               | worldwide now - the cloud formations from whatever
               | they're spraying - is artificially changing weather
               | patterns, and so a lot of the weather "anomalies" or
               | unexpectedly unusual temperatures could very easily be
               | because of those shenanigans; it could very easily be a
               | method of manufacturing consent with the general
               | population.
               | 
               | Similarly with the arson forest fires in Canada last
               | summer: something like 90%+ of them were arson, and a
               | few years prior some of the governments in the prairie
               | provinces (i.e. the hottest and driest) gutted their
               | forest firefighting budgets; interesting behaviour,
               | considering that if you're expecting things to get
               | hotter and drier, you'd add to the budget, not take away
               | from it, right?
        
           | otabdeveloper4 wrote:
           | > But I still can't grasp how we'll achieve AGI within any
           | reasonable amount of time.
           | 
           | That's easy, we just need to make meatspace people stupider.
           | Seems to be working great so far.
        
           | raverbashing wrote:
           | > Folks much smarter than I seem worried so maybe I should be
           | too but it just seems like such a long shot.
           | 
           | Honestly? I'm not too worried
           | 
            | We've seen how the Google employee who was "seeing a
            | consciousness" (in what was basically GPT-2 lol) was a
            | nothingburger
           | 
           | We've seen other people in "AI Safety" overplay their
           | importance and hype their CV more than actually do any
           | relevant work. (Usually also playing the diversity card)
           | 
            | So, no: AI safety is important, but I see it attracting
            | the least helpful and least resourceful people to the
            | area.
        
             | llamaimperative wrote:
             | I think when you're jumping to arguments that resolve to
             | "Ilya Sutskever wasn't doing important work... might've
             | played the diversity card," it's time to reassess your
             | mental model and inspect it closely for motivated
             | reasoning.
        
               | raverbashing wrote:
               | Ilya's case is different. He thought the engineers would
               | win in a dispute with Sam at board level.
               | 
               | That has proven to be a mistake
        
               | llamaimperative wrote:
               | And Jan Leike, one of the progenitors of RLHF?
               | 
               | What about Geoffrey Hinton? Stuart Russell? Dario Amodei?
               | 
               | Also exceptions to your model?
        
               | raverbashing wrote:
               | https://x.com/ylecun/status/1791850158344249803
        
               | llamaimperative wrote:
               | Another person's interpretation of another person's
               | interpretation of another person's interpretation of
               | Jan's actions doesn't even answer the question I asked
               | _as it pertains to Jan,_ never mind the other model
               | violations I listed.
               | 
               | I'm pretty sure if Jan came to believe safety research
               | wasn't needed he would've just said that. Instead he said
               | the actual opposite of that.
               | 
               | Why don't you just answer the question? It's a question
               | about how these datapoints fit into _your_ model.
        
           | killerstorm wrote:
           | I have a theory why people end up with wildly different
           | estimates...
           | 
           | Given the model is probabilistic and does many things in
           | parallel, its output can be understood as a mixture, e.g. 30%
           | trash, 60% rehashed training material, 10% reasoning.
           | 
            | People probe the model in different ways, see different
            | results, and draw different conclusions.
           | 
           | E.g. somebody who assumes AI should have impeccable logic
           | will find "trash" content (e.g. incorrectly retrieved memory)
           | and will declare that the whole AI thing is overhyped
           | bullshit.
           | 
            | Other people might call the model a "stochastic parrot" as
            | they recognize it basically just interpolates between
            | parts of the training material.
           | 
           | Finally, people who want to probe reasoning capabilities
           | might find it among the trash. E.g. people found that LLMs
           | can evaluate non-trivial Python code as long as it sends
           | intermediate results to output:
           | https://x.com/GrantSlatton/status/1600388425651453953
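            | 
            | (For instance, a hypothetical probe in that spirit - ask
            | the model "what does this print?"; the intermediate prints
            | let it carry state forward step by step instead of jumping
            | to the final answer:)
            | 
            |   total = 0
            |   for n in range(1, 6):
            |       total += n * n
            |       print("after", n, "->", total)  # intermediate state
            |   print("final:", total)  # 55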
           | 
           | I interpret "feel the AGI" (Ilya Sutskever slogan, now
           | repeated by Jan Leike) as a focus on these capabilities,
           | rather than on mistakes it makes. E.g. if we go from 0.1%
           | reasoning to 1% reasoning it's a 10x gain in capabilities,
           | while to an outsider it might look like "it's 99% trash".
           | 
            | In any case, I'd rather trust the intuition of people
            | like Ilya Sutskever and Jan Leike. They aren't trying to
            | sell something, and overhyping the tech is not in their
            | interest.
           | 
           | Regarding "missing something really critical", it's obvious
           | that human learning is much more efficient than NN learning.
           | So there's some algorithm people are missing. But is it
           | really required for AGI?
           | 
           | And regarding "It cannot reason" - I've seen LLMs doing
           | rather complex stuff which is almost certainly not in the
           | training set, what is it if not reasoning? It's hard to take
           | "it cannot reason" seriously from people
        
           | seankurtz wrote:
            | Everyone involved in building these things has to have some
            | amount of hubris. It's going to come smashing down on them.
            | What's going unsaid in all of this is just how swiftly the
            | tide has turned against this tech industry attempt to save
            | itself from a downtrend.
           | 
            | The whole industry at this point is acting like the tobacco
            | industry back when it first started getting in hot water.
            | No doubt the prophecies about imminent AGI will one day
            | look to our descendants exactly like filters on cigarettes:
            | a weak attempt to prevent imminent regulation and reduced
            | profitability as governments force an out-of-control
            | industry to deal with the externalities involved in the
            | creation of its products.
            | 
            | If it wasn't abundantly clear... I agree with you that AGI
            | is infinitely far away. It's the damage that's going to be
            | caused by sociopaths (Sam Altman at the top of the list) in
            | attempting to justify the real things they want (money) in
            | their march towards that impossible goal that concerns me.
        
             | freehorse wrote:
              | It is becoming more and more clear that for "Open"AI the
              | whole "AI-safety/alignment" thing has been a PR stunt to
              | attract workers, cover the actual current issues with AI
              | (e.g. stealing data, use for producing cheap junk,
              | hallucinations, and societal impact), and build rapport
              | in the AI scene and in politics. Now that they have a
              | real product and a strong position in AI development,
              | they could not care less about these things. Those who -
              | naively - believed in the "existential risk" PR stunt and
              | were working on that are now discarded.
        
           | iknownthing wrote:
           | This may sound harsh but I think some of these researchers
           | have a sort of god complex. Something like "I am so brilliant
           | and what I have created is so powerful that we MUST think
           | about all the horrible things that my brilliant creation can
           | do". Meanwhile what they have created is just a very
           | impressive next token predictor.
        
             | dmd wrote:
             | "Meanwhile what they have created is just a very impressive
             | speeder-up of a lump of lead."
             | 
             | "Meanwhile what they have created is just a very impressive
             | hot water bottle that turns a crank."
             | 
             | "Meanwhile what they have created is just a very impressive
             | rock where neutrons hit other neutrons."
             | 
             | The point isn't how it works, the point is what it does.
        
               | iknownthing wrote:
               | which is what?
        
               | CamperBob2 wrote:
               | Whatever it is, over the last couple of years it got a
               | lot smarter. Did you?
        
               | iknownthing wrote:
               | Excellent point CamperBob2
        
           | escapecharacter wrote:
            | People's bar for the "I" part varies widely; many set the
            | bar at "can it make stuff up while appearing confident".
            | 
            | Nobody defines what they're trying to do as "useful AI",
            | since that's a much more weaselly target, isn't it?
        
         | ambicapter wrote:
         | Why is extra respect due? That post just says he is leaving,
         | there's no criticism.
        
           | 0xDEAFBEAD wrote:
            | I think you have to either log in to X or use a frontend
            | if you want to read the entire thread. Here's one:
            | 
            | https://nitter.poast.org/janleike/status/1791498174659715494
        
             | ambicapter wrote:
             | Ah, right. Thanks for link.
        
         | 0xDEAFBEAD wrote:
         | At the end of the thread, he says he thinks OpenAI can "ship"
         | the culture changes necessary for safety. That seems kind of
         | implausible to me? So many safety staffers have quit over the
         | past few years. If Jan really thought change was possible, why
         | isn't he still working at OpenAI, trying to make it happen from
         | the inside?
         | 
          | I think it may be time for something like this:
         | https://www.openailetter.org/
        
         | r721 wrote:
         | Discussion of Jan Leike's thread:
         | https://news.ycombinator.com/item?id=40391412 (67 comments)
        
         | KennyBlanken wrote:
         | People very high up in a company / their field are not treated
         | remotely the same as peons.
         | 
          | 1) OpenAI wouldn't want the negative PR of pursuing legal
          | action against someone top in their field; his peers would
          | take note of it and be less willing to work for them.
          | 
          | 2) The stuff he signed was almost certainly different from
          | what rank and file signed, if only because he would have
          | sufficient power to negotiate those contracts.
        
         | KennyBlanken wrote:
         | > Stepping away from this job has been one of the hardest
         | things I have ever done, because we urgently need to figure out
         | how to steer and control AI systems much smarter than us.
         | 
         | Large language models are not "smart". They do not have
         | thought. They don't have intelligence despite the "AI" moniker,
         | etc.
         | 
         | They vomit words based off very fancy statistics.
         | 
         | There is no path from that to "thought" and "intelligence."
        
           | danielbln wrote:
           | Not that I disagree, but what's intelligence? How does our
           | intelligence work? If we don't know that, how can we be so
            | sure what does and what doesn't lead to intelligence? A
            | little more humility is in order before whipping out the
            | tired "LLMs are just stochastic parrots" argument.
        
             | bormaj wrote:
              | Humility has to go both ways then; we can't claim that
              | LLMs are actually (or not actually) AI without qualifying
              | that term first.
        
         | theGnuMe wrote:
         | " OpenAI is shouldering an enormous responsibility on behalf of
         | all of humanity."
         | 
         | Delusional.
        
       | dakial1 wrote:
       | What if I sell my equity? Can I criticize them then?
        
         | apsec112 wrote:
         | ()
        
           | dekhn wrote:
           | Right, but once you sell the shares, OpenAI isn't going to
           | claw back the cash proceeds, is what I think was asked here.
        
           | smeej wrote:
           | Doesn't it end up being a "no disparagement until the company
           | goes public" clause, then? Once you sell the stock, are they
           | going to come after you for the proceeds if you say something
           | mean 20 years later?
        
           | mkl wrote:
           | That's not what that article says, if I'm understanding
           | correctly: "PPUs all have the same value associated with them
           | and, during a tender offer, investors purchase PPUs directly
           | from employees. OpenAI makes offers and values their PPUs
           | based on the most recent price investors have paid to
           | purchase employee PPUs."
        
         | saalweachter wrote:
         | Once there's a liquidity event and the people making you sign
         | this contract can sell, they stop caring what you say.
        
       | photochemsyn wrote:
       | I refused to sign all these secrecy non-disclosure contracts
       | years ago. You know what? It was the right decision. Even though,
       | as a result, my current economic condition is what most would
       | describe as 'disastrous', at least my mind is my own. All your
       | classified BS, it's not so much. Any competent thinker could have
       | figured it out on their own.
       | 
       | Fucking monkeys.
        
         | worik wrote:
         | > You know what? It was the right decision. Even though, as a
         | result, my current economic condition is what most would
         | describe as 'disastrous', at least my mind is my own.
         | 
         | Individualistic
         | 
            | Nobody depends on you, I hope
        
           | serf wrote:
           | you can still provide for your family without signing deals
           | with the devil, it's just harder.
           | 
           | moral stands are never free, but they _are_ freeing.
        
         | istjohn wrote:
         | > In most cases there is no free exercise whatever of the
         | judgment or of the moral sense; but they put themselves on a
         | level with wood and earth and stones; and wooden men can
         | perhaps be manufactured that will serve the purpose as well.
         | Such command no more respect than men of straw or a lump of
         | dirt.[0]
         | 
         | 0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)
        
         | mlhpdx wrote:
         | It's common not to sign them, actually. The people that don't
         | simply aren't talking about it much.
        
       | Melatonic wrote:
       | So much for the "Open" in OpenAI
        
         | a_wild_dandan wrote:
         | We should call them ClopenAI to acknowledge their almost
         | comical level of backstabbing/rug-pulling.
        
       | jameshart wrote:
       | The Basilisk's deal turned out to be far more banal than
       | expected.
        
       | User23 wrote:
       | What is criticism anyhow? Feels like you could black knight this
       | hard with clever phrasing. "The company does a fabulous job
       | keeping its employees loyal regardless of circumstances!" "Yes
       | they have the best and toughest employment lawyers in the
       | business! They do a great job using all available leverage to
       | force favorable outcomes from their human resources!" "I have no
       | regrets working there. Their exit agreement has really improved
       | my work life balance!" "Management never lets externalities get
       | in the way of maximizing shareholder value!"
        
         | singleshot_ wrote:
         | If a contract barred me from providing criticism I would not
         | imagine that I could sidestep it by uttering positive criticism
         | unless my counterparty was illiterate and poor at drafting
         | contracts.
        
       | olliej wrote:
        | As I say over and over again: equity compensation from a non-
        | publicly traded company should not be accepted as a substitute
        | for market-rate cash compensation. If a startup wants to
        | compensate employees via equity, then those employees should
        | have the first right to convert equity to cash in funding
        | rounds or a sale, and their shares must be the same class as
        | any other investor's, because the idea that an "early
        | employee" is not an investor making a much more significant
        | investment than any VC is BS.
        | 
        | I feel that this particular case is just another reminder of
        | that, and would now make me require a preemptive "no equity
        | clawbacks" clause in any contract.
        
         | blackeyeblitzar wrote:
         | Totally agree. For all this to work there needs to also be
         | transparency. Anyone receiving equity should have access to the
         | cap table and terms covering all equity given to investors.
         | Without this, they can be taken advantage of in so many ways.
        
         | DesiLurker wrote:
          | I always say that the biggest swindle in the world is that,
          | in the great 'labor vs capital' fight, capital has convinced
          | labor that its interests are secondary to capital's. This is
          | even truer in the modern fiat, fractional-reserve banking
          | world, where any development is rate-limited by either
          | energy or people.
        
           | DesiLurker wrote:
           | why downvote me instead of actually refuting my point?
        
       | blackeyeblitzar wrote:
       | They are far from the only company to do this but they deserve to
       | be skewered for it. The FTC and NLRB should come down hard on
       | them to make an example. Jail time for executives.
        
       | 31337Logic wrote:
       | This is how you know you're dealing with an evil tyrant.
        
         | downrightmike wrote:
         | And he claims to have made his fortune by just helping people
         | and not expecting anything in return. Well, the reality here is
         | that was a lie.
        
           | api wrote:
           | Anyone who constantly toots their own horn about how
           | altruistic and pure they are should have cadaver dogs led
           | through their house.
        
         | 0xDEAFBEAD wrote:
         | Saw this comment suddenly move way down in the comment
         | rankings. Somehow I only notice this happening on OpenAI
         | threads:
         | 
         | https://news.ycombinator.com/item?id=38342850
         | 
         | My guess would be that YC founders like sama have some sort of
         | special power to slap down comments that they feel are
         | violating HN discussion guidelines.
        
       | nsoonhui wrote:
       | But what's stopping the ex-staffers from criticizing once they
       | sold off the equity?
        
         | EA-3167 wrote:
         | Nothing, these don't seem like legally enforceable contracts in
         | any case. What they do appear to be is a massive admission that
         | this is a hype train which can be derailed by people who know
         | how the sausage is made.
         | 
         | It reeks of a scammer's mentality.
        
         | danielmarkbruce wrote:
         | The threat of a lawsuit.
         | 
         | You can't just sign a contract and then not uphold your end of
         | the bargain after you've got the benefit you want. You'll
         | (rightfully) get sued.
        
       | bradleyjg wrote:
        | For as high-profile an issue as AI is right now, and as
        | prominent as the people recently let go are, I bet they could
        | arrange to be subpoenaed to testify before a congressional
        | subcommittee.
        
       | fragmede wrote:
        | It's time to find a lawyer. I'm not one, but there's an
        | intersection with California SB 331, also known as "The
        | Silenced No More Act". While it is focused more on sexual
        | harassment, it's not limited to that, and these contracts may
        | run afoul of it.
       | 
       | https://silencednomore.org/the-silenced-no-more-act
        
         | j45 wrote:
         | Definitely an interesting way to expand existing legislation vs
         | having a new piece of legislation altogether.
        
           | eru wrote:
           | In practice, that's how a lot of laws are made. ('Laws' in
           | the sense of rules that are actually enforced, not what's
           | written down.)
        
         | nickff wrote:
         | This doesn't seem to fall inside the scope of that act,
         | according to the link you cited:
         | 
         | > _" The Silenced No More Act bans confidentiality provisions
         | in settlement agreements relating to the disclosure of
         | underlying factual information relating to any type of
         | harassment, discrimination or retaliation at work"_
        
           | berniedurfee wrote:
           | Sounds like retaliation to me.
        
             | Filligree wrote:
             | It's not retaliation at work if you're no longer working
             | for them.
        
               | sudosysgen wrote:
               | The retaliation would be for the reaction to the board
               | coup, no?
        
         | staticautomatic wrote:
          | No, it's either a violation of the NLRB rule against
          | severance agreements conditioned on non-disparagement, or a
          | violation of the common-law rule requiring consideration for
          | amendments to service contracts.
        
           | solidasparagus wrote:
           | > NLRB rule against severance agreements conditioned on non-
           | disparagement
           | 
           | Wait that's a thing? Can you give more detail about this/what
           | to look into to learn more?
        
             | throwup238 wrote:
             | https://www.nlrb.gov/news-outreach/news-story/board-rules-
             | th...
             | 
             | It's a recent ruling.
        
               | wahnfrieden wrote:
               | Tech execs are lobbying to dissolve NLRB now btw
               | 
                | They have a lot of supporters here (workers supporting
                | their rulers' interests)
        
       | lopkeny12ko wrote:
        | What a lot of people seem to be missing here is that RSUs are
        | usually double-trigger for private companies. _Vested_ shares
        | are not yours. They are just an entitlement for you to be
        | distributed common stock by the company. You don't own any
        | real stock until those RSUs are released (typically from a
        | liquidity event like an IPO).
       | 
       | Companies can cancel your vested equity for any reason. Read your
       | employment contract carefully. For example, most RSU grants have
       | a 7 year expiration. Even for shares that are vested, regardless
       | of whether you leave the company or not, if 7 years have elapsed
       | since they were granted, they are now worthless.
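        | 
        | (A toy sketch of that double-trigger logic - the function and
        | the 7-year window here are just for illustration, not anyone's
        | actual grant terms:)
        | 
        |   from datetime import date, timedelta
        | 
        |   def rsu_released(grant_date, vested, liquidity_event, today):
        |       # trigger 1: service-based vesting has occurred
        |       # trigger 2: a liquidity event before the grant expires
        |       expired = today > grant_date + timedelta(days=7 * 365)
        |       return vested and liquidity_event and not expired
        | 
        |   # fully vested, but no liquidity event within 7 years:
        |   print(rsu_released(date(2016, 1, 1), True, False,
        |                      date(2024, 1, 1)))  # False - worthless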
        
         | lr4444lr wrote:
          | Yes, they can choose not to renew, and IANAL, but I'm fairly
          | certain there has to be a valid reason to cancel vested
          | equity within the 7-year time frame, i.e. firing for cause.
          | I don't think a right to shares within the period can be
          | capriciously taken away. You have a contract. The terms
          | matter.
        
           | lopkeny12ko wrote:
           | > You have a contract. The terms matter.
           | 
           | Right. In the case of OpenAI, their equity grant contracts
           | likely have a non-disparagement clause that allows them to
           | cancel vested shares. Whether or not you think that is a
           | "valid reason" is largely independent of the legal framework
           | governing RSU release.
        
         | darth_avocado wrote:
         | > if 7 years have elapsed since they were granted, they are now
         | worthless
         | 
         | Once vested, RSUs are the same as regular stock purchased
         | through the market. The company cannot claw them back, nor do
         | they "expire".
        
           | lopkeny12ko wrote:
            | No, this is not true. That's the entire point I'm making.
            | An RSU that is vested, for a private company, is _not_ a
            | share of stock; it's an entitlement to receive a share of
            | stock tied to a liquidity event.
           | 
           | > same as regular stock purchased through the market
           | 
           | You cannot purchase stock of a private company on the open
           | market.
           | 
           | > The company cannot claw them back
           | 
           | The company cannot "claw back" a vested RSU but they can
           | cancel it.
           | 
           | > nor do they "expire".
           | 
           | Yes, they absolutely do expire. Read your employment contract
           | and equity grant agreement carefully.
        
             | danielmarkbruce wrote:
              | It's just a semantic issue. Some folks will say double-
              | trigger RSUs aren't really fully vested until the second
              | trigger event; some will say they are vested but not
              | triggered; other people say similar things.
        
           | jatins wrote:
            | This is incorrect. Private company RSUs often have a
            | double trigger, with the second trigger being an IPO/exit.
            | The "semi-vested" RSUs can expire if the company does not
            | IPO in 7 years.
        
         | onesociety2022 wrote:
          | The 7-year expiry exists so the IRS lets you give RSUs
          | different tax treatment than regular stock. The idea is that
          | because they can expire, they could be worth nothing, and so
          | the IRS cannot expect you to pay taxes on RSUs until the
          | double-trigger event occurs.
         | 
          | But none of this means the company can just cancel your
          | RSUs unless you agreed to them being cancelled for a
          | specific reason in your equity agreement. I have worked at
          | several big pre-IPO
         | companies that had big exits. I made sure there were no
         | clawback clauses in the equity contract before accepting the
         | offers.
        
       | ggm wrote:
       | I am not a lawyer.
        
       | croemer wrote:
       | Link should probably go here instead of X:
       | https://www.vox.com/future-perfect/2024/5/17/24158478/openai...
       | 
       | This is the article that the author talks about on X.
        
       | nextworddev wrote:
        | Unfortunately this is actually pretty common on Wall St, where
        | they leverage your multiple years of clawback-able shares to
        | make you sign non-disparagement clauses.
        
         | lokar wrote:
         | But that is all very clear when you join
        
         | citizen_friend wrote:
         | Sounds like a deal honestly. I'll fast forward a few years of
         | equity to mind my own business. I'm not trying to get into
         | journalism
        
           | nextworddev wrote:
           | Yes, the vast vast majority of finance folks just take the
           | money and be quiet
        
       | atum47 wrote:
        | That's not enforceable, right? I'm not a lawyer, but even I
        | know no contract can strip you of rights granted by the
        | constitution.
        
         | hsdropout wrote:
         | Are you referring to the first amendment? If so, this allows
         | you to speak against the government. It doesn't prevent you
         | from entering optional contracts.
         | 
         | I'm not making any statement about the morality, just that this
         | is not a 1a issue.
        
           | atum47 wrote:
            | I can understand defamation, but it's hard for me to
            | understand disparagement. If I sign one of those contracts
            | with Coca-Cola and later on I publicly announce that a can
            | of Coca-Cola contains too much sugar, am I in breach of
            | contract?
        
         | staticman2 wrote:
         | If the constitution protected you from this sort of thing then
         | there'd be no such thing as "trade secret" laws.
        
         | smabie wrote:
         | Non disparagement clauses are in so so many different
         | employment contracts. It's pretty clear you're not a lawyer
         | though.
        
           | atum47 wrote:
            | It is also clear that you can read, since I wrote it.
        
       | jay-barronville wrote:
       | It probably would be better to switch the link from the X post to
       | the Vox article [0].
       | 
       | From the article:
       | 
       | """
       | 
       | It turns out there's a very clear reason for [why no one who had
       | once worked at OpenAI was talking]. I have seen the extremely
       | restrictive off-boarding agreement that contains nondisclosure
       | and non-disparagement provisions former OpenAI employees are
       | subject to. It forbids them, for the rest of their lives, from
       | criticizing their former employer. Even acknowledging that the
       | NDA exists is a violation of it.
       | 
       | If a departing employee declines to sign the document, or if they
       | violate it, they can lose all vested equity they earned during
       | their time at the company, which is likely worth millions of
       | dollars. One former employee, Daniel Kokotajlo, who posted that
       | he quit OpenAI "due to losing confidence that it would behave
       | responsibly around the time of AGI," has confirmed publicly that
       | he had to surrender what would have likely turned out to be a
       | huge sum of money in order to quit without signing the document.
       | 
       | """
       | 
       | [0]: https://www.vox.com/future-
       | perfect/2024/5/17/24158478/openai...
        
         | jbernsteiniv wrote:
          | He gets my respect for that one: publicly acknowledging
          | both why he was leaving and their pantomime. I don't know
          | how much the equity would be for each employee (the article
          | suggests millions, but that may skew by role), and I don't
          | know if I would just be like the rest, keeping my lips
          | tight for fear of the equity forfeiture.
          | 
          | It takes a man of real principle to stand up against that
          | and tell them to keep their money if he can't speak ill of
          | a potentially toxic work environment.
        
           | romwell wrote:
           | >It takes a man of real principle to stand up against that
           | and tell them to keep their money if they can't speak ill of
           | a potentially toxic work environment.
           | 
            | Incidentally, that's what Grigory Perelman, the
            | mathematician who rejected the Fields Medal and the $1M
            | prize that came with it, did.
            | 
            | It wasn't a matter of an NDA either; it was a move to make
            | his message heard (TL;DR: the "publish or perish" rat race
            | that academia has become is antithetical to good science).
            | 
            | He was (and still is) widely misunderstood in that move,
            | but I hope people can see it more clearly now.
            | 
            | The enshittification processes of academic and corporate
            | structures are not entirely dissimilar, after all, as
            | money is at the core of corrupting either.
        
             | edanm wrote:
             | I think, when making a gesture, you need to consider its
             | practical impact, which includes whether and how it will be
             | understood (or not).
             | 
             | In the OpenAI case, the gesture of "forgoing millions of
             | dollars" directly makes you able to do something you
             | couldn't - speak about OpenAI publicly. In the Grigory
             | Perelman case, obviously the message was far less clear to
             | most people (I personally have heard of him turning down
             | the money before and know the broad strokes of his story,
             | but had no idea that _that_ was the reason).
        
               | romwell wrote:
               | Consider this:
               | 
               | 1. If he didn't turn down the money, you wouldn't have
               | heard of him at all;
               | 
                | 2. You're not the intended audience of Grigory's
                | message, nor are you in a position to influence,
                | change, or address the problems he was highlighting.
                | The people who are in that position heard the message
                | loud and clear.
               | 
                | 3. On a very basic level, it's very easy to understand
                | that there's gotta be something wrong with the award if
                | a deserving recipient turns it down. _What_ exactly is
                | wrong is left as an exercise to the reader -- as you'd
                | expect of a mathematician like Perelman.
               | 
               | Quote (from [1]):
               | 
                |  _From the few public statements made by Perelman and
                | close colleagues, it seems he had become disillusioned
                | with the entire field of mathematics. He was the purest
                | of the purists, consumed with his love for mathematics,
                | and completely uninterested in academic politics, with
                | its relentless jockeying for position and squabbling
                | over credit. He denounced most of his colleagues as
                | conformists. When he opted to quit professional
                | mathematics altogether, he offered this confusing
                | rationale:_ "As long as I was not conspicuous, I had a
                | choice. Either to make some ugly thing or, if I didn't
                | do this kind of thing, to be treated as a pet. Now when
                | I become a very conspicuous person, I cannot stay a pet
                | and say nothing. That is why I had to quit."
               | 
               | This explanation is confusing only to someone who has
               | never tried to get a tenured position in academia.
               | 
               | Perelman was one of the few people to not only give the
               | finger to the soul-crushing, dehumanizing system, but to
               | also call it out in a way that stung.
               | 
                | He wasn't the only one; but the only _other_ person I
                | can think of is Alexander Grothendieck [2], who went as
                | far as declaring that publishing _any_ of his work
                | would be against his will.
               | 
               | Incidentally, both are of Russian-Jewish origin/roots,
               | and almost certainly autistic.
               | 
               | I find their views very understandable and relatable, but
               | then again, I'm also an autistic Jew from Odessa with a
               | math PhD who left academia (the list of similarities ends
               | there, sadly).
               | 
               | [1] https://nautil.us/purest-of-the-purists-the-puzzling-
               | case-of...
               | 
               | [2] https://en.wikipedia.org/wiki/Alexander_Grothendieck
        
               | edanm wrote:
               | > 1. If he didn't turn down the money, you wouldn't have
               | heard of him at all;
               | 
               | I think this is probably not true.
               | 
               | > 2. You're not the intended audience of Grigory's
               | message, nor are you in position to influence, change, or
               | address the problems he was highlighting. The people who
               | are heard the message loud and clear.
               | 
               | This is a great point and you're probably right.
               | 
               | > I'm also an autistic Jew from Odessa with a math PhD
               | who left academia (the list of similarities ends there,
               | sadly).
               | 
               | Really? What do you do nowadays?
               | 
               | (I glanced at your bio and website and you seem to be
               | doing interesting things, I've also dabbled in
               | Computational Geometry and 3d printing.)
        
               | SJC_Hacker wrote:
               | > 1. If he didn't turn down the money, you wouldn't have
               | heard of him at all;
               | 
                | Perelman provided a proof of the Poincare Conjecture,
                | which had stumped mathematicians for a century.
                | 
                | It was also one of the seven Millennium Prize Problems
                | (https://www.claymath.org/millennium-problems/) and, as
                | of 2024, the only one to be solved.
                | 
                | Andrew Wiles became pretty well known after proving
                | Fermat's Last Theorem, despite there not being a
                | financial reward.
        
               | juped wrote:
               | Perelman's point is absolutely clear if you listen to
               | him, he's disgusted by the way credit is apportioned in
               | mathematics, doesn't think his contribution is any
               | greater just because it was the last one, and wants no
               | part of the prize he considers tainted.
        
         | dang wrote:
         | (Parent comment was posted to
         | https://news.ycombinator.com/item?id=40394778 before we merged
         | that thread hither.)
        
           | jay-barronville wrote:
           | Thank you, @dang! On top of things, as usual.
        
         | calibas wrote:
         | > It forbids them, for the rest of their lives, from
         | criticizing their former employer.
         | 
         | This is the kind of thing a cult demands of its followers, or
         | an authoritarian government demands of its citizens. I don't
         | know why people would think it's okay for a business to demand
         | this from its employees.
        
         | seanmcdirmid wrote:
          | When YCR HARC folded, Sam had everyone sign a nondisclosure,
          | anti-disparagement NDA to keep their computer. I thought it
          | was odd, and the only reason I can even say this is that I
          | bought the iMac I was using before the option became
          | available. Still, I had nothing bad to disclose, so it would
          | have saved me some money.
        
         | gmd63 wrote:
         | Yet another ding against the "Open" character of the company.
        
         | snowfield wrote:
          | They are also directly incentivized not to talk shit about a
          | company they hold a lot of stock in.
        
         | bitcharmer wrote:
          | So much for the "open" in OpenAI. I have no idea why HN
          | jerks off to Altman. He's just another greedy exec incapable
          | of seeing past his shareholder-value fetish.
        
         | watwut wrote:
         | > Even acknowledging that the NDA exists is a violation of it.
         | 
         | This should not be legal.
        
           | Tao3300 wrote:
           | It doesn't even make logical sense. If someone asks you about
           | the NDA what are you supposed to say? "I can neither confirm
           | nor deny the existence of said NDA" is pretty much
           | confirmation of the NDA!
        
         | avereveard wrote:
          | Even if the NDA were not a thing, revealing a past
          | company's trade secrets publicly would render any of them
          | unemployable.
        
         | jakderrida wrote:
         | >>contains nondisclosure and non-disparagement provisions
         | former OpenAI employees are subject to. It forbids them, for
         | the rest of their lives, from criticizing their former
         | employer. Even acknowledging that the NDA exists is a violation
         | of it.
         | 
          | Perfect! So it's so incredibly overreaching that any judge
          | in California would deem the entire NDA unenforceable.
         | 
         | Either that or, in your effort to overstate a point, you
         | exaggerated in a way that undermines the point you were trying
         | to make.
        
           | SpicyLemonZest wrote:
           | Lots of companies try and impose things on their employees
           | which a judge would obviously rule to be unlawful. Sometimes
           | they just don't think through it carefully; other times, it's
           | a calculated decision that few employees will care enough to
           | actually get the issue in front of a judge in the first
           | place. Especially relevant for something like a non
           | disclosure agreement, where no judge is likely to have the
           | opportunity to declare it unenforceable unless the company
           | tries to enforce it on someone who fights back.
        
           | 77pt77 wrote:
           | Maybe it's unenforceable, but they can make it very expensive
           | for anyone to find out in more ways than one.
        
         | mc32 wrote:
          | Then lower-level employees who don't have as much at stake
          | could open up. Formers who have much larger stakes could
          | compensate these lower-level formers for forgoing any
          | upside. Now, sure, maybe they don't have the same inside
          | information, but you can bet there's lots of scuttlebutt to
          | go around.
        
         | YeBanKo wrote:
          | They can't lose their already-vested options for refusing to
          | sign an NDA upon departure. Maybe they are offered
          | additional grants or expedited vesting of the remaining
          | options.
        
       | jgalt212 wrote:
        | I really don't get how lawyers can knowingly put unenforceable
        | crap, for lack of a better word, in contracts. It's like, why
        | did you even go to law school?
        
       | andrewstuart wrote:
       | I would like people to sign a lifetime contract to not criticize
       | me.
        
       | ecjhdnc2025 wrote:
       | Totally normal, nothing to see here.
       | 
       | Keep building your disruptive, game-changing, YC-applicant
       | startup on the APIs of this sociopathic corporation whose
       | products are destined to destroy all trust humans have in other
       | humans so that everyone can be replaced by chatbots.
       | 
       | It's all fine. Everything's fine.
        
         | jay-barronville wrote:
         | You don't think the claim that "everyone can be replaced by
         | chatbots" is a bit outrageous?
         | 
         | Do you really believe this or is it just hyperbole?
        
           | ecjhdnc2025 wrote:
           | Almost every part of the story that has made OpenAI a
           | dystopian unicorn is hyperbole. And now this -- a company
           | whose employees can't tell the truth or they lose access to
           | remuneration. Everyone's Allen Weisselberg.
           | 
           | What's one more hyperbole?
           | 
           | Edit to add, provocatively but not sarcastically: next time
           | you hear some AI-proponent-who-used-to-be-a-crypto-proponent
           | roll out the "but aren't we all just LLMs, in essence?"
           | justification for their belief that ChatGPT may have broad
           | understanding, ask yourself: are they not just self-soothing
           | over their part in mass job losses with a nice faux-
           | scientific-inevitability bedtime story?
        
       | tonyhart7 wrote:
       | "Even acknowledging that the NDA exists is a violation of it."
       | now its not so much more open anymore right
        
         | ecjhdnc2025 wrote:
         | The scriptwriters are in such a hurry -- even they know this
         | show isn't getting renewed.
        
       | throwaway5959 wrote:
       | Definitely the stable geniuses I want building AGI.
        
       | Barrin92 wrote:
        | We're apparently at the Scientology stage of the AI hype
        | cycle. One funny observation: if you ostensibly believe that
        | you're about to invent the AGI godhead who will render the
        | economic system obsolete in < ~5 years or so, how do stock-
        | clawback no-criticism lawsuits fit into that kind of
        | worldview?
        
         | mavbo wrote:
         | AGI led utopia will be pretty easy if we're all under
         | contractual obligation to not criticize any aspect of it, lest
         | we be banished back to "work"
        
       | swat535 wrote:
        | I mean, why anyone would be surprised about this is beyond me.
        | 
        | I know many people on this site will not like what I am about
        | to write, as Sam is worshiped, but let's face it: the head of
        | this company is a master scammer who will do everything under
        | the sun and the moon to earn a buck, including, if necessary,
        | destroying himself along with his entire fortune in his quest
        | to make sure other people don't get a dime.
        | 
        | So far he has done it all: attempted regulatory capture, a
        | hostile takeover as CEO, throwing out all the other top
        | engineers and partners, and ensuring the company remains
        | closed despite its "open" name.
        | 
        | Now he is simply tying up all the loose ends and ensuring his
        | employees remain loyal and are kept on a tight leash. It's a
        | brilliant strategy, preventing any insider from blowing the
        | whistle should OpenAI ever decide to do anything questionable,
        | such as selling AI capabilities to hostile governments.
        | 
        | I simply hope that open source wins this battle so that we are
        | not all completely reliant on OpenAI for the future, despite
        | Sam's attempts.
        
         | jeltz wrote:
          | Since I do not follow OpenAI or Y Combinator, I first
          | learned that he was a scammer when he released his
          | cryptocurrency. But I am surprised that so many did not
          | catch on to it then. It is not like he has really tried to
          | hide that he is a grifter.
        
       | modeless wrote:
       | A lot of the brouhaha about OpenAI is silly, I think. But this is
       | gross. Forcing employees to sign a perpetual non-disparagement
       | agreement under threat of clawing back the large majority of
       | their already earned compensation should not be legal. Honestly
       | it probably isn't, but it'll take someone brave enough to sue to
       | find out.
        
         | twobitshifter wrote:
         | If I have equity in a company and I care about its value, I'm
         | not going to say anything to tank its value. If I sell my
         | equity later on, and then disparage the company, what can
         | OpenAI hope to do to me?
        
           | chefandy wrote:
           | > If I sell my equity later on, and then disparage the
           | company, what can OpenAI hope to do to me?
           | 
           | Well, that would obviously depend on the terms of the
           | contract, but I would be astonished if the people who wrote
           | it didn't consider that possibility. It's pretty trivial to
           | calculate the monetary value of equity, and if they feel
           | entitled to that equity, they surely feel entitled to its
           | cash equivalent.
        
           | modeless wrote:
           | They can sue you into bankruptcy, obviously.
           | 
           | Also, what if you can't sell? Selling is at their discretion.
           | They can prevent you from selling some of your so-called
           | "equity" to keep you on their leash as long as they want.
        
             | LtWorf wrote:
             | If you can't sell, it's worthless anyway.
        
               | ajross wrote:
               | Liquidity and value are different things. If someone
               | offered you 1% of OpenAI, would you take it? Duh.
               | 
               | But it's a private venture and not a public company, and
               | you "can't sell" that holding on a market, only via
               | complicated schemes that have to be authorized by the
               | board. But you'd take it anyway in the expectation that
               | it would be liquid someday. The employees are in the same
               | position.
        
             | bambax wrote:
              | > _They can prevent you from selling some of your
              | so-called "equity"_
              | 
              | But how much do you need? Sell half, forgo the rest, and
              | you'll be fine.
        
               | modeless wrote:
               | Not a lot of people out there willing to drop half of
               | their net worth on the floor on principle. And then sign
               | up for years of high profile lawsuits and character
               | assassination.
        
             | twobitshifter wrote:
              | That's a good point, if you can get the equity liquid. I
              | don't think the lawsuit would go far or end in
              | bankruptcy. In this case, the truth of what happened at
              | OpenAI would be revealed even more in a trial, which is
              | not something they'd like, and this type of contract
              | with lifetime provisions isn't likely to be enforced by
              | a court IMO - especially when the information revealed
              | is truthful and in the public's interest.
        
           | cdchn wrote:
           | From what other people have commented, you don't get equity.
           | You get a profit sharing plan. You're chained to them for
           | life. There is no divestiture.
        
             | pizzafeelsright wrote:
             | Well, then, people are selling their souls.
             | 
             | I got laid off by a different company and can't disparage
             | them. I can tell the truth. I'm not signing anything that
             | requires me to lie.
        
               | cdchn wrote:
               | Just playing the devils advocate here, but what if you're
               | not lying.. what if you're just keeping your mouth shut,
               | for millions, maybe tens of millions?
               | 
               | Wish I could say I would have been that strong. Many
               | would not disparage a company they hold equity in, unless
               | they went full baby genocide.
        
             | nsoonhui wrote:
              | Here's something I just don't understand: I have a
              | profit-sharing plan *for life*, and yet I want to
              | publicly trash it so that the benefits I can derive from
              | it are reduced, all in the name of some form of... what,
              | social service?
        
               | ivalm wrote:
               | Yeah, people do things financially not optimal for the
               | sake of ethics. That's a key part of living in a society.
               | That's part of why we don't just murder each other.
        
           | citizen_friend wrote:
           | Clout > money
        
         | listenallyall wrote:
          | It's very possible someone has already threatened to sue,
          | and either had their equity restored or received a large
          | payout. But they probably had to sign an NDA about that in
          | order to receive it. End result: every future person thinks
          | they are the first to challenge the legality of the
          | contract, and few actually try.
        
           | monktastic1 wrote:
           | Man, sounds like NDAs all the way down.
        
         | insane_dreamer wrote:
          | Lawsuits are tedious, expensive, drawn-out affairs; many
          | people would rather just move on than initiate one.
        
       | ecjhdnc2025 wrote:
       | It shouldn't be legal and maybe it isn't, but all schemes like
       | this are, when you get down to it, ultimately about suppressing
       | potential or actual evidence of serious, possibly criminal
       | misconduct, so I don't think they are going to let the illegality
       | get them all upset while they are having fun.
        
         | sneak wrote:
         | What crimes do you think have occurred here?
        
           | ecjhdnc2025 wrote:
           | An answer in the form of a question: why don't OpenAI
           | executives want to talk about whether Sora was trained on
           | Youtube content?
           | 
           | (I should reiterate that I actually wrote "serious, possibly
           | criminal")
        
             | KeplerBoy wrote:
             | Because of course it was trained on YouTube data, but
             | they gain nothing from admitting that openly.
        
               | ezconnect wrote:
               | They would face a lot of lawsuits if they admitted they
               | trained on the YouTube dataset, because not everyone
               | gave consent.
        
               | MOARDONGZPLZ wrote:
               | Consent isn't legally required. An admission, however,
               | would upset a lot of extremely online people. Seems
               | lose-lose.
        
               | ecjhdnc2025 wrote:
               | "Consent isn't legally required"?
               | 
               | I don't understand this point. If Google gave the data to
               | OpenAI (which they surely haven't, right?), even then
               | they'd not have consent from users.
               | 
               | As far as I understand it, it's not a given that there
               | is no copyright infringement here. I don't think even
               | _criminal_ copyright infringement is off the table,
               | because it's clearly for profit and clearly willful
               | under 17 U.S.C. 506(a).
               | 
               | And once you consider the difficult potential position
               | here -- that the liabilities from Sora might be worse
               | than the liabilities from ChatGPT -- there's all sorts of
               | potential for bad behaviour at a corporate level, from
               | misrepresentations regarding business commitments to
               | misrepresentations on a legal level.
        
           | mindcandy wrote:
           | I'm no lawyer. But, this sure smells like some form of fraud.
           | Or, at least breach of contract.
           | 
           | Employees and employer enter into an agreement: Work here for
           | X term and you get Y options with Z terms attached. OK.
           | 
           | But then later, pulling a Darth Vader... "Now that the deal
           | is completing, I am altering the deal. Consent and it's bad
           | for you this way. Don't consent and it's bad that way.
           | Either way, you held up your end of our agreement and I'm
           | not."
        
             | edanm wrote:
             | I have no inside info on this, but I doubt this is what is
             | happening. They could just say no and not sign a new
             | contract.
             | 
             | I assume this was something agreed to before they started
             | working.
        
           | tcmart14 wrote:
           | They don't say that criminal activity has occurred in this
           | instance, just that this kind of agreement could be used to
           | cover it up in situations where it has. Here's an example
           | that could plausibly be true: with everything going on at
           | Boeing right now, it seems plausible they are covering up
           | something criminal or incredibly damaging, like falsified
           | inspections and maintenance records. Say a person at Boeing
           | who gets equity as part of their compensation decides to
           | leave, and at some point in the future decides to speak out
           | at a congressional investigation about what they know.
           | Should that person be sued into oblivion by Boeing? Or
           | should Boeing, assuming the situation above is true, just
           | have to eat the cost/consequences of being shitty?
        
           | stale2002 wrote:
           | Right now, there is some publicity on Twitter regarding
           | AGI/OpenAI/EA LSD cnc parties (consent non consent/simulated
           | rape parties).
           | 
           | So maybe it's related to that.
           | 
           | https://twitter.com/soniajoseph_/status/1791604177581310234
        
             | MacsHeadroom wrote:
             | The ones going to orgies are the effective altruists /
             | safety researchers who are leaving and not signing the non-
             | disparagement agreement.
             | https://x.com/youraimarketer/status/1791616629912051968
             | 
             | Anyway, it's about not disparaging the company, not about
             | disclosing what employees do in their free time. Orgies
             | are just parties, and LSD use is hardly taboo.
        
               | stale2002 wrote:
               | > Orgies are just parties
               | 
               | Well apparently not if there are women who are saying
               | that the scene and community that all these people are
               | involved in is making women uncomfortable or causing them
               | to be harassed or pressured into bad situations.
               | 
               | A situation can be bad, done informally by people within
               | a community, even if it isn't happening literally inside
               | the corporate headquarters, or isn't directly the
               | responsibility of one specific company that can be
               | pointed at.
               | 
               | Especially if it is a close-knit group of people who are
               | living together, working together, and involved in the
               | same out-of-work organizations and nonprofits.
               | 
               | You can read what Sonia says herself.
               | 
               | https://x.com/soniajoseph_/status/1791604177581310234
               | 
               | > The ones going to orgies are the effective altruists /
               | safety researchers who are leaving and not signing the
               | non-disparagement agreement.
               | 
               | Indeed, I am sure that the people who are comfortable
               | with the behavior or situation have no need to be
               | pressured into silence.
        
       | doubloon wrote:
       | deleting my OpenAI account.
        
       | Buttons840 wrote:
       | So part of their compensation for working is equity, and when
       | they leave they have to sign an additional agreement in order
       | to keep their previously earned compensation? How is this
       | legal? Might as well tell them they have to give all their
       | money back
       | too.
       | 
       | What's the consideration for this contract?
        
         | fshbbdssbbgdd wrote:
         | In the past, a lot of options would expire if you didn't
         | exercise them within, e.g., 90 days of leaving. And
         | exercising could be really expensive.
         | 
         | Speculation: maybe the options they earn when they work there
         | have some provision like this. In return for the NDA the
         | options get extended.
        
           | NewJazz wrote:
           | Options aren't vested equity though.
        
             | PNewling wrote:
             | ... They definitely can be. When I worked for a small
             | biotech company all of my options had a tiered vesting
             | schedule.
        
               | NewJazz wrote:
               | They aren't equity no matter what though?
               | 
               | They can be vested, I realize that.
        
               | _heimdall wrote:
               | Options aren't equity, they're only the option to buy
               | equity at a specified price. Vesting just means you can
               | actually buy the shares at the set strike price.
               | 
               | For example, you may join a company and be given options
               | to buy 10,000 shares at $5 each with a 2-year vesting
               | schedule. They may begin vesting immediately, meaning
               | 1/24th of the total options vests each month (about 417
               | shares). It's also common to have a cliff up front,
               | where no options vest until you've been with the
               | company for, say, 6 or 12 months.
               | 
               | Until an option vests you don't own anything. Once it
               | vests, you still have to buy the shares by exercising
               | the option at the $5-per-share price. When you leave,
               | most companies give you a deadline on the scale of a
               | few months to either exercise all vested options or
               | forfeit them.
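               | 
               | To make the mechanics concrete, here's a minimal Python
               | sketch of that kind of schedule (hypothetical numbers
               | from the example above, with a 12-month cliff added for
               | illustration; not any particular company's terms):
               | 
               |   def vested(months, total=10_000,
               |              vest_months=24, cliff=12):
               |       """Options vested after `months` on the job."""
               |       if months < cliff:
               |           return 0         # nothing before the cliff
               |       if months >= vest_months:
               |           return total     # fully vested
               |       # linear vesting; cliff months catch up at once
               |       return total * months // vest_months
               | 
               |   assert vested(11) == 0      # under the cliff
               |   assert vested(12) == 5_000  # 12/24 vests at once
               |   assert vested(18) == 7_500
               |   # exercising 7,500 at a $5 strike costs $37,500 of
               |   # real money for shares that may never be sellable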
        
               | teaearlgraycold wrote:
               | > buy all vested shares
               | 
               | The last time I did this I didn't have to buy _all_ of
               | the shares.
        
               | lazyasciiart wrote:
               | I think they mean that you had to buy all the ones you
               | wanted to keep.
        
               | ergocoder wrote:
               | That is tautological... You buy what you want to own???
        
               | StackRanker3000 wrote:
               | The point being made is that it isn't all or nothing, you
               | can buy half the vested options and forfeit the rest,
               | should you want to.
        
               | Hnrobert42 wrote:
               | Wait, wait. Who is on first?
        
               | d4704 wrote:
               | We'd usually point people here to get a better overview
               | of how options work:
               | 
               | https://carta.com/learn/equity/stock-options/
        
               | Taniwha wrote:
               | There can be advantages to not exercising: exercising
               | causes a taxable event (the IRS will want a cut of the
               | difference between your exercise price and the current
               | valuation), and it requires you to commit real money to
               | buy shares that may never be worth anything.
               | 
               | And there are advantages to exercising: many (most?)
               | companies take back unexercised options a few
               | weeks/months after you leave, and exercising starts the
               | capital gains clock, so you can end up paying a lower
               | CGT rate when you eventually sell.
               | 
               | You need to understand all this stuff before you make
               | the choice that's right for you.
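               | 
               | As a rough illustration of that tradeoff (made-up
               | numbers, NSO-style ordinary-income treatment; not tax
               | advice), here's a Python sketch of what exercising can
               | cost up front:
               | 
               |   shares = 7_500   # vested options exercised
               |   strike = 5.00    # exercise price per share
               |   fmv    = 40.00   # current valuation per share
               | 
               |   cost   = shares * strike          # cash out the door
               |   spread = shares * (fmv - strike)  # taxed as income
               |   print(f"exercise cost:  ${cost:,.0f}")    # $37,500
               |   print(f"taxable spread: ${spread:,.0f}")  # $262,500
               |   # tax on the spread is due now, in cash, even though
               |   # the shares themselves may be illiquid or worthless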
        
               | theGnuMe wrote:
               | Options can vest, and so can stock grants.
        
               | _heimdall wrote:
               | Unless I'm mistaken, the difference is that grants vest
               | into actual shares while options only vest into the
               | opportunity to _buy_ the shares at a set price.
               | 
               | Part of my hiring bonus when joining one of the big tech
               | companies was stock grants. As they vested I owned the
               | shares directly and could sell them as soon as they
               | vested if I wanted to.
               | 
               | I also joined a couple of startups later in my career
               | and was given options as a hiring incentive. I never
               | exercised the vested options, so I never owned them at
               | all, and I lost the options 30-90 days after leaving
               | the company. Grants I'd have taken with me without
               | having to pay for them; they would have been my shares
               | directly.
               | 
               | Well, they'd actually be shares owned by a clearing
               | house and promised to me, _but_ that's a very different
               | rabbit hole.
        
               | throwaway2037 wrote:
               | > Well, they'd actually be shares owned by a clearing
               | house and promised to me but that's a very different
               | rabbit hole.
               | 
               | You still own the shares, not the clearing house. They
               | hold them on your behalf.
        
               | _heimdall wrote:
               | Looks like I used the wrong term there, sorry. I was
               | referring to Cede & Co, and in the moment assumed they
               | could be considered a clearing house. It is technically
               | called a certificate depository, sorry for the confusion
               | there.
               | 
               | Cede & Co technically owns most of the stock certificates
               | today [1]. If I buy a share of stock I end up actually
               | owning an IOU for a stock certificate.
               | 
               | You can actually confirm this yourself if you own any
               | stock. Call the broker that manages your account and ask
               | whose name is on the stock certificate. It definitely
               | isn't your name. You'll likely get confused or unclear
               | answers, but if you're persistent enough you will indeed
               | find that the certificate is almost certainly in the name
               | of Cede & Co and there is no certificate in your name,
               | likely no share identifier assigned to you either. You
               | just own the promise to a share, which ultimately isn't a
               | problem unless something massive breaks (at which point
               | we have problems anyway).
               | 
               | [1] https://en.m.wikipedia.org/wiki/Cede_and_Company
        
               | SJC_Hacker wrote:
               | > They hold them on your behalf.
               | 
               | Possession is 90% of ownership
        
               | NortySpock wrote:
               | Banks and trading houses are kind of the exception in
               | that regard. I pay my bank monthly for my mortgage, and
               | thus I live in a house that the bank could repossess if
               | they so choose.
        
               | _heimdall wrote:
               | The phrase really should be about force rather than
               | possession. Possession only really makes a difference
               | when there's no power imbalance.
               | 
               | Banks have the legal authority to take the home I possess
               | if I don't meet the terms of our contract. Hell, I may
               | own my property outright but the government can still
               | claim eminent domain and take it from me anyway.
               | 
               | Among equals, possession may matter. When one side can
               | force you to comply, possession really is only a sign
               | that the one with power is currently letting you keep it.
        
               | balderdash wrote:
               | You are the beneficial owner, but the broker is the
               | titled owner, acting as custodian on your behalf
        
               | quickthrowman wrote:
               | Re-read the post you're replying to. They said options
               | are not _vested equity_, which they aren't. You still
               | need to exercise an option that has vested to purchase
               | the equity shares.
               | 
               | They did not say "options cannot get granted on a tiered
               | vesting schedule", probably because that isn't true, as
               | options can be granted with a tiered vesting schedule.
        
           | brudgers wrote:
             | My unreliable memory is that Altman was (once?) in favor
             | of extending the period for exercising options. I could
             | be wrong of course, but it is consistent with my
             | impression that making other people rich is among his
             | motivations. Not the only one, of course. But again, I
             | could be wrong.
        
             | resonious wrote:
             | Wouldn't be too surprised if he changed his mind since
             | then. He is in a very different position now!
        
               | brudgers wrote:
               | Unless a PTEP (Post Termination Exercise Period) beyond
               | the ordinary three months was on offer, there probably
               | wouldn't be a story, because the kind of people OpenAI
               | hires would tend to be averse to working at a place
               | with a PTEP of less than three months.
               | 
               | Or not, I could be wrong.
        
         | eru wrote:
         | > What's the consideration for this contract?
         | 
         | Consideration is almost meaningless as an obstacle here. They
         | can give the other party a peppercorn, and that would be enough
         | to count as consideration.
         | 
         | https://en.wikipedia.org/wiki/Peppercorn_(law)
         | 
         | There might be other legal challenges here, but 'consideration'
         | is unlikely to be one of them. Unless OpenAI has idiots for
         | lawyers.
        
           | verve_rat wrote:
           | Right, but the employee would be able to refuse the
           | consideration, and thus the contract, and the state of
           | affairs wouldn't change. They would be free to say whatever
           | they wanted.
        
             | eru wrote:
             | Maybe. But whether the employee can refuse the gag has
             | nothing to do at all with the legal doctrine that requires
             | consideration.
        
             | kmeisthax wrote:
             | If they refuse the contract then they lose out on their
             | options vesting. Basically, OpenAI's contracts work like
             | this:
             | 
             | Employment Contract the First:
             | 
             | We are paying you (WAGE) for your labor. In addition you
             | also will be paid (OPTIONS) that, after a vesting period,
             | will pay you a lot of money. If you terminate this
             | employment your options are null and void unless you sign
             | Employment Contract the Second.
             | 
             | Employment Contract the Second:
             | 
             | You agree to shut the fuck up about everything you saw at
             | OpenAI until the end of time and we agree to pay out your
             | options.
             | 
             | Both of these have consideration and as far as I'm aware
             | there's nothing in contract law that requires contracts to
             | be completely self-contained and immutable. If two parties
             | agree to change the deal, then the deal can change. The
             | problem is that OpenAI's agreements are specifically
             | designed to put one counterparty at a disadvantage so that
             | they _have_ to sign the second agreement later.
             | 
             | There _is_ an escape valve in contract law for "nobody
             | would sign this" kinds of clauses, but I'm not sure how
             | you'd use it. The legal term of art you would allege is
             | that the second contract is "unconscionable". But the
             | standard of what counts as unconscionable in contract
             | law is _extremely high_, because otherwise people would
             | wriggle out of contracts the moment that what seemed like
             | favorable terms turned unfavorable. Contract law doesn't
             | care if the deal is fair (that's the FTC's job), it cares
             | about whether or not the deal was agreed to.
        
               | godelski wrote:
               | > There is an escape valve in contract law for "nobody
               | would sign this" kinds of clauses
               | 
               | Who would sign a contract to willfully give away their
               | options?
        
               | d1sxeyes wrote:
               | The same sort of person who would sign a contract
               | agreeing that in order to take advantage of their
               | options, they need to sign a contract with unclear terms
               | at some point in the future if they leave the company.
               | 
               | Bear in mind there are actually three options: signing
               | the second contract, not signing it, or remaining an
               | employee.
        
               | hmottestad wrote:
               | Say you had worked at Reddit for quite a number of
               | years, all your original options had vested, and you
               | had exercised them. Since Reddit went public, you would
               | now easily be able to sell your stock, or keep it if
               | you want. So you wouldn't need to sign the second
               | contract - unless, of course, you had gotten new
               | options that hadn't vested yet.
        
               | p1esk wrote:
               | My understanding is that as soon as you exercise your
               | options you own the shares, and the company can't take
               | them from you.
               | 
               | Can anyone confirm this?
        
               | pas wrote:
               | is it even a valid contract clause to tie the value of
               | something to a future completely unknown agreement? (or
               | yes, it's valid, and it means that savvy folks should
               | treat it as zero.)
               | 
               | (though most likely the NDA and everything is there from
               | day 1 and there's no second contract, no?)
        
           | staticautomatic wrote:
           | Ok but peppercorn or not, what's the consideration?
        
             | kmeisthax wrote:
             | "I'll pay you a dollar to shut up"
             | 
             | "Deal"
        
             | PeterisP wrote:
             | Getting a certain amount (according to their vesting
             | schedule) of stock options, which are worth a substantial
             | amount of money and thus clearly constitute "good and
             | valuable consideration".
        
               | hmottestad wrote:
               | The original stock and vesting agreement that was part of
               | their original compensation probably says that you have
               | to be currently employed by OpenAI for the vesting
               | schedule to apply. So in that case the consideration of
               | this new agreement is that they get to keep their vesting
               | schedule running even though they are no longer
               | employees.
        
               | pas wrote:
               | but can they simply leave with the already vested
               | options/stock? are there clawback provisions in the
               | initial contract?
        
               | nightpool wrote:
               | That's the case in many common/similar agreements, but
               | the OpenAI agreement is different because it's
               | specifically clawing back _already vested_ equity. In
               | this case, I think the consideration would be the company
               | allowing transfer of the shares / allowing participation
               | in buyback events. Otherwise until the company goes
               | public there's no way for the employees to cash out
               | without consent of the company.
        
         | throwaway598 wrote:
         | That OpenAI is institutionally unethical. That such a young
         | company can become rotten so quickly can only be due to
         | leadership instruction or leadership failure.
        
           | jasonm23 wrote:
           | Clearly by design.
           | 
           | The most dishonest leadership.
        
           | smt88 wrote:
           | Look at Sam Altman's career and tweets. He's a clown at best,
           | and at worst he's a manipulative crook who only cares about
           | his own enrichment and uses pro-social ideas to give himself
           | a veneer of trustworthiness.
        
             | orlandrescu wrote:
             | Sounds awfully similar to the other South African
             | emerald-mine-heir tech mogul.
        
               | kmeisthax wrote:
               | I'm starting to think the relatives of South African
               | emerald mine owners might not be the best people to
               | trust...
        
               | fennecbutt wrote:
               | Lmao no point in worrying about AI spreading FUD when
               | people do it all by themselves.
               | 
               | You know what AI is actually gonna be useful for? AR
               | source attachments to everything that comes out of our
               | monkey mouths, or a huge floating [no source] over
               | someone's head.
               | 
               | Realtime factual accuracy checking pls I need it.
        
               | docmars wrote:
               | If it comes packaged with the constant barrage of
               | ridicule and abuse from others for daring to be
               | slightly wrong about something, nobody will want to
               | talk at all.
        
               | postmodest wrote:
               | Who designs the training set for your putative "fact
               | checker" AI?
        
               | pawelmurias wrote:
               | You are not responsible for the sins of your father
               | regardless of how seriously fucked in the head he is.
        
               | Loughla wrote:
               | No but there is the old nature versus nurture debate. If
               | you're raised in a home with a parent who has zero qualms
               | about exploiting human suffering for profit, that's
               | probably going to have an impact, right?
        
               | johnisgood wrote:
               | What are you implying here? The answer to the nature vs.
               | nurture debate is "both", see "epigenetics" for more.
               | 
               | When considering the influence of a parent with morally
               | reprehensible behavior, it's important to recognize that
               | the environment a child grows up in can indeed have a
               | profound impact on their development. Children raised in
               | households where unethical behaviors are normalized may
               | adopt some of these behaviors themselves, either through
               | direct imitation or as a response to the emotional and
               | psychological environment. However, it is equally
               | possible for individuals to reject these influences.
               | 
               | Furthermore, while acknowledging the potential impact of
               | a negative upbringing, it is critical to avoid
               | deterministic assumptions about individuals. People are
               | not simply products of their environment; they possess
               | agency and the capacity for change, and we need to
               | realize that not all individuals perceive and respond to
               | environmental stimuli in the same way. Personal
               | experiences, cognitive processes, and emotional responses
               | can lead to different interpretations and reactions to
               | similar environmental conditions. Therefore, while the
               | influence of a parent's actions cannot be dismissed, _it
               | is neither fair nor accurate to presume that an
               | individual will inevitably follow in their footsteps_.
               | 
               | As for epigenetics: it highlights how environmental
               | factors can influence gene expression, adding a layer of
               | complexity to how we understand the interaction between
               | genes and environment. While the environment can modify
               | gene expression, individuals may exhibit different levels
               | of susceptibility or resistance to these changes based on
               | genetic variability.
        
               | gopher_space wrote:
               | > However, it is equally possible for individuals to
               | reject these influences.
               | 
               | The crux of your thesis is a legal point of view, not a
               | scientific one. It's a relic from when Natural Philosophy
               | was new and hip, and fundamentally obviated by leaded
               | gasoline. Discussing free will in a biological context is
               | meaningless because the concept is defined by social
               | coercion. It's the opposite of slavery.
        
               | programjames wrote:
               | From a game theory perspective, it can make sense to
               | punish future generations to prevent someone from
               | YOLO'ing at the end of their life. But that only works if
               | they actually care about their children, so perhaps it
               | should be, "you are less responsible for the sins of your
               | father the more seriously fucked in the head he is."
        
               | treme wrote:
               | Please. Elon's track record of taking Tesla from the
               | concept-car stage to current mass-production levels and
               | building SpaceX from scratch is hardly comparable to
               | Altman's track record.
        
               | satvikpendem wrote:
               | Indeed, at least Elon and his teams actually accomplished
               | something worthwhile compared to Altman.
        
               | jajko wrote:
               | But he is a manager, not an engineer, although he sells
               | himself as such. He keeps smart, capable folks around,
               | abuses most of them pretty horribly, and when he
               | intervenes in products it's hit and miss. For example,
               | the latest Tesla Model 3 changes must have been a
               | pretty major fuckup, and there is no way he didn't ack
               | it all.
               | 
               | Plus all the self-driving lies, and more lies well
               | within fraud territory at this point. Not even going
               | into his sociopathic personality, massive childish ego
               | and apparent 'daddy issues', which in men manifest
               | exactly like him. He is not in day-to-day control of
               | SpaceX and it shows.
        
               | treme wrote:
               | "A cynical habit of thought and speech, a readiness to
               | criticize work which the critic himself never tries to
               | perform, an intellectual aloofness which will not accept
               | contact with life's realities--all these are marks, not
               | ... of superiority but of weakness."
        
               | Angostura wrote:
               | As is repeatedly spamming the same pasta
        
               | formerly_proven wrote:
               | You're confusing mommy and daddy issues. Mommy issues is
               | what makes fash control freaks.
        
               | TechnicolorByte wrote:
               | SpaceX didn't start from scratch. Their initial designs
               | were based on NASA designs. Stop perpetuating the "genius
               | engineer" myth around Elon Musk.
        
               | hanspeter wrote:
               | By that logic nothing has started from scratch.
        
               | SirensOfTitan wrote:
               | "If you wish to make an apple pie from scratch You must
               | first invent the universe"
               | 
               | ...no one "started from scratch", the sum of all
               | knowledge is built on prior foundations.
        
               | colibri727 wrote:
               | Altman is riding a new tech wave, and his team has a
               | couple of years' head start. Musk's reusable rockets were
               | conceptualized a long time ago (Tintin's Destination Moon
               | dates back to 1953) and could have become a reality
               | several decades ago.
        
               | treme wrote:
               | You're seriously trying to take away his credit for
               | reusable rockets with "nuh uh, it was in scifi first"?
               | Wow.
               | 
               | "A cynical habit of thought and speech, a readiness to
               | criticize work which the critic himself never tries to
               | perform, an intellectual aloofness which will not accept
               | contact with life's realities--all these are marks, not
               | ... of superiority but of weakness."
        
               | colibri727 wrote:
               | No, in fact I'm praising Musk for his project management
               | abilities and his ability to take risks.
               | 
               | >"nu uh, it was in scifi first?" Wow.
               | 
               | https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X
               | 
               | >NASA had taken on the project grudgingly after having
               | been "shamed" by its very public success under the
               | direction of the SDIO.[citation needed] Its continued
               | success was cause for considerable political in-fighting
               | within NASA due to it competing with their "home grown"
               | Lockheed Martin X-33/VentureStar project. Pete Conrad
               | priced a new DC-X at $50 million, cheap by NASA
               | standards, but NASA decided not to rebuild the craft in
               | light of budget constraints
               | 
               | "Quotation is a serviceable substitute for wit." - Oscar
               | Wilde
        
               | cess11 wrote:
               | What's wrong with weakness? Does it make you feel
               | contempt?
        
               | KyleOneill wrote:
               | I feel like Steve Jobs also fits this category, if we
               | are going to talk about people who aren't really worthy
               | of the genius title and used other people's
               | accomplishments to reach their goals.
               | 
               | We all know it was the engineers who made the iPhone
               | possible.
        
               | KyleOneill wrote:
               | The people downvoting have never read the Isaacson book
               | obviously.
        
               | treme wrote:
               | More like people on this site know and respect Jobs for
               | his talent as a revolutionary product-manager-style CEO
               | who brought us the iPhone and the subsequent mobile era
               | of computing.
        
               | KyleOneill wrote:
               | Jobs was a bully through and through.
        
               | 8372049 wrote:
               | The mobile era of computing would have happened just
               | the same if Jobs had never lived.
        
               | CamperBob2 wrote:
               | To be fair, who else could have gone toe-to-toe with the
               | telecom incumbents? Jobs almost didn't succeed at that.
        
               | 8372049 wrote:
               | Someone far more deserving of the title, Dennis Ritchie,
               | died a week after Jobs' stupidity caught up with him. So
               | much attention to Jobs who didn't really deserve it, and
               | so little to Dennis Ritchie who made such a profound
               | impact on the tech world and society in general.
        
               | thefaux wrote:
               | I think Ritchie's influence while significant is
               | overblown and not entirely positive. I am not a fan of
               | Steve Jobs, who had many reprehensible traits, but I find
               | it ridiculous to dismiss his genius. Frankly, I find
               | Jobs's ability to manipulate people more impressive than
               | Ritchie's ability to manipulate machines.
        
               | 8372049 wrote:
               | > not entirely positive
               | 
               | I don't know if he was responsible, but null-terminated
               | strings have got to be one of the worst mistakes in
               | computer history.
               | 
               | That said, how is the significance of C and Unix
               | "overblown"?
               | 
               | I agree Jobs was brilliant at manipulating people, I
               | don't agree that that should be celebrated.
        
               | hollerith wrote:
               | The main reason C and Unix became widespread is not
               | because they were better than the alternatives, but
               | rather because AT&T distributed them with source code at
               | no cost, and their motivation for doing that was not
               | altruistic, but rather the need to obey a judicial decree
               | or an agreement made at the end of an anti-trust court
               | case under which IBM and AT&T were ordered not to enter
               | each other's markets. I.e., AT&T was prohibited from
               | _selling_ computer hardware and software, so when they
               | accidentally found themselves to be owners of some
               | software that some universities and research labs wanted
               | to use, they gave it away.
               | 
               | C and Unix weren't and aren't bad, but they are
               | overestimated in comments on this site a lot. They
               | weren't masterpieces. The Mac was a masterpiece IMHO.
               | Credit for the Mac goes to Xerox PARC and to Engelbart's
               | lab at Stanford Research Institute, but also to Jobs for
               | recognizing the value of the work and leading the first
               | successful commercial implementation of it.
        
               | ekianjo wrote:
               | SpaceX is still the only company with reusable rockets.
               | NASA only dreams about it and can't even make a regular
               | rocket launch on time.
        
               | lr1970 wrote:
               | And don't forget Starlink, which revolutionized
               | satellite communications.
        
               | kaycebasques wrote:
               | Are you saying that Altman has family that did business
               | in South African emerald mines? I can't find info about
               | this
        
               | kryptogeist wrote:
               | No. Some dude that launches rockets did, though.
        
               | WalterSear wrote:
               | They are referring to Elon Musk.
        
               | pseudalopex wrote:
               | Saying "the other" suggested there were two.
        
               | huijzer wrote:
               | I disagree. If you watch some long form interviews with
               | Elon, you'll see that he cares a lot about the truth. Sam
               | doesn't give me that impression.
        
               | sumedh wrote:
               | > you'll see that he cares a lot about the truth.
               | 
               | Didn't he call the cave diver a pedo, and claim the guy
               | who attacked Pelosi's husband was in a gay relationship
               | with him?
        
               | spinach wrote:
               | He doesn't seem to have much of a filter because of his
               | Asperger's, but I think he genuinely believed those
               | things. And they are more on the level of calling
               | people names on the playground anyway. In the grand
               | scheme of things, those are pretty shallow "lies".
        
               | mynameisvlad wrote:
               | Oh so it's ok to lie and call people a pedophile (which
               | is _far_ beyond playground name-calling; from a famous
               | person a statement like that actually carries a lot of
               | weight) if you genuinely believe it and have Asperger's?
               | 
               | Those might explain his behavior, but it does not excuse
               | it.
        
               | FireBeyond wrote:
               | Musk fans will contort into pretzels. He genuinely
               | believed it. Just an insult. Just trading shots because
               | the guy called his idea stupid.
               | 
               | It's the RDF.
        
               | smt88 wrote:
               | I have multiple relatives on the spectrum. None of them
               | baselessly accuse strangers of being pedophiles.
               | 
               | It's not Musk's lack of filter that makes him unhinged
               | and dangerous. It's that he's deeply stupid, insecure,
               | racist, enamored of conspiracy theories, and powerful.
        
               | smegger001 wrote:
               | I figure it's the chronic drug abuse and the constant
               | affirmation he receives from his internet fanboys and
               | enabler yes-men on his board, who are financially
               | dependent on him. He never receives push-back from
               | anyone, so he gets more and more divorced from reality.
        
               | troupo wrote:
               | He's 52. And running multiple companies. Aspergers is not
               | a justification for his shitty behavior (and blaming this
               | behavior on Aspergers harms perception of people with
               | Aspergers)
        
               | malfist wrote:
               | > If you watch some long form interviews with Elon,
               | you'll see that he cares a lot about the truth.
               | 
               | You mean the guy who's infamous for lying? The guy who
               | claimed his car was fully self-driving more than a
               | decade before it was? The guy who tweeted "funding
               | secured" and faced fraud charges?
        
               | MVissers wrote:
               | Tbh, he wasn't convicted as far as I know.
               | 
               | But yes, he's overly optimistic with timelines. He says
               | so himself.
        
               | kibwen wrote:
               | The first time someone is "overly optimistic with a
               | timeline", you should forgive them.
               | 
               | The tenth time, you should have the good sense to realize
               | that they're full of shit and either a habitual liar or
               | utterly incompetent.
        
               | sashank_1509 wrote:
               | >Man who's the second richest, led companies that made
               | electric cars and reusable rockets
               | 
               | >> Random HN commentator : utterly incompetent
               | 
               | I want what you're smoking
        
               | mynameisvlad wrote:
               | He may be the second richest but he _still_ doesn't seem
               | competent enough to provide remotely reasonable
               | estimates.
               | 
               | That, or he's just a straight up liar who knows the
               | things he says are never going to happen.
               | 
               | Which would you rather it be?
        
               | troupo wrote:
               | Yes, he is largely incompetent but with a great nose for
               | picking up good companies:
               | https://news.ycombinator.com/item?id=40066514
        
               | schmidtleonard wrote:
               | Realists are incapable of pushing frontiers.
               | 
               | If you are doing something that has been done before,
               | hire a realist. Your project will ship on time and within
               | budget. If you are doing something that hasn't been done
               | before, you need an optimist. Partly because the realists
               | run for the hills -- they know the odds and the odds are
               | bad -- but also because their hedging behavior will turn
               | your small chance of success into zero chance of success.
               | On these projects, optimism doesn't guarantee success,
               | but pessimism/realism does guarantee failure.
               | 
               | So no, I am not scandalized to find that the world's
               | biggest innovator (I hate his politics, but this is
               | simply the truth) is systematically biased towards
               | optimism. It's not surprising, it is inevitable.
        
               | lesostep wrote:
               | The Wright Brothers took a risk and built the first
               | planes, but didn't have to lie that their planes had
               | already left the ground before they did. They didn't
               | claim "it will fly a year from now"; they just built it
               | over and over until it flew.
               | 
               | They were optimistic, and yet they found a way to be
               | optimistic without claiming anything untruthful.
               | 
               | Clement Ader, on the other hand, claimed that his
               | invention flew, and was ridiculed when he couldn't
               | prove it.
               | 
               | One look at their work and it's clear who influenced
               | modern planes, and who didn't.
        
               | schmidtleonard wrote:
               | The Wright Brothers are infamous for failing to
               | industrialize their invention -- something that
               | notoriously requires investors and hype. Perhaps they
               | wouldn't have squandered their lead if they had been a
               | bit more public with their hopes and dreams.
        
               | thefaux wrote:
               | There is a difference between saying "we believe we will
               | achieve X in the next year" and "we will achieve X in the
               | next year." Each framing has its advantages and
               | disadvantages, but it's hard to accuse the person who
               | makes the former statement of lying.
        
               | cma wrote:
               | https://elonmusk.today/
        
               | root_axis wrote:
               | I'm no fan of Sam Altman, but between the two, Elon lies
               | much more often. He's lied about FSD for years, lied
               | about not selling his Tesla stock, lied about
               | "robotaxies" for years, lied about the roadster for
               | years, lied about "funding secured" for Tesla, lied about
               | his twitter free speech ethos, spreads random lies about
               | people he doesn't like, and so much more. The guy is a
               | compulsive liar.
        
               | huijzer wrote:
               | You could also come up with many examples where he
               | worked hard against repeating others' lies, since he
               | often reasons from first principles.
               | 
               | But yes, you're right. The difference is probably
               | whether one believes Elon is beneficial for the world
               | or not.
        
               | root_axis wrote:
               | > _The difference is probably whether one "beliefs" Elon
               | is beneficial for the world or not._
               | 
               | I don't think he matters that much, good or bad. Yes, I
               | know he's a billionaire, but in practical terms he hasn't
               | done much, especially compared to the other tech moguls
               | like jobs, bezos, gates, zuck, larry/sergey etc. All
               | those
               | others oversaw companies that completely revolutionized
               | life for everyone on the planet. By comparison, Tesla
               | makes really fun luxury cars that most people can't
               | afford, and all his other companies are vaporware besides
               | spacex which has almost no practical impact on people's
               | lives. You could argue starlink has some impact, but for
               | the vast majority of the population that can afford
               | starlink, terrestrial broadband fills their need.
        
               | xaPe wrote:
               | It didn't take long to drag Elon into this thread. The
               | bitterness and cynicism is unreal.
        
               | xyzzyz wrote:
             | You are literally repeating false smears about Elon Musk.
             | No emerald mine has ever been owned by anyone in Elon's
             | family, and Elon certainly didn't inherit any of it. I
             | find it very ironic that you are doing this while
             | accusing someone of being a manipulative crook.
        
             | csomar wrote:
             | Social engineering has been a thing well before computers
             | and the internet...
        
             | whoistraitor wrote:
             | Indeed. I've heard first hand accounts that would make it
             | impossible for me to trust him. He's very good at the game.
             | But I'd not want to touch him with a barge pole.
        
               | nar001 wrote:
               | Any stories or events you can talk about? It sounds
               | interesting
        
               | benreesman wrote:
               | The New Yorker piece is pretty terrifying, and manages
               | to be so while bending over backwards to present both
               | sides, if not maybe even suck up to SV a bit. Certainly
               | no one forced Altman to say on the record that Ice Nine
               | in the water glass was what he had planned for anyone
               | who crossed him, and no one forced pg to say, likewise
               | on the record, that "Sam's real talent is becoming
               | powerful" or something to that effect.
               | 
               | It pretty much goes downhill from there.
        
               | aleph_minus_one wrote:
               | > The New Yorker piece is pretty terrifying and manages
               | to be so while bending over backwards to present both
               | sides of not maybe even suck up to SV a bit. Certainly no
               | one forced Altman to say on the record that Ice Nine in
               | the water glass was what he had planned for anyone who
               | crossed him, and no one forced pg to say, likewise on the
               | record that "Sam's real talent is becoming powerful" or
               | something to that effect.
               | 
               | Article:
               | https://www.newyorker.com/magazine/2016/10/10/sam-
               | altmans-ma...
        
               | schmidtleonard wrote:
               | Holy shit I thought he was just good at networking, but
               | it sounds like we have a psychopath in charge of the AI
               | revolution. Fantastic.
        
               | sadboi31 wrote:
               | The government is behind it all. Here are a bunch of
               | graduate-related talks that aren't about CS or AI, but
               | instead about math and social control: https://videos.ahp-
               | numerique.fr/w/p/2UzpXdhJbGRSJtStzVWon9?p...
        
               | dmoy wrote:
               | For anyone else like me who hasn't read Kurt Vonnegut,
               | but does know about different ice states (e.g. Ice IX):
               | 
               | "Ice Nine" is a fictional assassination device that makes
               | you turn into ice after consuming ice (?)
               | https://en.m.wikipedia.org/wiki/Ice-nine
               | 
               | "Ice IX" (ice nine) is Ice III at a low enough
               | temperature and high enough pressure to be proton-ordered
               | https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases
               | 
               | So here, Sam Altman is making a death threat.
        
               | spudlyo wrote:
               | It's more than just a death threat: the person killed
               | in such a manner would surely generate a human-sized
               | pile of Ice-9, which would pose a much greater threat
               | to humanity than any AGI.
               | 
               | If we're seriously entertaining this off-handed remark
               | as a measure of Altman's true character, it means not
               | only would he be willing to murder an adversary, but
               | he'd be willing to risk all of humanity to do it.
               | 
               | What I take away from this remark is that Altman is a
               | nerd, and I look forward to seeing a shaky cell-phone
               | video of him reciting one of the calypsos of Bokonon
               | while dressed as a cultist at a SciFi convention.
        
               | dmoy wrote:
               | > the person killed in such a manner would surely
               | generate a human-sized pile of Ice 9, which would pose a
               | much greater threat to humanity than any AGI.
               | 
               | Oh okay, I didn't really grok that implication from my
               | brief scan of the wiki page. Didn't realize it was a
               | cascading all-water-into-Ice-Nine thing.
        
               | pollyturples wrote:
               | Just to clarify, in the book it's basically 'a form of
               | ice that stays ice even when warm'. It was described as
               | an abandoned military project to harden mud for
               | infantrymen to cross. Just like regular ice crystals,
               | the ice-nine crystal pattern 'spreads' across water,
               | but without the need for it to be chilled, e.g. body-
               | temperature water freezes; it becomes a 'Midas touch'
               | problem for anyone dealing with it.
        
               | racional wrote:
               | "Sam is extremely good at becoming powerful" was the
               | quote, which has a distinctly different ring to it. Not
               | that this diminishes from the overall creep factor.
        
               | lr1970 wrote:
               | > Any stories or events you can talk about? It sounds
               | interesting
               | 
               | Paul Graham fired Sam Altman from YC on the spot for
               | "loss of trust". Full details unknown.
        
               | bookaway wrote:
               | The story of the "YC mafia" takeover of Conde Nast era
               | reddit as summarized by ex-ceo Yishan who resigned after
               | tiring of Altman's constant Machiavelli machinations is
               | also hilarious and foreshadowing of future events[0]. I'm
               | sure by the time Altman resigned from the Reddit board
               | OpenAI had long incorporated the entire corpus into
               | ChatGPT already.
               | 
               | At the moment all the engineers at OpenAI, including gdb,
               | who currently have their credibility in tact are nerd-
               | washing Altman's tarnished reputation by staying there. I
               | mentioned this in a comment elsewhere but Peter Hintjens'
               | (ZeroMQ, RIP) book called the "Psychopath Code"[1] is
               | rather on point in this context. He notes that
               | psychopaths are attracted to project groups that have
               | assets and no defenses, i.e. non-profits:
               | 
               |  _If a group has assets and no defenses, it is inevitable
               | [a psychopath] will invade the group. There is no "if"
               | here. Indeed, you may see several psychopaths striving
               | for advantage...[the psychopath] may be a founder, yet
               | that is rare. If he is a founder, someone else did the
               | hard work. Look for burned-out skeletons in the
               | closet...He may come with grand stories, yet only by his
               | own word. He claims authority from his connections to
               | important people. He spends his time in the group
               | manipulating people against each other. Or, he is absent
               | on important business...His dominance is not earned, yet
               | it is tangible...He breaks the social conventions of the
               | group. Social humans feel fear and anxiety when they do
               | this. This is a dominance mask._
               | 
                | A group of nerds who want to get shit done and work on
                | important problems, who are primed to be optimistic and
                | take what people say at face value, and who don't want
                | to waste time on "people problems" is susceptible to
                | these types of characters taking over.
               | 
               | [0] https://old.reddit.com/r/AskReddit/comments/3cs78i/wh
               | ats_the...
               | 
               | [1]https://hintjens.gitbooks.io/psychopathcode/content/ch
               | apter4...
        
             | hackernewds wrote:
             | the name OpenAI itself reminds me every day of this.
        
               | genevra wrote:
                | I knew their vision of open-source AI wouldn't last,
                | but I was surprised by how fast it went.
        
               | baq wrote:
               | That vision, if it was ever there, died before ChatGPT
               | was released. It was just a hiring scheme to attract
               | researchers.
               | 
               | pg calls sama 'naughty'. I call him 'dangerous'.
        
               | olalonde wrote:
               | I'm still finding it difficult to understand how their
               | move away from the non-profit mission was legal.
               | Initially, you assert that you are a mission-driven non-
               | profit, a claim that attracts talent, capital, press,
               | partners, and users. Then, you make a complete turnaround
               | and transform into a for-profit enterprise. Why this
               | isn't considered fraud is beyond me.
        
               | smt88 wrote:
               | My understanding is that there were two corporate
               | entities, one of which was always for-profit.
        
               | w0m wrote:
                | It was impractical from the start; they had to pivot
                | before they were able to get a proper LLM out (before
                | ~anyone had heard of them).
        
               | deadbabe wrote:
               | It's "Open" as in "open Pandora's box", not "open
               | source". Always has been.
        
             | raverbashing wrote:
              | The startup world (like the artistic world, the sports
              | world, etc.) values healthy transgression of the rules.
              | 
              | But the line between healthy and unlawful transgression
              | can be thin.
        
               | WalterSear wrote:
               | The startup world values transgression of the rules.
        
             | andrepd wrote:
             | Many easily fooled rubes believe that veneer, so I guess
             | it's working for him.
        
             | comboy wrote:
              | I'm surprised at such a mean comment and at the many
              | follow-ups agreeing with it. I don't know Sam personally;
              | I've only heard him here and there online since before
              | the OpenAI days, and all I got was a good impression. He
              | seems smart and pretty humble. Setting aside the OpenAI
              | drama, which I don't know enough about to have an opinion
              | on, he also seems to talk sense.
              | 
              | Since so many people have taken the time to put him down
              | here, can anybody offer me some explanation? Preferably
              | not just about how closed OpenAI is, but specifically
              | about Sam. He is in a pretty powerful position and maybe
              | I'm missing some info.
        
               | FartyMcFarter wrote:
               | People who have worked with him have publicly called him
               | a manipulative liar:
               | 
               | https://www.reddit.com/r/OpenAI/comments/1804u5y/former_o
               | pen...
        
             | tinyhouse wrote:
             | Well, more than 90% of OpenAI employees backed him up when
             | the board fired him. Maybe he's not the clown you claim he
             | is.
        
               | iinnPP wrote:
               | People are self-motivated more often than not.
        
               | llamaimperative wrote:
               | Or they didn't want the company, their job, and all of
               | their equity to evaporate
        
               | tinyhouse wrote:
                | Well, if he's a clown then his departure should cause
                | the opposite, no? And you're right, more than 90% of
                | them said we don't want the non-profit BS and openness.
                | We want a unicorn tech company that can make us rich.
                | Good for them.
                | 
                | Disclaimer: I'm Sam's best friend from kindergarten.
                | Just joking, never met the guy and have no interest in
                | OpenAI beyond being a happy customer (one who will
                | switch to a competitor in a heartbeat if they give me a
                | good reason to).
        
               | llamaimperative wrote:
               | > Well, if he's a clown then his departure should cause
               | the opposite, no?
               | 
               | Nope, not even close to necessarily true.
               | 
               | > more than 90% of them said we don't want the non-profit
               | BS and openness. We want a unicorn tech company that can
               | make us rich. Good for them.
               | 
               | Sure, good for them! Dissolve the company and its
               | charter, give the money back to the investors who
               | invested under that charter, and go raise money for a
               | commercial venture.
        
             | skeeter2020 wrote:
             | I fear your characterization diminishes the real risk: he's
             | incredibly well resourced, well-connected and intelligent
             | while being utterly divorced from the reality of the
             | majority he threatens. People like him and Peter Thiel are
             | not simple crooks or idiots - they truly believe in their
             | convictions. This is far scarier.
        
           | ben_w wrote:
            | We already know there was a leadership failure, from the
            | mere existence of last year's board weirdness; if there has
            | been any clarification of that since, I've missed it amid
            | all the related popcorn gossiping.
            | 
            | Everyone _including the board's own chosen replacements for
            | Altman_ siding with Altman seems to me incompatible with
            | his current leadership being the root cause of the current
            | discontent... so I'm blaming Microsoft, who were the
            | moustache-twirling villains when I was a teen.
           | 
           | Of course, thanks to the NDAs hiding information, I may just
           | be wildly wrong.
        
             | Sharlin wrote:
              | Everyone? What about the board that fired him, and all of
              | those who've left the company? It seems to me more like
              | the people leaving are those rightly concerned about the
              | direction things are going, and the people staying are
              | those who think getting rich outweighs ethical (and
              | possibly existential) concerns. Plus maybe those who
              | still believe they can effect positive change within the
              | company. As for the letter, it's difficult to say how
              | many of the undersigned signed simply because of social
              | pressure.
        
               | ben_w wrote:
               | > Everyone? What about the board that fired him,
               | 
               | I meant of the employees, obviously not the board.
               | 
               | Also excluded: all the people who never worked there who
               | think Altman is weird, Elon Musk who is suing them (and
               | probably the New York Times on similar grounds), and the
               | protestors who dropped leaflets on one of his public
               | appearances.
               | 
               | > and all of those who've left the company?
               | 
                | That happened after those events; at the time, it was
                | so close to literally every employee signing the letter
                | saying "bring Sam back or we walk" that the rest can be
                | assumed to have been off sick that day, even given the
                | US reputation for very limited holiday allowances and
                | for making people use those holidays as sick leave.
               | 
               | > It seems to me more like those people are leaving who
               | are rightly concerned about the direction things are
               | going, and those people are staying who think that
               | getting rich outweighs ethical - and possibly existential
               | - concerns. Plus maybe those who still believe they can
               | effect a positive change within the company.
               | 
               | Obviously so, I'm only asserting that this doesn't appear
               | to be due to Altman, despite him being CEO.
               | 
               | ("Appear to be" is of course doing some heavy lifting
               | here: unless someone wants to literally surveil the
               | company and publish the results, and expect that to be
               | illegal because otherwise it makes NDAs pointless, we're
               | all in the dark).
        
               | shkkmo wrote:
                | It's hard to gauge exactly how much credence to put in
                | that letter, due to the gag contracts.
                | 
                | How much of it was support for Altman, how much was
                | opposition to the board's extremely poorly explained
                | decisions, and how much was pure self-interest over
                | stock options?
               | 
               | I think when a company chooses secrecy, they abandon much
               | of the benefit of the doubt. I don't think there is any
               | basis for absolving Altman.
        
           | benreesman wrote:
           | To borrow the catchphrase of one of my favorite hackers ever:
           | "correct".
        
         | phkahler wrote:
         | Yeah you don't have to sign anything to quit. Ever. No new
         | terms at that time, sorry.
        
           | ska wrote:
           | There is usually a carrot along with the stick.
        
         | willis936 wrote:
         | They earned wages and paid taxes on them. Anything on top is
         | just the price they're willing to accept in exchange for their
         | principles.
        
           | throw101010 wrote:
            | How do you figure they should pay an additional price
            | (their principles/their silence) for equity they supposedly
            | earned during their employment (assuming this wasn't
            | planned when they were hired, since they're made to sign
            | new terms at the time of their departure)?
        
         | zeroonetwothree wrote:
         | I assume it's agreed to at time of employment? Otherwise you're
         | right that it doesn't make sense
        
           | throw101010 wrote:
            | Why assume that, when both this thread and the article say
            | they had to sign something at the time of their departure
            | from the company?
        
         | riehwvfbk wrote:
         | It's also really weird equity: you don't get an ownership stake
         | in the company but rather profit-sharing units. If OpenAI ever
         | becomes profitable (color me skeptical), you can indeed get
         | rich as an employee. The other trigger is "achieving AGI", as
         | defined by sama (presumably). And while you wait for these
         | dubious events to occur you work insane hours for a mediocre
         | cash salary.
        
         | blackeyeblitzar wrote:
          | Unfortunately this is how most startup equity agreements are
          | structured. They include terms that let the company cancel
          | options that haven't been exercised for [various reasons].
          | Those reasons are very open-ended, and maybe they could be
          | challenged in court, but how can a low-level employee afford
          | to do that?
        
           | jkaplowitz wrote:
           | I don't know of any other such agreements that allow vested
           | equity to be revoked, as the other person said. That doesn't
           | sound very vested to me. But we already knew there are a lot
           | of weird aspects to OpenAI's semi-nonprofit/semi-for-profit
           | approximation of equity.
        
             | blackeyeblitzar wrote:
              | As far as I know it's part of the stock plan at most
              | startups. There's usually a standard clause that covers
              | this, with phrasing that sounds reasonable (like
              | triggering if company policy is violated or is found to
              | have been violated in the past). But it gives the company
              | a lot of power in deciding whether that's the case.
        
         | nurple wrote:
         | The thing is that this is a private company, so there is no
         | public market to provide liquidity. The company can make itself
         | the sole source of liquidity, at its option, by placing sell
         | restrictions on the grants. Toe the line, or you will find you
         | never get to participate in a liquidity event.
         | 
         | There's more info on how SpaceX uses a scheme like this[0] to
         | force compliance, and seeing as Musk had a hand in creating
         | both orgs, they're bound to be similar.
         | 
         | [0] https://techcrunch.com/2024/03/15/spacex-employee-stock-
         | sale...
        
           | tdumitrescu wrote:
           | Whoa. That article says that SpaceX does tender offers twice
           | a year?! That's so much better than 99% of private companies,
           | it makes it almost as liquid for employees as a public
           | company.
        
         | temporarely wrote:
          | I think the exit agreement (if any) should be included in,
          | and agreed to as part of, the initial employment contract.
        
         | theyinwhy wrote:
         | I guess there are indeed countries where this is illegal. Funny
         | that it seems to be legal in the land of the free (speech).
        
         | glitchc wrote:
          | I'm guessing unvested equity is being treated separately
          | from other forms of compensation. Normally, leaving a company
          | costs the individual all rights to unvested options. Here the
          | consideration is that the options are retained in exchange
          | for silence.
        
         | e40 wrote:
         | Perhaps they are stock options and leaving without signing
         | would make them evaporate, but signing turns them back into
         | long-lasting options?
        
         | m3kw9 wrote:
          | This would be stated in the initial hiring agreement, and the
          | employee would have to agree to sign such a form upon
          | departure.
        
         | bobbob1921 wrote:
          | I would guess it's part of their bonus structure, and that
          | they agreed to the terms of any exit/departure when they
          | signed their initial contract.
          | 
          | I'm not saying it's right or that I agree with it, however.
        
       | yumraj wrote:
        | Compared to what seemed like their original charter, with the
        | non-profit structure and all, it now seems like a rather
        | poisonous place.
        | 
        | They will have many successes in the short run, but their
        | long-run future suddenly looks a little murky.
        
         | 0xDEAFBEAD wrote:
         | Similar points made here, if anyone is interested in signing:
         | https://www.openailetter.org/
        
         | eternauta3k wrote:
         | It could work like academia or finance: poisonous environment
         | (it is said), but ambitious enough people still go in to try
         | their luck.
        
           | throwaway2037 wrote:
           | "finance": A bit of a broad brush, don't you think? Is
           | working at a Landsbank or Sparkasse in Germany really so
           | "poisonous"?
        
             | eternauta3k wrote:
             | Yes, of course, narrow that down to the crazy wolf-of-wall-
             | street subset.
        
         | baq wrote:
          | They extracted a lot of value from researchers during their
          | 'open' days, but that's depleted now, so of course they move
          | on to the next source of value. sama is going AGI-or-bust
          | with the very rational position of 'if somebody is going to
          | have AGI, I'd rather it was me'. Except I don't like how he's
          | going about it one bit; it has a very dystopian feel.
        
       | atomicnumber3 wrote:
       | I have some experience with rich people who think they can just
       | put whatever they want in contracts and then stare at you until
       | you sign it because you are physically dependent on eating food
       | every day.
       | 
        | Turns out they're right: they can put whatever they want in a
        | contract. And they are also correct that their wage slaves
        | will, 99.99% of the time, sign whatever paper is pushed in
        | front of them while someone says "as a condition of your
        | continued employment, [...]".
       | 
        | But it also turns out that just because you signed something
        | doesn't mean that's it. My friends and I (all young
        | twenty-something software engineers much more familiar with
        | transaction isolation semantics than with contract law)
        | consulted an attorney.
       | 
       | The TLDR is that:
       | 
       | - nothing in contract law is in perpetuity
       | 
       | - there MUST be consideration for each side (where
       | "consideration" means getting something. something real. like
       | USD. "continued employment" is not consideration.)
       | 
       | - if nothing is perpetual, then how long can it last supposing
       | both sides do get ongoing consideration from it? the answer is,
       | the judge will figure it out.
       | 
       | - and when it comes to employers and employees, the employee had
       | damn well better be getting a good deal out of it, especially if
       | you are trying to prevent the employee (or ex-employee) from
       | working.
       | 
        | A common pattern emerged: our employer would put something
        | perpetual in the contract and offer no consideration. Our
        | attorney would tell us this wasn't even a valid contract and
        | not to worry about it. Or the employer would offer an employee
        | some nominal amount of USD in severance and put something
        | perpetual into the contract. Our attorney would tell us the
        | judge would likely use the "blue pencil rule" to add in "for a
        | period of one year", or that it would be prorated based on the
        | amount of money they were given relative to their former
        | salary.
       | 
       | (I don't work there anymore, naturally).
        
         | golergka wrote:
         | > stare at you until you sign it because you are physically
         | dependent on eating food every day
         | 
          | Even the lowest-level fast food workers can choose a
          | different employer. An engineer working at OpenAI certainly
          | has a lot of opportunities to choose from. Even when I had
          | only three years in the industry, mid-level at best, I asked
          | to change the contract I was presented with because the
          | non-compete was too restrictive, and they did it. The caliber
          | of talent that OpenAI is attracting (or hopes to attract) can
          | certainly do this too.
        
           | fragmede wrote:
           | > Even lowest level fast food workers can choose a different
           | employer.
           | 
            | Only thanks to a recent FTC rule banning non-competes. In
            | the most egregious cases, bartenders and servers were
            | prohibited from finding another job in the same industry
            | for two years.
        
             | golergka wrote:
              | You're talking about what happens after a person signs a
              | non-compete, whereas my point is about what happens
              | before they do (or don't) sign it.
        
           | atomicnumber3 wrote:
           | I am typically not willing to bet I can get back under health
           | insurance for my family within the next 0-4 weeks. And paying
           | for COBRA on a family plan is basically like going from
           | earning $X/mo to drawing $-X/mo.
        
             | insane_dreamer wrote:
             | The perversely capitalistic healthcare system in the US is
             | perhaps the number one reason why US employers have so much
             | more power over their employees than their European
             | counterparts.
        
         | sangnoir wrote:
         | > if nothing is perpetual, then how long can it last supposing
         | both sides do get ongoing consideration from it? the answer is,
         | the judge will figure it out.
         | 
         | Isn't that the reason more competent lawyers put in the royal
         | lives[1] clause? It specifies the contract is valid until 21
         | years after the death of the last currently-living royal
         | descendant; I believe the youngest one is currently 1 year old,
         | and they all have good healthcare, so it's almost certainly
         | will be beyond the lifetime of any currently-employed persons.
         | 
         | 1. https://en.wikipedia.org/wiki/Royal_lives_clause
        
           | spoiler wrote:
            | I know little about law, but isn't this _completely_
            | ludicrous? Assuming you know a bit more (or someone else
            | here does), I have a few questions:
            | 
            | Would any non-corrupt judge consider this done in bad
            | faith?
            | 
            | How is this different if we use a great ancient sea turtle
            | (or some other long-lived organism) instead of the current
            | royal family baby? Like, I guess my point is: anything
            | that would likely outlive the employee, basically?
        
             | amenhotep wrote:
              | It's a standard legal device to accommodate a rule that
              | you can't write a perpetual contract: it has to have a
              | term delimited by the life of someone alive plus some
              | limited period.
              | 
              | A case where it obviously makes sense is something like a
              | covenant between two companies; whose life would be
              | relevant there, if both parties want the contract to last
              | a long time and have to pick someone? The CEOs'?
              | Employees'? Shareholders'? You could easily have a
              | situation where the company gets sold and they all leave,
              | but the contract should still be relevant, and now it
              | depends on the lives of people who are totally
              | unconnected to the parties. That just makes things
              | difficult. Using a monarch and his currently living
              | descendants is easy.
             | 
              | I'm not sure how relevant it is in a more
              | employer-employee context. But it's a formalism to create
              | a very long contract that's easy to track, not a secret
              | trick to create a longer contract than you're normally
              | allowed to. An employer asking an employee to agree to it
              | would have no qualms asking instead for it to last the
              | employee's life, and if the employee is willing to sign
              | one, the other doesn't seem that much more exploitative.
        
         | cynicalsecurity wrote:
          | Why would anyone want to work at such a horrible company?
        
           | baq wrote:
           | Money
        
         | mindslight wrote:
          | This is all basically true, but the problem is that
          | retaining an attorney to confidently represent you in such a
          | negotiation is a proposition with $10k table stakes
          | (probably $15k+ these days with Trumpflation), and much more
          | if the company sticks to its guns and doesn't settle (which
          | is much more likely when the company is holding the cards
          | and you have to go on the offensive). The cost isn't
          | necessarily outright prohibitive in the context of
          | surveillance-industry compensation, but it is still a chunk
          | of change and likely to give most people pause when the
          | alternative is to just go with the flow and move on.
         | 
         | Personally I'd say there needs to be a general restriction
         | against including blatantly unenforceable terms in a contract
         | document, especially unilateral "terms". The drafter is
         | essentially pushing incorrect legal advice.
        
       | zombiwoof wrote:
        | Sam and Mira: greedy as fuck, since they are con artists and
        | neither could get a job at that level anywhere legitimate.
        | 
        | Now it's a money grab.
        | 
        | Sad, because some amazing tech and people are now being
        | corrupted into a toxic culture that didn't have to be that
        | way.
        
         | romanovcode wrote:
          | > Sam and Mira: greedy as fuck, since they are con artists
          | and neither could get a job at that level anywhere
          | legitimate.
          | 
          | Hey hey hey! Sam founded the 4th most popular social
          | networking site in 2005, called Loopt. Don't you forget
          | that! (After that he joined YC and has founded nothing
          | since.)
        
           | null0pointer wrote:
           | He's spent all those years conducting field research for his
           | stealth-mode social engineering startup.
        
       | krick wrote:
        | I'm well aware of being ignorant about US law, and it isn't
        | news to me that it encompasses a lot of ridiculous stuff, but
        | it still somehow amazes me that a "lifetime no-criticism
        | contract" is possible.
        | 
        | It's quite natural that a co-founder forced out of the company
        | wouldn't exactly be willing to forfeit his equity. So, what,
        | now he can't... talk? That has some Mexican cartel vibes.
        
       | dbuser99 wrote:
       | Man. No wonder openai is nothing without its people
        
       | alexpetralia wrote:
       | If the original agreement offered equity that vests, then
       | suddenly another future agreement can potentially revoke that
       | vested equity? It makes no sense unless somehow additional
       | conditions were attached to the vested equity in the original
       | agreement.
        
         | riehwvfbk wrote:
          | And almost all equity agreements do exactly that: they give
          | the company a right of repurchase. If you've ever signed one,
          | go re-read it. You'll likely see that clause right there in
          | black and white.
        
           | ipaddr wrote:
            | For companies not listed on a stock exchange, the options
            | are then worthless.
            | 
            | And these were profit-sharing units, not options.
        
           | umanwizard wrote:
           | They give the company the right to repurchase unvested (but
           | exercised) shares, not vested options. At least the ones I've
           | signed.
        
       | RomanPushkin wrote:
        | They don't talk publicly, but they're almost always OK with
        | talking if you're friends with them. I have two ex-OpenAI
        | friends, and there is a lot of shit going on in there. Of
        | course, I won't reveal their identities, even in court. And
        | they will deny they said anything to me. But the info, if
        | needed, might get leaked through trusted friends. And nobody
        | can do anything about that.
        
         | benreesman wrote:
         | I've worked (for years) with easily a dozen people who either
         | are there or spent meaningful time there.
         | 
          | I also work hard not to print gossip and hearsay (I try not
          | to mention so much as a first name; I think I might have
          | slipped once or twice on that, though never in connection
          | with an accusation of wrongdoing). There's more than enough
          | credible journalism to paint a picture: any person whose
          | bias has not utterly robbed them of objectivity (I have my
          | own, but it's a philosophical/ethical/political agenda, not
          | a grudge over being snubbed for a job or something) can
          | acknowledge that "this looks really bad, and worse all the
          | time" on the basis of purely public primary sources and
          | credible journalism.
         | 
         | I think some of the inside baseball I try very hard not to put
         | in writing might be what cranks it up to "people are doing
         | time".
         | 
          | I've caught more than a little "less than a great time" over
          | being a vocal critic, but I'm curious why, having gone pretty
          | far down the road of saying something is rotten, you'd
          | declare a willingness to defy a grand jury or a judge.
          | 
          | I've never been in court, let alone held in contempt, but I
          | gather that openly defying a judge earns you fairly hard
          | time.
         | 
         | I have friends I'd go to jail for, but not very many and none
         | who work at OpenAI.
        
       | danielmarkbruce wrote:
       | This seems like a nonsense article.
       | 
        | As for 'invalid because no consideration': there is
        | practically zero probability that OpenAI's lawyers are dumb
        | enough not to give any consideration. There is a very large
        | probability that this reporter misunderstood the contract.
        | OpenAI has likely just given some non-vested equity, which in
        | some cases is worth a lot of money. So yeah, some (former)
        | employees are getting paid a lot to shut up. That's the least
        | unique contract ever, and there is nothing morally or legally
        | wrong with it.
        
       | mwigdahl wrote:
       | The best approach to circumventing the nondisclosure agreement is
       | for the affected employees to get together, write out everything
       | they want to say about OpenAI, train an LLM on that text, and
       | then release it.
       | 
       | Based on these companies' arguments that copyrighted material is
       | not actually reproduced by these models, and that any seemingly-
       | infringing use is the responsibility of the user of the model
       | rather than those who produced it, anyone could freely generate
       | an infinite number of high-truthiness OpenAI anecdotes, freshly
       | laundered by the inference engine, that couldn't be used against
       | the original authors without OpenAI invalidating their own legal
       | stance with respect to their own models.
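        | 
        | A minimal, untested sketch of how that laundering pipeline
        | might look (gpt2 and "anecdotes.txt" are stand-ins; assumes
        | the HuggingFace transformers and datasets libraries):
        | 
        |     # Fine-tune a small causal LM on a file of anecdotes,
        |     # then distribute only the weights, never the words.
        |     from datasets import load_dataset
        |     from transformers import (AutoModelForCausalLM,
        |         AutoTokenizer, DataCollatorForLanguageModeling,
        |         Trainer, TrainingArguments)
        | 
        |     tok = AutoTokenizer.from_pretrained("gpt2")
        |     tok.pad_token = tok.eos_token
        |     model = AutoModelForCausalLM.from_pretrained("gpt2")
        | 
        |     # "anecdotes.txt": hypothetical, one story per line
        |     data = load_dataset("text", data_files="anecdotes.txt")
        |     data = data["train"].map(
        |         lambda b: tok(b["text"], truncation=True),
        |         batched=True, remove_columns=["text"])
        | 
        |     trainer = Trainer(
        |         model=model,
        |         args=TrainingArguments(output_dir="laundered"),
        |         train_dataset=data,
        |         data_collator=DataCollatorForLanguageModeling(
        |             tok, mlm=False))
        |     trainer.train()
        |     model.save_pretrained("laundered")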
        
         | rlt wrote:
          | This would be hilarious and genius. Touché.
        
         | bboygravity wrote:
          | Genius. I'm praying for this to happen.
        
         | judge2020 wrote:
          | NDAs don't touch the copyright of the speech or written
          | works you produce after leaving; they just make it a breach
          | of contract to distribute those words.
        
           | otabdeveloper4 wrote:
           | Technically, no words are being distributed here. (At least
           | according to OpenAI lawyers.)
        
           | elicksaur wrote:
           | Following the legal defense of these companies, the employees
           | wouldn't be distributing any words. They're distributing a
           | model.
        
             | JumpCrisscross wrote:
              | They're still disseminating the information. For an NDA,
              | form matters much less than it does for copyright.
        
             | cqqxo4zV46cp wrote:
              | Please just stop. It's highly unlikely that any relevant
              | part of any reasonably structured NDA has any material
              | relevance to copyright. Why do developers think they can
              | just intuit this stuff? This is one step away from the
              | "stick the constitution to the back of my car in lieu of
              | a license plate" lunacy, just more trendy.
        
               | elicksaur wrote:
               | Actually, I'm a licensed attorney having some fun
               | exploring tongue-in-cheek legal arguments on the
               | internet.
               | 
               | But, I could also be a dog.
        
           | romwell wrote:
            | > they just make it a breach of contract to distribute
            | those words.
            | 
            | See, they aren't distributing the words, and good luck
            | proving that any specific words went into training the
            | model.
        
         | TeMPOraL wrote:
         | Clever, but no.
         | 
          | The argument that LLMs are not copyright laundromats hinges
          | on the _scale_ and non-specificity of training. There's a
          | difference between "the LLM reproduced this piece of
          | copyrighted work because it memorized it from being fed
          | _literally half the internet_" and "the LLM was
          | intentionally trained to reproduce variants of this
          | particular work". Whatever one's stance on the former case,
          | the latter would be plainly infringing copyright _and_
          | admitting to it.
          | 
          | In other words: GPT-4 gets away with occasionally spitting
          | out something real verbatim. Llama2-7b-finetune-NYTArticles
          | does not.
        
           | romwell wrote:
            | Cool, just feed ChatGPT the same half of the Internet,
            | _plus_ OpenAI founders' anecdotes about the company.
            | 
            | Ta-da.
        
             | TeMPOraL wrote:
             | And be rightfully sacked for maliciously burning millions
             | of dollars on a retrain to purposefully poison the model?
             | 
             | Not to mention: LLMs aren't oracles. Whatever they say will
             | be dismissed as hallucinations if it isn't corroborated by
             | other sources.
        
               | romwell wrote:
               | >And be rightfully sacked for maliciously burning
               | millions of dollars on a retrain to purposefully poison
               | the model?
               | 
                | Does it really take _millions_ of dollars of compute
                | to add additional training data to an existing model?
                | 
                | Plus, we're talking about employees who are leaving or
                | have left anyway.
               | 
               | >Not to mention: LLMs aren't oracles. Whatever they say
               | will be dismissed as hallucinations if it isn't
               | corroborated by other sources.
               | 
               | Excellent. That means plausible deniability.
               | 
               | Surely all those horror stories about unethical behavior
               | are just hallucinations, no matter how specific they are.
               | 
               | Absolutely no reason for anyone to take them seriously.
               | Which is why the press will not hesitate to run with
               | that, with appropriate disclaimers, of course.
               | 
                | Seriously, you seem to think that in a world where
                | death-toll numbers from Gaza are taken verbatim _from
                | Hamas_ without being corroborated by other sources, an
                | AI model's output will not pass the test of public
                | scrutiny?
                | 
                | Very optimistic of you.
        
           | bluefirebrand wrote:
            | It seems absurd that the scale being massive somehow makes
            | it better.
            | 
            | You would think massive scale just means it has infringed
            | _even more_ copyrights, and therefore should be in even
            | more hot water.
        
             | TeMPOraL wrote:
              | You may or may not agree with it, but that's the only
              | thing that makes it different: scale and non-specificity.
              | The same thing that worked for search engines, for
              | example.
              | 
              | My point isn't to argue the merits of that case; it's
              | just to point out that the OP's joke is like a
              | stereotypical LLM output: it seems to make sense, but
              | really doesn't.
        
             | NewJazz wrote:
             | My US history teacher taught me something important. He
             | said that if you are going to steal and don't want to get
             | in trouble, steal a whole lot.
        
               | PontifexMinimus wrote:
                | Copying one person is plagiarism. Copying lots of
                | people is research.
        
               | comfysocks wrote:
               | True, but if you research lots of sources and still emit
               | significant blocks of verbatim text without attribution,
               | it's still plagiarism. At least that's how human authors
               | are judged.
        
               | TeMPOraL wrote:
               | Plagiarism is not illegal, it is merely frowned on, and
               | only in certain fields at that.
        
               | bayindirh wrote:
                | This is a reductionist take. Maybe it's not _illegal
                | per se_ where you live, but it always has
                | ramifications, and those ramifications affect your
                | future a whole lot.
        
               | psychoslave wrote:
                | Scale might be a factor, but it's not the only one.
                | Your neighbor might not care if you steal a blade of
                | grass from their lawn, and might feel powerless if
                | you're the bloody dictator of a country that wastes a
                | tremendous amount of resources on socially useless
                | whims, funded by overwhelming taxes.
                | 
                | But most people don't want to live in permanent mental
                | distress from shame over past actions or fear of
                | rebellion, I guess.
        
               | throwaway2037 wrote:
               | Very interesting post! Can you share more about your
               | teacher's reasoning?
        
               | SuchAnonMuchWow wrote:
                | It likely comes from a saying similar to this one:
                | "kill a few, you are a murderer; kill millions, you
                | are a conqueror".
                | 
                | More generally, we tend to view the number of
                | casualties in a war as one large number, and not as
                | the sum of all the individual tragedies it represents,
                | the way we perceive it when fewer people die.
        
             | omeid2 wrote:
              | It may not make a lot of sense, but it follows the "fair
              | use" doctrine, which is generally based on the following
              | four factors:
              | 
              | 1) the purpose and character of the use;
              | 
              | 2) the nature of the copyrighted material;
              | 
              | 3) the *amount* and *substantiality* of the portion
              | taken; and
              | 
              | 4) the effect of the use upon the *potential market*.
              | 
              | So in that regard, if you're training a
              | personal-assistant GPT and use some software code to
              | teach your model logic, that is easy to defend as fair
              | use.
              | 
              | But the extent of use matters: if you're training an AI
              | for the sole purpose of regurgitating specific
              | copyrighted material, that is infringement. In this
              | case, though, it is not a copyright issue at all; it is
              | a matter of contracts and NDAs.
        
             | kmeisthax wrote:
              | So, the law has this concept of 'de minimis'
              | infringement, where if you take a very small amount -
              | like, way smaller than even a fair use - the courts
              | don't care. If you're taking a handful of word
              | probabilities from every book ever written, then the
              | portion taken from each work is very, very low, so
              | courts aren't likely to care.
              | 
              | If you're only training on a handful of works then
              | you're taking more from each of them, meaning it's not
              | de minimis.
              | 
              | For the record, I got this legal theory from Cory
              | Doctorow[0], but I'm skeptical. It's very plausible, but
              | at the same time, we also thought sampling in music was
              | de minimis until the Second Circuit said otherwise.
              | Copyright law is extremely malleable in the presence of
              | moneyed interests, sometimes without Congressional
              | intervention even!
              | 
              | [0] who is NOT pro-AI; he just thinks labor law is a
              | better bulwark against it than copyright
        
               | wtallis wrote:
               | If your training process ingests the entire text of the
               | book, and trains with a large context size, you're
               | getting more than just "a handful of word probabilities"
               | from that book.
        
               | ben_w wrote:
               | If you've trained a 16-bit ten billion parameter model on
               | ten trillion tokens, then the mean training token changes
               | 2/125 of a bit, and a 60k word novel (~75k tokens)
               | contributes 1200 bits.
               | 
               | It's up to you if that counts as "a handful" or not.
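                | 
                | A quick back-of-the-envelope check of those figures (a
                | sketch; the 75k-tokens-per-novel figure is the
                | assumption from above):
                | 
                |     # 10B params at 16 bits, trained on 10T tokens
                |     bits = 10e9 * 16
                |     tokens = 10e12
                |     per_token = bits / tokens   # 0.016 = 2/125 bit
                |     print(per_token * 75_000)   # 1200.0 bits/novel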
        
               | hansworst wrote:
               | I think it's questionable whether you can actually use
               | this bit count to represent the amount of information
               | from the book. Those 1200 bits represent the way in which
               | this particular book is different from everything else
               | the model has ingested. Similarly, if you read an entire
               | book yourself, your brain will just store the salient
               | bits, not the entire text, unless you have a photographic
               | memory.
               | 
               | If we take math or computer science for example: some
               | very important algorithms can be compressed to a few bits
               | of information if you (or a model) have a thorough
               | understanding of the surrounding theory to go with it.
               | Would it not amount to IP infringement if a model
               | regurgitates the relevant information from a patent
               | application, even if it is represented by under a
               | kilobyte of information?
        
               | ben_w wrote:
               | I agree with what I think you're saying, so I'm not sure
               | I've understood you.
               | 
               | I think this is all still compatible with saying that
               | ingesting an entire book is still:
               | 
               | > If you're taking a handful of word probabilities from
               | every book ever written, then the portion taken from each
               | work is very, very low
               | 
               | (Though I wouldn't want to make a bet either way on "so
               | courts aren't likely to care" that follows on from that
               | quote: my not-legally-trained interpretation of the rules
               | leads to me being confused about how traditional search
               | engines aren't a copyright violation).
        
               | snovv_crash wrote:
                | If I invent an amazing lossless compression algorithm
                | such that adding an entire 60k-word novel to my blob
                | only increases its size by 1200 bits, does that mean
                | I'm not infringing copyright if I release that model?
        
               | Sharlin wrote:
               | How is that relevant? If some LLM were able to
               | regurgitate a 60k word novel verbatim on demand, sure,
               | the copyright situation would be different. But last I
               | checked they can't, not 60k, 6k, or even 600 words.
               | Perhaps they can do 60 words of some well-known passages
               | from the Bible or other similar ubiquitous copyright-free
               | works.
        
               | andrepd wrote:
                | xz can compress the text of Harry Potter to a small
                | fraction of its original size. Does that mean I can
                | also distribute compressed copies of copyrighted works
                | and that's okay?
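                | 
                | (A quick sketch to measure such a ratio on any text
                | file, using Python's lzma module, the algorithm behind
                | xz; "book.txt" is a placeholder:)
                | 
                |     import lzma
                |     raw = open("book.txt", "rb").read()
                |     print(len(raw) / len(lzma.compress(raw)))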
        
               | ben_w wrote:
               | Can you get that book out of an LLM?
               | 
               | Because that's the distinction being argued here: it's "a
               | handful"[0] of probabilities, not the complete work.
               | 
               | [0] I'm not sold on the phrasing "a handful", but I don't
               | care enough to argue terminology; the term "handful"
               | feels like it's being used in a sorites paradox kind of
               | way: https://en.wikipedia.org/wiki/Sorites_paradox
        
               | Sharlin wrote:
               | Incredibly poor analogy. If an LLM were able to
               | regurgitate Harry Potter on demand like xz can, the
               | copyright situation would be much more black and white.
               | But they can't, and it's not even close.
        
               | realusername wrote:
               | You can't get Harry Potter out of the LLM, that's the
               | difference
        
               | throwaway2037 wrote:
                | To be fair, the OP raises an important question that I
                | hope smart legal minds are pondering. In my view, they
                | aren't looking for a "programmer answers a legal
                | question" response. Perhaps the right court would
                | agree with their premise. What the damages or
                | restrictions might be, I cannot speculate. Any IP
                | lawyers here who want to share some thoughts?
        
               | ben_w wrote:
               | Yup, that's fair.
               | 
               | As my not-legally-trained interpretation of the rules
               | leads to me being confused about how traditional search
               | engines aren't a copyright violation, I don't trust my
               | own beliefs about the law.
        
               | KoolKat23 wrote:
                | You don't even need to go this far.
                | 
                | The word probabilities are transformative use, a form
                | of fair use, and aren't an issue.
                | 
                | The specific output at each point in time is what
                | would be judged to be fair use or copyright
                | infringement.
                | 
                | I'd argue the user would be responsible for ensuring
                | they're not infringing with the output, e.g. by
                | distributing it for profit, since they fed the model
                | the inputs that led to that output. In the same way,
                | you can't sue Microsoft because someone typed up
                | copyrighted works in Microsoft Word and then
                | distributed them for profit.
                | 
                | De minimis is still helpful here; not all
                | infringements are noteworthy.
        
               | rcbdev wrote:
               | OpenAI is outputting the partially copyright-infringing
               | works of their LLM for profit. How does that square?
        
               | throwaway2037 wrote:
               | You raise an interesting point. If more professional
               | lawyers agreed with you, then why have we not seen a
               | lawsuit from publishers against OpenAI?
        
               | dgoldstein0 wrote:
               | Some of them are suing
               | 
               | https://www.nytimes.com/2023/12/27/business/media/new-
               | york-t... https://www.reuters.com/legal/us-newspapers-
               | sue-openai-copyr... https://www.washingtonpost.com/techno
               | logy/2024/04/09/openai-...
               | 
               | Some decided to make deals instead
        
               | KoolKat23 wrote:
                | You, the user, are inputting variables into their
                | probability algorithm, and that results in the
                | copyrighted work. It's just a tool.
        
               | DaSHacka wrote:
                | How is that any different from training a model on
                | content protected under an NDA and giving users access
                | via a web portal?
                | 
                | What difference lets OpenAI get away with it, but not
                | our hypothetical Mr. Smartass running the same process
                | to get around an NDA?
        
               | KoolKat23 wrote:
               | Well if OpenAI signed an NDA beforehand to not disclose
               | certain training data it used, and then users actually do
               | access this data, then yes it would be problematic for
               | OpenAI, under the terms of their signed NDA.
        
               | maeil wrote:
                | Let's say a torrent website asks users, through an LLM
                | interface, what kind of copyrighted content they want
                | to download, then offers them links based on that, and
                | makes money off of it.
                | 
                | The user is "inputting variables into their
                | probability algorithm" that produce the copyrighted
                | work.
        
               | KoolKat23 wrote:
                | Theoretically, a torrent website that does not itself
                | distribute the copyrighted files in any way should be
                | legal, unless there's a specific law against it (I'm
                | unaware of any, but I may be wrong).
                | 
                | The cases against them tend to argue conspiracy to
                | commit copyright infringement, which is tenuous unless
                | they can prove that infringement was actually the
                | site's intention. I think in most cases it's
                | ISP/hosting terms and conditions and legal costs that
                | lead to these sites' demise.
                | 
                | Your example of the model asking specifically "what
                | copyrighted content would you like to download" kinda
                | implies that conspiracy to commit copyright
                | infringement would be a valid charge.
        
               | surfingdino wrote:
                | MS Word does not actively collect and process texts
                | from all available sources, and does not offer them
                | back in recombined form. MS Word is passive, whereas
                | the whole point of an LLM is to produce output using a
                | model trained on ingested data. It actively processes
                | vast amounts of text with the intent to make it
                | available for others to use, and the T&Cs state that
                | the user owns the copyright to outputs that are based
                | on the works of other copyright owners. LLMs give the
                | user a CCL (Collateralised Copyright Liability, a bit
                | like a CDO) without a way of tracing the sources used
                | to train the model.
        
               | throwaway2037 wrote:
                | First, I agree with nearly everything that you wrote.
                | Very thoughtful post! However, I have some issues with
                | the last sentence.
                | 
                | > Collateralised Copyright Liability
                | 
                | Is this a real legal/finance term, or did you make it
                | up?
                | 
                | Also, I do not follow your leap comparing LLMs to CDOs
                | (collateralised debt obligations). And do you
                | specifically mean CDOs, or any kind of
                | mortgage/commercial-loan structured finance deal?
        
               | surfingdino wrote:
                | My analogy is based on the fact that nobody could see
                | what was inside CDOs, nor did they want to; all they
                | wanted to do was pass them on to the next sucker. It
                | was all fun until it all blew up. LLM operators behave
                | the same way with copyrighted material. For context,
                | read https://nymag.com/news/business/55687/
        
               | KoolKat23 wrote:
                | Legally, copyright is only concerned with the specific
                | end work: a unique (or not so unique) standalone
                | object that is being scrutinized, if that framing
                | helps.
                | 
                | The process involved in producing that end work is
                | completely irrelevant to any copyright case. A claim
                | can be made against the model's weights (not viable,
                | as that's fair use), or against a specific one-off
                | output (less clear), but it can't be looked at as a
                | whole.
        
               | dgoldstein0 wrote:
                | I don't think that's accurate. The US Copyright Office
                | issued guidance last year that basically said anything
                | generated with AI can't be copyrighted, as human
                | authorship/creation is required for copyright. Works
                | can incorporate AI-generated content, but then those
                | parts aren't covered by copyright.
                | 
                | https://www.federalregister.gov/documents/2023/03/16/2023
                | -05...
                | 
                | So I think the law, at least as currently interpreted,
                | does care about the process.
                | 
                | Though maybe you meant as to whether a new work
                | infringes existing copyright? This guidance is clearly
                | about new copyright.
        
               | KoolKat23 wrote:
                | These are two sides of the same coin, and what I'm
                | saying still stands. That guidance is about who you
                | attribute authorship to when copyrighting a specific
                | work: basically, on the application form, the author
                | must be a human. The reason it was worth clarifying is
                | that they had received applications attributing
                | authorship to AIs, and since legal persons that aren't
                | human do exist (such as companies), they're making it
                | clear the author has to be human.
                | 
                | Who created the work? The user who instructed the AI
                | (it's a tool); you can't attribute it to the AI. That
                | would be the equivalent of crediting Photoshop as
                | co-author of your work.
        
               | arrowsmith wrote:
               | Couldn't you just generate it with AI then say you wrote
               | it? How could anyone prove you wrong?
        
               | KoolKat23 wrote:
               | That's what you're supposed to do. No need to hide it
               | either :).
        
               | kibibu wrote:
               | Is converting an audio signal into the frequency domain,
               | pruning all inaudible frequencies, and then Huffman
               | encoding it transformative?
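               | 
               | That's essentially the MP3 recipe. For concreteness, a
               | toy version of the pipeline (a sketch assuming numpy;
               | the magnitude floor stands in for a real psychoacoustic
               | masking model):
               | 
               |     import heapq
               |     from collections import Counter
               |     import numpy as np
               | 
               |     def huffman_code(symbols):
               |         # Build a prefix code from symbol frequencies.
               |         heap = [[n, [s, ""]] for s, n in
               |                 Counter(symbols).items()]
               |         heapq.heapify(heap)
               |         while len(heap) > 1:
               |             lo = heapq.heappop(heap)
               |             hi = heapq.heappop(heap)
               |             for pair in lo[1:]:
               |                 pair[1] = "0" + pair[1]
               |             for pair in hi[1:]:
               |                 pair[1] = "1" + pair[1]
               |             heapq.heappush(
               |                 heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
               |         return dict(heapq.heappop(heap)[1:])
               | 
               |     def toy_codec(samples, floor=0.01):
               |         # 1. Into the frequency domain.
               |         spec = np.fft.rfft(samples)
               |         # 2. Prune "inaudible" bins below a magnitude
               |         #    floor (real codecs use masking models).
               |         mag = np.abs(spec)
               |         spec[mag < floor * mag.max()] = 0.0
               |         # 3. Quantize and entropy-code the survivors
               |         #    (imaginary parts dropped for brevity).
               |         q = np.round(spec.real).astype(int).tolist()
               |         code = huffman_code(q)
               |         return "".join(code[s] for s in q)
               | 
               | Every step throws away or re-encodes information, yet
               | the output exists purely to stand in for the original
               | recording.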
        
               | KoolKat23 wrote:
               | Well, if the end result is something completely
               | different, such as an algorithm for determining which
               | music is popular or which song is playing, then yes,
               | it's transformative.
               | 
               | If it's merely a compressed version of a song intended
               | to be used in the same way as the original copyrighted
               | work, that would be copyright infringement.
        
               | bryanrasmussen wrote:
               | >we also thought sampling in music was de minimus
               | 
               | I would think if I can recognize exactly what song it
               | comes from - not de minimis.
        
               | throwaway2037 wrote:
               | When I was younger, I was told that the Beastie Boys
               | album Paul's Boutique was the straw that broke the
               | camel's back! I have no idea if this is true, but that
               | album has a batshit crazy amount of recognizable
               | samples. I doubt very much that the Beastie Boys paid
               | anything for the rights to sample.
        
               | Gravityloss wrote:
               | I think with some AI you could reproduce artworks of
               | obscure indie artists who are working right now.
               | 
               | If you were a director at a game company and needed art
               | in that style, it would be cheaper to have the AI do it
               | instead of buying from the artist.
               | 
               | I think this is currently an open question.
        
               | dgoldstein0 wrote:
               | I recently read an article (that I annoyingly can't find
               | again) about an art director at a company that decided
               | to hire some prompters. They got some art, told them to
               | completely change it, got other art, told them to make
               | smaller changes... and then got nothing useful, as the
               | prompters couldn't tell the AI "like that, but make this
               | change". AI art may get there in a few years, or maybe a
               | decade or two, but it's not there yet. (End of that
               | article: they fired the prompters after a few days.)
               | 
               | An AI-enhanced Photoshop, however, could do wonders, as
               | the base capabilities seem to be mostly there. I haven't
               | used any of the newer AI stuff myself, but
               | https://www.shruggingface.com/blog/how-i-used-stable-
               | diffusi... makes it pretty clear the building blocks are
               | largely there. So my guess is the main disconnect is in
               | making the machines understand natural-language
               | instructions for how to change the art.
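               | 
               | For a sense of what I mean by building blocks, the local
               | edit loop looks roughly like this (a sketch I haven't
               | actually run, using the Hugging Face diffusers
               | inpainting pipeline; the model id, file names, and
               | prompt are just examples):
               | 
               |     import torch
               |     from PIL import Image
               |     from diffusers import StableDiffusionInpaintPipeline
               | 
               |     pipe = StableDiffusionInpaintPipeline.from_pretrained(
               |         "runwayml/stable-diffusion-inpainting",
               |         torch_dtype=torch.float16,
               |     ).to("cuda")
               | 
               |     # White pixels in the mask get repainted; black
               |     # pixels are kept, so the edit stays local instead
               |     # of regenerating the whole image.
               |     image = Image.open("concept_art.png").convert("RGB")
               |     mask = Image.open("edit_region.png").convert("RGB")
               | 
               |     result = pipe(
               |         prompt="same scene, but make the door red",
               |         image=image,
               |         mask_image=mask,
               |     ).images[0]
               |     result.save("concept_art_v2.png")
               | 
               | The missing piece is going from "like that but make this
               | change" to a mask and a prompt automatically.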
        
             | tempodox wrote:
             | Almost reminds one of real life: The big thieves get away
             | and have a fan base while the small ones get prosecuted as
             | criminals.
        
             | blksv wrote:
             | It is the same scale argument that allows you to publish a
             | photo of a procession without written consent from every
             | participant.
        
           | adra wrote:
           | Which has been established in court where?
        
             | sundalia wrote:
             | +1, this is just the commenter saying what they want
             | without an actual court case to back it up.
        
               | cj wrote:
               | The justice system moves an order of magnitude slower
               | than technology.
               | 
               | It's the Wild West. The lack of a court case has no
               | bearing on whether or not what they're doing is right or
               | wrong.
        
               | 6510 wrote:
               | Sounds like the standard disrupt formula should apply.
               | Can't we stuff the court into an app? I kinda dislike the
               | idea of getting a different sentence for anything related
               | to appearance or presentation.
        
             | TeMPOraL wrote:
             | And it matters how? I didn't say the argument is correct
             | or approved by a court, or that I even support it. I'm
             | explaining what the argument OP referenced is about, and
             | how it differs from their proposal.
        
           | makeitdouble wrote:
           | My takeaway is that we should talk about our experience at
           | companies at a large enough scale that it becomes non-
           | specific in principle, and not targeted at a single
           | company.
           | 
           | Basically, we need our own open-source version of Glassdoor
           | as an LLM?
        
             | TeMPOraL wrote:
             | This exists, it's called /r/antiwork :).
             | 
             | OP wants to achieve effects of specific accusation using
             | only non-specific means; that's not easy to pull off.
        
           | 8note wrote:
           | The scale of two people should be large enough to make it
           | ambiguous who spilled the beans, at least.
        
           | tadfisher wrote:
           | To definitively prove this either way, they'll have to make
           | their source code and model available (maybe under subpoena
           | and/or gag order), so don't expect this issue to be actually
           | tested in court (so long as the defendants have enough VC
           | money).
        
           | dorkwood wrote:
           | How many sources do you need to steal from for it to no
           | longer be considered stealing? Two? Three? A hundred?
        
             | TeMPOraL wrote:
             | Copyright infringement is not stealing.
        
               | psychoslave wrote:
               | True.
               | 
               | Making people believe that anything but their own body
               | and mind can be considered part of their own property is
               | stealing their lucidity.
        
           | anigbrowl wrote:
           | It's not a copyright violation if you voluntarily provide the
           | training material...
        
             | XorNot wrote:
             | I don't know why copyright is getting involved here. The
             | clause is about criticizing the company.
             | 
             | Releasing an LLM trained on company criticisms, by people
             | specifically instructed not to produce them, is
             | transparently violating the agreement, because you're
             | intentionally publishing criticism of the company.
        
           | aprilthird2021 wrote:
           | > In other words: GPT-4 gets to get away with occasionally
           | spitting out something real verbatim. Llama2-7b-finetune-
           | NYTArticles does not.
           | 
           | Based on what? That isn't a legal argument that will hold
           | water in any court I'm aware of.
        
           | throwaway2037 wrote:
           | > LLMs not being copyright laundromats
           | 
           | This is a brilliant phrase. You might as well put it into
           | an Emacs paste macro now; it won't be the last time you
           | need it. And the OP is classic HN folly, where a programmer
           | thinks laws and courts can be hacked with "this one weird
           | trick".
        
             | calvinmorrison wrote:
             | But they can, just look at AirBnB, Uber, etc.
        
               | throwaway2037 wrote:
               | No, lots of jurisdictions outside the US fought back
               | against those shady practices.
        
               | abofh wrote:
               | You mean unregulated hotels and on-demand taxis?
               | 
               | Uber is no longer subsidized (or even cheap) in most
               | places, it's just an app for summoning taxis and
               | overpriced snacks. AirBnB is underregulated housing for
               | nomads at this point.
               | 
               | Your examples sorta prove the point - they didn't succeed
               | in what they aimed at doing, so they pivoted until the
               | law permitted it.
        
         | otterley wrote:
         | IAAL (but not your lawyer and this is not legal advice).
         | 
         | That's not how it works. It doesn't matter if you write the
         | words yourself or have an agent write them for you. In either
         | case, it's the communication of the covered information that is
         | proscribed by these kinds of agreements.
        
         | visarga wrote:
         | No need for an LLM; an anonymous letter does the same thing
        
           | throwaway2037 wrote:
           | At first blush, this sounds like a good idea. Thinking
           | deeper, the company is so small that it would be easy to
           | identify the author.
        
         | Always42 wrote:
         | If I slaved away at OpenAI for a year to get some equity, I
         | don't think I would want to be the one to try this strategy.
        
         | renewiltord wrote:
         | To be honest, you can just say "I don't have anything to
         | add on that subject" and people will get the impression. No
         | one ever says that about companies they like, so when
         | people shut down you know something was up.
         | 
         | "What was the company culture like?" "Etc. platitude so on
         | and so forth"
         | 
         | "And I heard the CEO was a total dickbag. Was that your
         | experience working with him?" "I don't have anything to add
         | on that subject"
         | 
         | Of course, going back and forth on that won't really work
         | with one person, but you can't be expected to withhold the
         | nice things from different people, so someone could build
         | up a story based on what you do and don't say.
        
         | jahewson wrote:
         | Ha ha, but no. For starters, copyright falls under federal
         | law and contracts under state law, so it's not even possible
         | to make this claim in the relevant court.
        
         | KoolKat23 wrote:
         | Lol, this would be a great performative piece, although I'm
         | not so sure it'd stand up to scrutiny. OpenAI could probably
         | take them to court on the grounds of disclosure of trade
         | secrets or something like that and force them to reveal the
         | training data, thus potentially revealing its source.
        
           | nextaccountic wrote:
           | If they did so, they would open themselves up to lawsuits
           | from people unhappy about OpenAI's own training data.
           | 
           | So they probably won't.
        
             | KoolKat23 wrote:
             | Good point
        
         | NoMoreNicksLeft wrote:
         | NDAs don't rely on copyright to protect the party who
         | drafted them from disclosure. There might even be an
         | argument to be made that training the LLM on it was
         | disclosure, regardless of whether you release the LLM
         | publicly or not. We all work in tech, right? Why do you
         | people get intellectual property so wrong, every single
         | time?
        
         | andyjohnson0 wrote:
         | Clever, but the law is not a machine or an algorithm. Intent
         | matters.
         | 
         | Training an LLM with the intent of contravening an NDA is just
         | plain <intent to contravene an NDA>. Everyone would still get
         | sued anyway.
        
           | jeffreygoesto wrote:
           | But then training a commercial model is done with the
           | intent to not pay the original authors; how is that
           | different?
        
             | kdnvk wrote:
             | It's not done with the intent to infringe copyright.
        
               | binkethy wrote:
               | It would appear that it explicitly IS done with this
               | intent. We are told that an LLM is a living being that
               | merely learns and then creates, and yet we are aware
               | that its outputs regurgitate combinations of its inputs.
        
             | repeekad wrote:
             | > done with the intent to not pay the original authors
             | 
             | No one building this software wants to "steal from
             | creators", and the legal precedent for using copyrighted
             | works for the purpose of training is being tested right
             | now in the NYT case against OpenAI.
             | 
             | It's why things like the recent deal with Reddit to train
             | on their data (which Reddit owns and which users give up
             | when using the platform) are becoming so important; same
             | with Twitter/X.
        
               | kaoD wrote:
               | > no one building this software wants to "steal from
               | creators"
               | 
               | > It's why things like the recent deal[s ...] are
               | becoming so important
               | 
               | Sorry but I don't follow. Is it one or the other?
               | 
               | If they didn't want to steal from the original authors,
               | why do they not-steal Reddit now? What happens with the
               | smaller creators that are not Reddit? When is OpenAI
               | meeting with me to discuss compensation?
               | 
               | To me your post felt something like "I'm not robbing
               | you, Small State Without Defense that I just invaded,
               | _I just want to have your petroleum_, but I'm paying
               | Big State for theirs because they can kick my ass".
               | 
               | Aren't the recent deals actually implying that
               | everything so far has been done with the intent of not
               | compensating the source data creators? If that were not
               | the case, they wouldn't need any deals now; they'd just
               | continue happily doing whatever they've been doing,
               | which is oh so clearly lawful.
               | 
               | What did I miss?
        
               | repeekad wrote:
               | The law is slow and always playing catch-up in terms of
               | prosecution; it's not clear today because this kind of
               | copyright question has never been an issue before.
               | Usually it's just outright stealing of protected
               | content; no one ever imagined "training" as a use case.
               | Humans "train" on copyrighted works all the time,
               | ideally works they purchased for said purpose. The same
               | will start to apply to AI: you have to have rights to
               | the data for that purpose, hence these deals getting
               | made. In the meantime it's ask for forgiveness, not
               | permission, and companies like Google (less so OpenAI)
               | are ready to go with data governance that lets them
               | remove copyright-requested data and keep the rest of
               | the model working fine.
               | 
               | Let's also be clear that making deals with Reddit isn't
               | stealing from creators; it's not a platform where you
               | own what you type in. Same on here: this is all public
               | domain with no assumed rights to the text. If you write
               | a book and OpenAI trains on it and starts telling it to
               | kids at bedtime, you 100% will have a legal claim in
               | the future, but the companies already have protections
               | in place to prevent exactly that. For example, if you
               | own your website you can request the data not be
               | crawled, but ultimately if your text is publicly
               | available anyone is allowed to read it, and whether
               | anyone is allowed to train AI on it is an open question
               | that companies are trying to get ahead of.
        
               | kaoD wrote:
               | That seems even worse: they had intent to steal and now
               | they're trying to make sure it is properly legislated so
               | nobody else can do it, thus reducing competition.
               | 
               | GPT can't get retroactively untrained on stolen data.
        
               | repeekad wrote:
               | Google actually can "untrain", AFAIK. My limited
               | understanding is that they have good controls over
               | their data and its sources, because they know it could
               | be important in the future; I'm not sure about GPT.
               | 
               | I'm not sure what you mean by "steal", because it's a
               | relative term now: me reading your book isn't stealing
               | if I paid for it and it inspires me to write my own
               | novel about a totally new story. And if you posted your
               | book online, as of right now the legal precedent is
               | that you didn't make any claims to it (anyone could
               | read it for free), so it's fair game to train on, just
               | like the text I'm writing now also has no protections.
               | 
               | Nearly all Reddit history up to a certain date is
               | available for download online; only once they changed
               | their policies did they start having tighter controls
               | over how their data could be used.
        
             | mpweiher wrote:
             | Chutzpah. And that the companies doing it are multi-billion
             | dollar companies who can afford the finest legal
             | representation money can buy.
             | 
             | Whether the brazenness with which they are doing this will
             | work out for them is currently playing out in the courts.
        
           | bazoom42 wrote:
           | It is a classic geek fallacy to think you can hack the law
           | with logic tricks.
        
             | andyjohnson0 wrote:
             | Indeed it is. Obligatory xkcd - https://xkcd.com/1494/
        
         | p0w3n3d wrote:
         | That's the evilest thing I can imagine - fighting them with
         | their own weapon.
        
         | bbarnett wrote:
         | Copyright != an NDA. Copyright is not an agreement between two
         | entities, but a US federal law, with international obligations
         | both ratified and not.
         | 
         | Copyright has fair use clauses, endless court decisions
         | limiting its use, carve-outs for libraries, additional junk
         | like the DMCA, and more slapped on top. It's a patchwork of
         | dozens of treaties and laws, spanning hundreds of years.
         | 
         | For example, you can read a book to a room full of kids, you
         | can use copyrighted materials in comedic skits, you can
         | quote snippets; the list goes on. And again, this is all
         | legislated.
         | 
         | The point? It's complex, and whether a specific use of a
         | copyrighted work is infringing can be debatable without the
         | intent immediately being malign.
         | 
         | Meanwhile, an NDA covers far, far more than copyright. It
         | may cover discussion and disclosure of everything or
         | anything, including client lists, trade secrets, work
         | processes, and more. It is signed and agreed to by both
         | parties involved. Equating "copyright law" to "an NDA" is a
         | non-starter. There's literally zero legal parallel or
         | comparison here.
         | 
         | And as others have mentioned, the intent of the act would be
         | malicious on top of all of this.
         | 
         | I know a lot of people dislike the whole data grab by
         | OpenAI, and have moral or ethical objections to closed
         | models, but thinking anyone would care about this argument
         | if you breach an NDA is a bad idea. No judge would even
         | remotely accept or listen to such chicanery.
        
         | cqqxo4zV46cp wrote:
         | I'm going to break rank from everyone else and explicitly say
         | "not clever". Developers that think that they know how the
         | levels system works are a dime a dozen. It's both easy and
         | useless to take some acquired-in-passing largely incorrect
         | surface level understanding of a legal mechanic and "pwned with
         | facts and logic!" in whichever way benefits you.
        
       | Madmallard wrote:
       | I'm really sick of seeing people jump in and accelerate the
       | demise of society wholeheartedly due to greed.
        
       | underlogic wrote:
       | This is bizarre. Someone hands you a contract as you're
       | leaving a company, and if you refuse to agree to whatever they
       | dreamt up and sign, the company takes back the equity you
       | earned? That can't be legal.
        
         | ajross wrote:
         | The argument would be that it's coercive. And it might be, and
         | they might be sued over it and lose. Basically the incentives
         | all run strongly in OpenAI's favor. They're not a public
         | company, vested options aren't stock and can't be liquidated
         | except with "permission", which means that an exiting employee
         | is probably not going to take the risk and will just sign the
         | contract.
        
         | throwaway743950 wrote:
         | It might be that they agree to it initially when hired, so it
         | doesn't matter if they sign something when they leave.
        
           | crooked-v wrote:
           | Agreements with surprise terms that only get detailed later
           | tend not to be very legal.
        
             | mvdtnz wrote:
             | How do you know there isn't a very clear term in the
             | employment agreement stating that upon termination you'll
             | be asked to sign an NDA on these terms?
        
               | romwell wrote:
               | Unless the terms of the NDA are provided upfront, that
               | sounds sketch AF.
               | 
               |  _" I agree to follow unspecified terms in perpetuity, or
               | return the pay I already earned"_ doesn't vibe with labor
               | laws.
               | 
               | And if those NDA terms were already in the contract,
               | there would be no need to sign them upon exit.
        
               | mvdtnz wrote:
               | > And if those NDA terms were already in the contract,
               | there would be no need to sign them upon exit.
               | 
               | If the NDA terms were agreed in an employment contract
               | they would no longer be valid upon termination of that
               | contract.
        
               | sratner wrote:
               | Plenty of contracts have survivorship clauses. In
               | particular, non-disclosure clauses and IP rights are the
               | ones to most commonly survive termination.
        
               | pests wrote:
               | Why not just get it signed then? You're signing to agree
               | to sign later?
        
               | klyrs wrote:
               | One particularly sus term in my employment agreement is
               | that I adhere to all corporate policies. Guess how many
               | of those there are, how often they're updated, and if
               | I've ever read them!
        
             | riehwvfbk wrote:
             | Doesn't even have to be a surprise. Pretty much every
             | startup employment agreement in existence gives the
             | company ("at the board's sole discretion") the right to
             | repurchase your shares upon termination of employment.
             | OpenAI's PPUs are worth $0 until they become profitable.
             | Guess which right they'll choose to exercise if you don't
             | sign the NDA?
        
               | lucianbr wrote:
               | Who would accept shares as valuable if the contract said
               | they can be repurchased from you at a price of $0? This
               | can't be it.
        
               | actionfromafar wrote:
               | It can. There are many ways to make the number go to
               | zero.
        
               | jbellis wrote:
               | I don't think the right to repurchase is routine. It was a
               | scandal a few years ago when it turned out that Skype did
               | that. https://www.forbes.com/sites/dianahembree/2018/01/1
               | 0/startup...
        
         | anon373839 wrote:
         | Hard to evaluate this without access to the documents. But
         | in CA, the payment of previously earned wages _cannot_ be
         | conditioned on signing an agreement.
         | 
         | Equity adds a wrinkle here, but I suspect if the effect of
         | canceling equity is to cause a forfeiture of earned wages,
         | then ultimately whatever contract is signed under that
         | threat is void.
        
           | theGnuMe wrote:
           | Well, some rich ex-OpenAI person should test this theory.
           | Only way to find out. I'm sure some of them are rich.
        
           | az226 wrote:
           | It's not even equity. OpenAI is a nonprofit.
           | 
           | They're profit participation units and probably come with a
           | few gotchas like these.
        
       | photochemsyn wrote:
       | OpenAI's military-industrial contracting options seem to be
       | making some folks quite nervous.
        
       | dandanua wrote:
       | With how things are unfolding, I wouldn't be surprised if,
       | after the creation of an AGI, the owners just kill anyone who
       | took part in building it. Singularity is real.
        
       | RockRobotRock wrote:
       | so much money stuffed in their mouths it's physically
       | impossible to talk
        
       | koolala wrote:
       | They all can combine their testimony into 1 document, give it to
       | an AI, and lol
        
       | StarterPro wrote:
       | Glad to see that all giant companies are just evil rich white
       | dudes racing each other to take over the world.
        
       | topspin wrote:
       | "making former employees sign extremely restrictive NDAs doesn't
       | exactly follow."
       | 
       | Once again, we see the difference between the public narrative
       | and the actions in a legal context.
        
       | almost_usual wrote:
       | This is what a dying company does.
        
       | jimnotgym wrote:
       | >the company will succeed at developing AI systems that make most
       | human labor obsolete.
       | 
       | Hmmmn. Most of the humans where I work do things physically
       | with their hands. I don't see what AI will achieve in their
       | area.
       | 
       | Can AI paint the walls in my house, fix the boiler, and swap
       | out the rotten windows? If so, I think a subscription to
       | ChatGPT is very reasonably priced!
        
         | renonce wrote:
         | I don't know, but once vision AI reacts to traffic
         | conditions accurately within 10ms, it's probably a matter of
         | time before it takes over your steering wheel. For other
         | jobs you'll need to wait for robotics.
        
           | LtWorf wrote:
           | It has to react "correctly"
        
         | cyberpunk wrote:
         | 4o groks realtime video; how far away are we from letting it
         | control robots bruv?
        
         | windowsrookie wrote:
         | Obviously, if your job requires blue-collar-style manual
         | labor, it's likely not going to be replaced anytime soon.
         | 
         | But if your job is mostly sitting at a computer, I would be a
         | bit worried.
        
           | eastbound wrote:
           | Given the low quality of relationships between customers
           | and the blue-collar trades - ever tried to get a job done
           | by a plumber or a painter? If you don't know how to do
           | their job, you are practically assured they will do
           | something behind your back that will fall apart in 2
           | years, for the price of 2x your daily rate as a software
           | engineer (when they don't straight up send an undocumented
           | immigrant, which makes you complicit in an unlawful
           | employment scheme if it is discovered)... well.
           | 
           | I'd say there is a lot of money available in replacing
           | blue-collar jobs with AI-powered robots. Even if they do
           | crap work, it's still better quality than contractors.
        
             | jimnotgym wrote:
             | Shoddy contractors can then give you a shoddy service
             | with a shoddy robot.
             | 
             | Quality contractors will still be around, but everyone
             | will try to beat them down on price, because they care
             | about that more than quality. The good contractors won't
             | be able to make any money because of this and will leave
             | the trade... just like now, just like I did.
        
               | eastbound wrote:
               | The argument "pay more to get better quality" would be
               | valid if, indeed, paying more meant better quality.
               | 
               | Unfortunately, it's something I've often done, either as
               | a 30% raise for my employees or giving a tip to a
               | contractor when I knew I'd take them again or taking the
               | most expensive one.
               | 
               | EACH time the work was much worse off after the raise.
               | The sad truth of humans is that you gotta keep them
               | begging to extract their best work, and no true reward is
               | possible.
        
           | drooby wrote:
           | Once AGI is solved, how long does it take for AGI (or
           | humans steering AGI) to create a robot that meets or
           | exceeds the abilities of the human body?
        
         | LtWorf wrote:
         | It has difficulty with middle-school math problems.
        
           | reducesuffering wrote:
           | The 1.5-year-old GPT-4 is getting a 4/5 on the AP Calculus
           | test, better than 95% of humans. Want to guess how much
           | better than people GPT-5 is going to be at all educational
           | tests?
        
             | LtWorf wrote:
             | I think the kind of problems we do in Italy aren't just
             | "solve this"; they are more "understand this text, then
             | figure out what you have to solve, then solve it".
        
               | reducesuffering wrote:
               | That sounds like the word problems that are on American
               | AP Calc tests. You can be the judge of them here:
               | https://apcentral.collegeboard.org/media/pdf/ap24-frq-
               | calcul...
        
         | jerrygenser wrote:
         | Robots that are powered by AI might be able to.
        
       | mise_en_place wrote:
       | Why indeed? But that's nobody's business except OpenAI and its
       | former employees. Doesn't matter if it's not legally enforceable,
       | or in bad taste. When you enter into a contract with another
       | party, it is between you and the other party.
       | 
       | If there is something unenforceable about these contracts, we
       | have the court system to settle these disputes. I'm tired of
       | living in a society where everyone's dirty laundry is aired out
       | for everyone to judge. If there is a crime committed, then sure,
       | it should become a matter of public record.
       | 
       | Otherwise, it really isn't your business.
        
         | 0xDEAFBEAD wrote:
         | >OpenAI's mission is to ensure that artificial general
         | intelligence (AGI)--by which we mean highly autonomous systems
         | that outperform humans at most economically valuable work--
         | benefits all of humanity.
         | 
         | >...
         | 
         | >We are concerned about late-stage AGI development becoming a
         | competitive race without time for adequate safety precautions.
         | 
         | From OpenAI's charter: https://openai.com/charter/
         | 
         | Now read Jan Leike's departure statement:
         | https://news.ycombinator.com/item?id=40391412
         | 
         | That's why this is everyone's business.
        
       | diebeforei485 wrote:
       | > For workers at startups like OpenAI, equity is a vital form of
       | compensation, one that can dwarf the salary they make.
       | Threatening that potentially life-changing money is a very
       | effective way to keep former employees quiet.
       | 
       | Yes, but:
       | 
       | (1) OpenAI salaries are not low like early-stage startup
       | salaries. Essentially these are highly paid jobs (high salary
       | and high equity) that require an NDA.
       | 
       | (2) Apple has also clawed back equity from employees who
       | violated NDAs. So this isn't all that unusual.
        
         | season2episode3 wrote:
         | Source on #2?
        
       | benreesman wrote:
       | This has just been crazy both to watch and in some small ways
       | interact with up close (I've had some very productive and some
       | regrettably heated private discussions advising former colleagues
       | and people I care about to GTFO before the shit _really_ hits the
       | rotary air impeller, and this is going to get so much worse).
       | 
       | This thread is full of comments making statements around this
       | looking like some level of criminal enterprise (ranging from "no
       | way that document holds up" to "everyone knows Sam is a crook").
       | 
       | The level of stuff ranging from vitriol to overwhelming, if
       | _maybe_ circumstantial (but conclusive to my personal
       | satisfaction), evidence of direct reprisal has just been
       | surreal, but it's surreal in a different way to see people
       | talking about this like it was never even controversial to be
       | skeptical/critical/hostile to this thing.
       | 
       | I've been saying that this looks like the next Enron, minimum,
       | for easily five years, arguably double that.
       | 
       | Is this the last straw where I stop getting messed around over
       | this?
       | 
       | I know better than to expect a ticker tape parade for having both
       | called this and having the guts to stand up to these folks, but I
       | do hold out a little hope for even a grudging acknowledgment.
        
         | 0xDEAFBEAD wrote:
         | There's another comment saying something sort of similar
         | elsewhere in this thread:
         | https://news.ycombinator.com/item?id=40396366
         | 
         | What made you think it was the next Enron five years ago?
         | 
         | I appreciate you having the guts to stand up to them.
        
           | benreesman wrote:
           | First, thank you for probably being the first person to
           | recognize in print that it wasn't easy to stand up to these
           | folks in public, plenty have said things like "you're
           | fighting the good fight" in private, but I think you're the
           | first person to in any sense second the motion in my personal
           | case, so big ups on having the guts to say it too.
           | 
           | I've never been a YC-funded founder myself, but I've had
           | multiple roommates who were, and a few girlfriends who were
           | on the bubble of like, founder and early employee, and I've
           | just generally been swimming in that pool to one degree or
           | another for coming up on 20 years (I always forget my join
           | date but it's on the order of like, 17 years or something).
           | 
           | So when a few dozen people you trust tell you the same thing,
           | you tend to buy it even if you're not quite ready to print
           | the worst hearsay (and I've heard things about Altman that I
           | believe but still wouldn't print without proof, dark shit).
           | 
           | The litany of scandals mounted (Green Dot, zero-rated pre-
           | IPO portfolio stock with, like, his brother involved,
           | Socialcam, the list just goes on), and at some point real
           | journalists started doing pieces (New Yorker, etc.).
           | 
           | And while some of my friends and former colleagues (well,
           | maybe former friends now) who joined are both eminently
           | qualified and as ethical as this business lets anyone be,
           | there was a skew there too: it skewed "opportunist, fails
           | up".
           | 
           | So it's a growing preponderance of evidence starting in
           | about 2009 and being published by credible journalists
           | starting about five years later; at some point I'm like,
           | "if even 5% of this is even a little true, this is beyond
           | the pale".
           | 
           | It's been a gradual thing, and people giving the benefit
           | of the doubt up until the November stuff are maybe just
           | _really_ charitable; at this point it's like, only a jury
           | can take the next steps trivially indicated.
        
             | brap wrote:
             | Don't forget WorldCoin!
        
               | benreesman wrote:
               | Yeah, I was trying to stay on topic but flagrant
               | violations of the Universal Declaration of Human Rights
               | are really Lawrence Summers's speciality.
               | 
               | I'm pretty embarrassed to have former colleagues who
               | openly defend shit like this.
        
         | danielbln wrote:
         | OpenAI was incorporated 9 years ago, but you easily saw that
         | it's the next Enron 10 years ago?
        
           | benreesman wrote:
           | I said easily five, not easily ten. I was alluding to it in
           | embryo with the comment that it's likely been longer.
           | 
           | If you meant that remark/objection in good faith then thank
           | you for the opportunity to clarify.
           | 
           | If not, then thank you for hanging a concrete example of
           | the kind of shit I'm alluding to (though at the extremely
           | mild end of the range) _directly_ off the claim.
        
       | mrweasel wrote:
       | When companies create rules like this, it tells me that they
       | are very unsure of their product. Either it doesn't work as
       | they claim, or it's incredibly simple to replicate. It could
       | also be that their entire business plan is insane. In any
       | case, something is basically wrong internally at OpenAI for
       | them to feel the need for this kind of rule.
       | 
       | If OpenAI and ChatGPT are so far ahead of everyone else, and
       | their product is so complex, it doesn't matter what a few
       | disgruntled employees do or say, so the rule is not required.
        
         | underdeserver wrote:
         | Forget their product; they're shady as employers,
         | intentionally doing something borderline legal when they
         | have all the negotiating power.
        
       | Delmololo wrote:
       | Why should they?
       | 
       | It's absolutely normal not to spill internals.
        
       | Al-Khwarizmi wrote:
       | _" It forbids them, for the rest of their lives, from criticizing
       | their former employer. Even acknowledging that the NDA exists is
       | a violation of it."_
       | 
       | I find it hard to understand that in a country that tends to take
       | freedom of expression so seriously (and I say this unironically,
       | American democracy may have flaws but that is definitely a
       | strength) it can be legal to silence someone for the rest of
       | their life.
        
         | SXX wrote:
         | This is not much worse than "forced arbitration". In the US
         | you can literally lose your rights by clicking an "Agree"
         | button.
        
         | borski wrote:
         | It's all about freedom from government tyranny and censorship.
         | Freedom from corporate tyranny is another matter entirely, and
         | generally relies on individuals being careful about what they
         | agree to.
        
           | bamboozled wrote:
           | America values money just as much as it values freedom. If
           | there is any chance the money-collection activities will
           | be disturbed, then heads will roll, violently.
           | 
           | See the assassination attempts on President Jackson.
        
           | sleight42 wrote:
           | And yet there was such a to-do about Twitter "censorship"
           | that Elon made it his mission to bring freedumb to
           | Twitter.
           | 
           | Though I suppose this is another corporate (really,
           | plutocratic) tyranny.
        
           | loceng wrote:
           | It's problematic when fascism forms, as has recently been
           | evident in social media working with government to censor
           | citizens; fascism being authoritarian politicians working
           | with industrial complexes to benefit each other.
        
         | DaSHacka wrote:
         | As others have mentioned, it's likely that many parts of
         | this NDA are unenforceable.
         | 
         | It's quite common for companies to put tons of extremely
         | restrictive terms in an NDA that they can't actually legally
         | enforce, to scare future ex-employees out of creating a
         | problem.
        
           | fastball wrote:
           | I wouldn't say that is "quite common". If you throw a bunch
           | of unenforceable clauses into an NDA/non-compete/whatever,
           | that increases the likelihood of the whole thing being thrown
           | out, which is not a can of worms most corporations want to
           | open. So it is actually toeing a delicate balance most of the
           | time, not a "let's throw everything we can into this legal
           | agreement and see what sticks".
        
             | tcbawo wrote:
             | > If you throw a bunch of unenforceable clauses into an
             | NDA/non-compete/whatever, that increases the likelihood of
             | the whole thing being thrown out
             | 
             | I'm not sure that this is true. Any employment contract
             | will have a partial invalidity/severability clause which
             | will preserve the contract if individual clauses are
             | unenforceable.
        
         | ryanmcgarvey wrote:
         | In America you're free to sign or not sign terrible contracts
         | in exchange for life altering amounts of money.
        
         | sundalia wrote:
         | How is it serious if money is the motor of freedom of speech?
         | The suing culture in the US ensures freedom of speech up until
         | you bother someone with money.
        
           | sleight42 wrote:
           | Change that to "bother someone with more money than you."
           | 
           | Essentially your point.
           | 
           | In the US, the wealthiest have most of the freedom. The rest
           | of us, who can be sued/fired/blackballed, are, by degrees,
           | merely serfs.
        
             | danielmarkbruce wrote:
             | In the US, anyone can sue. You can learn how. It's not
             | rocket science.
        
               | p1esk wrote:
               | Yes, you can learn how to sue. You can learn how to be a
               | doctor too. You can also learn rocket science. The third
               | one is the easiest to me, personally.
        
               | danielmarkbruce wrote:
               | If you can learn rocket science in x years, you can learn
               | how to sue in x days. So, do both.
        
       | whatever1 wrote:
       | So if I am a competitor, I just need to pay a current
       | employee like $2-3M to break their golden handcuffs, and then
       | they can freely start singing.
        
         | jakderrida wrote:
         | Not to seem combative, but that assumes that what they share
         | would be advantageous enough to justify the costs... On the
         | other hand, I'm thinking if I'm paying them to disclose all
         | proprietary technology and research for my product, that would
         | definitely make it worthwhile.
        
       | anvuong wrote:
       | This sounds very illegal; how is California allowing this?
        
         | Symmetry wrote:
         | Nobody has challenged it in court.
        
       | surfingdino wrote:
       | It's for the good of humanity, right? /s I wonder if Lex is going
       | to ask Sam about it the next time they get together for a chat on
       | YouTube?
        
         | brap wrote:
         | I kinda like Lex, but he never asks any difficult questions.
         | That's probably why he gets all these fancy guests on his show.
        
           | surfingdino wrote:
           | And he always ends with questions about love, just to pour
           | some more oil on the quiet seas :-) Nothing wrong with
           | that, but like you say, he asks safe questions.
        
           | reducesuffering wrote:
           | Worse, he will agree 95% with guest A's opinions, only for
           | guest B to come on the next episode and also get 95%
           | agreement. It would've been better for those opposing
           | guests to just debate each other. Like, I don't want to
           | see Lex and Yuval Noah Harari, then Lex and Bibi
           | Netanyahu; I'd rather see Yuval and Bibi. I don't want to
           | see Lex and Sama, then Lex and Eliezer; I'd rather see
           | Sama and Eliezer.
        
       | bambax wrote:
       | > _All of this is highly ironic for a company that initially
       | advertised itself as OpenAI_
       | 
       | Well... I know first hand that many well-informed, tech-
       | literate people still think that all products from OpenAI are
       | open source. Lying works, even in this most egregious of
       | fashions.
        
         | SXX wrote:
         | This is just Propaganda 101. Call yourself anti-fascist on
         | TV enough times for a decade, and then you can go
         | indiscriminately kill everyone you call a fascist.
         | 
         | Unfortunately, Orwellian propaganda works.
        
       | iamflimflam1 wrote:
       | Doesn't seem to be everyone -
       | https://x.com/officiallogank/status/1791652970670747909
        
         | smhx wrote:
         | that's a direct implication that they're waiting for a
         | liquidity event before they speak
        
       | Andrew_nenakhov wrote:
       | I wonder if employees rallying for Altman when the board was
       | trying to fire him were obligated to do it by some _secret
       | agreement_.
        
         | paulryanrogers wrote:
         | Even without explicit clauses, it's likely they feared the loss
         | of a (perceived) great man would impact their equity --
         | regardless of his character. Sadly there is too much faith in
         | these Jobs-esque 'great' men to drive innovation. It's a social
         | illness IMO.
        
           | doctorwho42 wrote:
           | It's a taught ideology/theory, the great man theory:
           | https://en.m.wikipedia.org/wiki/Great_man_theory
        
       | croes wrote:
       | I guess OpenAI made the hero-to-villain switch faster than
       | Google did when it dropped "don't be evil".
        
       | I_am_tiberius wrote:
       | I get Theranos / David Boies vibes.
        
       | i5heu wrote:
       | It is always so impressive to see what US law allows.
       | 
       | Not only would this be viewed as unethical in Germany; I
       | could see a CEO going to prison for such a thing.
        
         | Rinzler89 wrote:
         | Please stop with these incorrect generalizations. Hush
         | agreements are definitely allowed in Germany as well,
         | usually as part of golden parachutes.
         | 
         | I know a manager for an EV project at a big German auto
         | company who also had to sign one when he was let go, and was
         | compensated handsomely to keep quiet and not say a word or
         | face legal consequences.
         | 
         | IIRC he got ~12 months' wages, after a year of not doing
         | anything at work anyway. Bought a house in the south with
         | it. Good gig.
        
       | jstummbillig wrote:
       | I am confused about the source of the outrage. A situation where
       | nobody is very clear about what the claim is but everyone is very
       | upset, makes me suspicious.
       | 
       | Are employees being misled about the contract terms at the
       | time of signing? Because, obviously, the original contract
       | needs to have some clause regarding the equity situation,
       | right? We cannot just make that up at the end. So... are we
       | claiming fraud?
       | 
       | What I suspect is happening is that we are confusing an
       | option to forgo equity with an option to talk openly about
       | OpenAI stuff (an option that does not even have to exist in
       | the initial agreement, I would assume).
       | 
       | Is this overreach? Is this whole thing necessary? That seems
       | beside the point. Two parties agreed to the terms when signing
       | the contract. I have a hard time thinking of top AI
       | researchers as coerced into taking a job at OpenAI, or unable
       | to understand a contract, or unable to understand that they
       | should pay someone to explain it to them - so if that's not a
       | free decision, I don't know what is.
       | 
       | Which leads me to: If we think the whole deal is pretty shady -
       | well, it took two.
        
         | ghusbands wrote:
         | If the two parties are equal, sure. If it's a person vs a
         | corporation of significant size, then no, it's not safe to
         | assume that people have free choice. That's also ignoring
         | motivations apart from business ones, like them actually
         | wanting to be at the leading edge of AI research or wanting to
         | work with particular other individuals.
         | 
         | It's a common mistake on here to assume that for every decision
         | there are equally good other options. Also, the fact that they
         | feel the need to enforce silence so strongly implies at least a
         | little that they have something to hide.
        
           | hanspeter wrote:
           | AI researchers and engineers surely have the free choice to
           | sign with another employer than OpenAI?
        
           | jstummbillig wrote:
           | > If it's a person vs a corporation of significant size, then
           | no, it's not safe to assume that people have free choice
           | 
           | We understand this as a market dynamic, surely? More
           | companies are looking for capable AI people than capable
           | AI people exist (as in: on the entire planet). I don't see
           | any magic trick a "corporation of significant size" can
           | pull to make the "free choice" aspect go away. But, of
           | course, individual people can continue to CHOOSE certain
           | corps, because they actually kind of like the outsized
           | benefits that choice brings. Complaining about certain
           | trade-offs afterwards is fairly disingenuous.
           | 
           | > That's also ignoring motivations apart from business ones,
           | like them actually wanting to be at the leading edge of AI
           | research or wanting to work with particular other
           | individuals.
           | 
           | I don't understand what you are saying. Is the wish to work
           | on leading AI research sensible, but offering the opportunity
           | to work on leading AI research not a value proposition? How
           | does that make sense?
        
       | subroutine wrote:
       | This is an interesting update to the article...
       | 
       | > _After publication, an OpenAI spokesperson sent me this
       | statement: "We have never canceled any current or former
       | employee's vested equity nor will we if people do not sign a
       | release or nondisparagement agreement when they exit."_
       | 
       | - Updated May 17, 2024, 11:20pm EDT
        
         | jiggawatts wrote:
         | Neither of those statements negates the key point of the
         | article.
         | 
         | I've noticed that both Sam Altman's personal statements and
         | official statements from OpenAI sound like they've been
         | written by Aes Sedai: not a single untrue word, while
         | simultaneously thoroughly deceptive. [1]
         | 
         | Let's try translating some statements, as if we were listening
         | to an evil person that can only make true statements:
         | 
         | "We have never canceled any current or former employee's vested
         | equity" => "But we can and will if we want to. We just _haven
         | 't yet_."
         | 
         | "...if people do not sign a release or nondisparagement
         | agreement when they exit." => "But we're making everyone sign
         | the agreement."
         | 
         | [1] I've wondered if they use a not-for-public-use version of
         | GPT for this purpose. You know, a model that's not quite as
         | aligned as the chat bots, with more "flexible" morals.
        
           | twobitshifter wrote:
           | Could also be that they have a unique definition of vesting
           | when they say specifically "vested equity"
        
       | olalonde wrote:
       | A bit unexpected coming from a non-profit organisation that
       | supposedly has an altruistic mission. It's almost as if there was
       | actually a profit making agenda... I'm shocked.
        
       | BeFlatXIII wrote:
       | I hope I'm still around when some of these guys reach
       | retirement age, say "fuck it, my family pissed me off", and
       | write tell-all memoirs.
        
       | baggiponte wrote:
       | Not a US rights expert. Isn't the "you can't ever criticize
       | the company or you'll lose the vested equity" clause a
       | violation of the First Amendment?
        
         | strstr wrote:
         | Corporations aren't the government.
        
       | milankragujevic wrote:
       | It seems very off to me that they don't give you the NDA before
       | you sign the employment contract, and instead give it to you at
       | the time of termination when you can simply refuse to sign it.
       | 
       | It seems that standard practice would dictate that you sign an
       | NDA before even signing the employment contract.
        
         | wouldbecouldbe wrote:
         | That's probably because the company closed after hiring them
        
         | rKarpinski wrote:
         | They have multiple NDAs, including ones that are signed
         | before joining the company [1].
         | 
         | [1]https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s
         | ...
        
       | yashap wrote:
       | For a company that is actively pursuing AGI (and probably the #1
       | contender to get there), this type of behaviour is extremely
       | concerning.
       | 
       | There's a very real/significant risk that AGI either literally
       | destroys the human race, or makes life much shittier for most
       | humans by making most of us obsolete. These risks are precisely
       | why OpenAI was founded as a very open company with a charter that
       | would firmly put the needs of humanity over their own
       | pocketbooks, highly focused on the alignment problem. Instead
       | they've closed up, become your standard company looking to make
       | themselves ultra wealthy, and they seem like an extra vicious,
       | "win at any cost" one at that. This plus their AI alignment
       | people leaving in droves (and being muzzled on the way out)
       | should be scary to pretty much everyone.
        
         | schmidt_fifty wrote:
         | > There's a very real/significant risk that AGI either
         | literally destroys the human race
         | 
         | If this were true, intelligent people would have taken over
         | society by now. Those in power will never relinquish it to a
         | computer just as they refuse to relinquish it to more competent
          | people. For the vast majority of people, AI not only poses no
          | risk but will actually help reveal the incompetence of the
         | ruling class.
        
           | pavel_lishin wrote:
           | >> _There's a very real /significant risk that AGI either
           | literally destroys the human race_
           | 
           | > _If this were true, intelligent people would have taken
           | over society by now_
           | 
           | The premise you're replying to - one I don't think I agree
           | with - is that a true AGI would be so much smarter, so much
           | more powerful, that it wouldn't be accurate to describe it as
           | "more smart".
           | 
           | You're probably smarter than a guy who recreationally huffs
            | spraypaint, but you're still within the same _class_ of
            | intelligence. Both of you are so much more advanced than a
            | cat, or a beetle, or a protozoan that it doesn't even make
           | sense to make any sort of comparison.
        
             | logicchains wrote:
             | >You're probably smarter than a guy who recreationally
              | huffs spraypaint, but you're still within the same class of
             | intelligence. Both of you are so much more advanced than a
             | cat, or a beetle, or a protozoan that it doesn't even make
             | sense to make any sort of comparison.
             | 
             | This is pseudoscientific nonsense. We have the very
             | rigorous field of complexity theory to show how much
             | improvement in solving various problems can be gained from
             | further increasing intelligence/compute power, and the vast
             | majority of difficult problems benefit minimally from
             | linear increases in compute. The idea of there being a
             | higher "class" of intelligence is magical thinking, as it
             | implies there could be superlinear increase in the ability
             | to solve NP-complete problems from only a linear increase
             | in computational power, which goes against the entirety of
             | complexity theory.
             | 
             | It's essentially the religious belief that AI has the
             | godlike power to make P=NP even if P != NP.
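              | 
              | A rough worked example, assuming plain brute-force
              | search, makes the scaling point concrete:
              | 
              |     time(n) ~ 2^n  =>  a machine with k times the
              |     compute only reaches size n + log2(k)
              | 
              | so even a 1000x compute increase buys only about 10
              | extra variables, since 2^10 = 1024.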
        
               | esafak wrote:
               | What does P=NP have to do with anything? Humans are
               | incomparably smarter than other animals. There is no
               | intelligence test a healthy human would lose to another
               | animal. What is going to happen when agentic robots
               | ascend to this level relative to us? This is what the GP
               | is talking about.
        
               | breuleux wrote:
               | Succeeding at intelligence tests is not the same thing as
               | succeeding at survival, though. We have to be careful not
               | to ascribe magical powers to intelligence: like anything
               | else, it has benefits and tradeoffs and it is unlikely
               | that it is _intrinsically_ effective. It might only be
                | effective insofar as it is built upon an expansive
               | library of animal capabilities (which took far longer to
               | evolve and may turn out to be harder to reproduce), it is
               | likely bottlenecked by experimental back-and-forth, and
               | it is unclear how well it scales in the first place.
               | Human intelligence may very well be the highest level of
               | intelligence that is cost-effective.
        
               | Delk wrote:
               | Even if lots of real-world problems are intractable in
               | the computational complexity theory sense, that doesn't
               | necessarily mean an upper limit to intelligence or to
               | being able to solve those problems in a practical sense.
                | The complexities are worst-case ones, and in the case of
               | optimization problems, they're for finding the absolutely
               | and provably optimal solution.
               | 
               | In lots of real-world problems you don't necessarily run
               | into worst cases, and it often doesn't matter if the
               | solution is the absolute optimal one.
               | 
               | That's not to discredit computational complexity theory
               | at all. It's interesting and I think proofs about the
               | limits of information processing required for solving
               | computational problems do have philosophical value, and
               | the theory might be relevant to the limits of
               | intelligence. But just because some problems are
               | intractable in terms of provably always finding correct
               | or optimal answers doesn't mean we're near the limits of
               | intelligence or problem-solving ability in that fuzzy
               | area of finding practically useful solutions to lots of
               | real-world cases.
        
             | pixl97 wrote:
             | To every other mammal, reptile, and fish humans are the
             | intelligence explosion. The fate of their species depends
             | on our good will since we have so utterly dominated the
             | planet by means of our intelligence.
             | 
              | Moreover, human intelligence is tied to the weakness of our
             | flesh. Human intelligence is also balanced by greed and
             | ambition. Someone dumber than you can 'win' by stabbing you
             | and your intelligence ceases to exist.
             | 
             | Since we don't have the level of AGI we're discussing here
             | yet, it's hard to say what it will look like in its
             | implementation, but I find it hard to believe it would
             | mimic the human model of its intelligence being tied to one
             | body. A hivemind of embodied agents that feed data back
             | into processing centers to be captured in 'intelligence
             | nodes' that push out updates seems way more likely. More
             | like a hive of super intelligent bees.
        
           | georgeburdell wrote:
           | Look up where the people in power got their college degrees
           | from and then look up the SAT scores of admitted students
           | from those colleges.
        
           | mordymoop wrote:
           | Of course intelligent people have taken over society.
        
         | robertlagrant wrote:
         | > or makes life much shittier for most humans by making most of
         | us obsolete
         | 
         | I'm not sure this is true. If all the things people are doing
         | are done so much more cheaply they're almost free, that would
          | be good for us, as we're the buyers as well as the
         | workers.
         | 
         | However, I also doubt the premise.
        
           | confidantlake wrote:
           | Why would you need buyers if AI can create anything you
           | desire?
        
             | martyfmelb wrote:
             | Bingo.
             | 
             | The whole justification for keeping consumers happy or
             | healthy goes right out the window.
             | 
             | Same for human workers.
             | 
             | All that matters is that your robots and AIs aren't getting
             | smashed by their robots and AIs.
        
             | flashgordon wrote:
              | In an ideal world where GPUs are a commodity, yes. By the
              | way, at least today AI is owned/controlled by the rich and
              | powerful, and that's where the majority of research dollars
              | are coming from. Why would they just relinquish AI so
              | generously?
        
               | brandall10 wrote:
                | With an ever-expanding AI, everything should be quickly
                | commoditized, including reductions in the energy needed
                | to run AI and energy itself (i.e. viable commercial
                | fusion or otherwise).
        
               | flashgordon wrote:
               | That's the thing I am struggling with. I agree things
                | will exponentially improve with AI. What I am not seeing
               | is who will actually capture the value. Or rather how
               | will those other than rich and powerful get to partake in
               | this value capture. Take viable commercial fusion for
               | example. Best case it ends up looking like another PG&E.
               | Worst case it is owned by yet another Musk like
               | gatekeeper. How do you see this being truly democratized
               | and accessible for the masses?
        
             | pixl97 wrote:
              | Where are you getting energy and land from for these AIs
             | to consume and turn into goods?
             | 
              | Moreover, if you build such a magically powerful AI as
              | you've described, the number one thing some rich,
              | controlling asshole with more AI than you would do is
              | create an army and take what they want, because AI does
              | nothing to solve human greed.
        
           | justinclift wrote:
           | > If all the things people are doing are done so much more
           | cheaply they're almost free, that would be good for us ...
           | 
            | Doesn't this tend to become "they're almost free _to
            | produce_", with the actual pricing for end consumers not
            | becoming cheaper? The sellers just expand their margins
            | instead.
        
             | marcusverus wrote:
             | I'm sure businesses will capture some of the value, but is
             | there any reason to assume they'll capture all or even most
             | of it?
             | 
              | Over the last ~50 years, worker productivity is up
             | ~250%[0], profits (within the S&P 500) are up ~100%[1] and
             | real personal (not household) income is up 150%[2].
             | 
             | It should go without saying that a large part of the rise
             | in profits is attributable to the rise of tech. It
             | shouldn't surprise anyone that margins are higher on
             | digital widgets than physical ones!
             | 
             | Regardless, expanding margins is only attractive up to a
             | certain point. The higher your margins, the more attractive
             | your market becomes to would-be competitors.
             | 
              | [0] https://fred.stlouisfed.org/series/OPHNFB
              | [1] https://dqydj.com/sp-500-profit-margin/
              | [2] https://fred.stlouisfed.org/series/MEPAINUSA672N
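              | 
              | Annualized (a rough back-of-envelope, assuming steady
              | compounding over those 50 years):
              | 
              |     3.5^(1/50) ~ 1.025  (+250% => ~2.5%/year)
              |     2.0^(1/50) ~ 1.014  (+100% => ~1.4%/year)
              |     2.5^(1/50) ~ 1.019  (+150% => ~1.9%/year)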
        
               | lotsofpulp wrote:
               | > Regardless, expanding margins is only attractive up to
               | a certain point. The higher your margins, the more
               | attractive your market becomes to would-be competitors.
               | 
               | This does not make sense to me. While a higher profit
               | margin is a signal to others that they can earn money by
               | selling equivalent goods and services at lower prices, it
               | is not inevitable that they will be able to. And even if
               | they are, it behooves a seller to take advantage of the
               | higher margins while they can.
               | 
               | Earning less money now in the hopes of competitors being
               | dissuaded from entering the market seems like a poor
               | strategy.
        
               | lifeisstillgood wrote:
               | Wait what? I was just listening to the former chief
                | economist of the Bank of England going on about how terrible
               | productivity (in the UK) is.
               | 
               | So who is right?
        
               | michaelt wrote:
               | Google UK productivity growth and you'll find a graph
               | showing:
               | 
               | UK productivity growth, 1990-2007: 2% per year
               | 
               | UK productivity growth, 2010-2019: 0.5% per year
               | 
               | So they're both right. US 50 year productivity growth
               | looks great, UK 10 year productivity growth looks pretty
               | awful.
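                | 
                | Compounded, that gap is stark (rough arithmetic):
                | 
                |     1.02^17 ~ 1.40  (~40% total, 1990-2007)
                |     1.005^9 ~ 1.05  (~5% total, 2010-2019)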
        
               | justinclift wrote:
               | > The higher your margins, the more attractive your
               | market becomes to would-be competitors.
               | 
               | Only in very simplistic theory. :(
               | 
               | In practical terms, businesses with high margins seem
               | able to afford government protection (aka "buy some
               | politicians").
               | 
               | So they lock out competition, and with their market
               | captured, price gouging (or close to it) is the order of
               | the day.
               | 
                | Not really sure why anyone thinks the playbook would be any
               | different just because "AI" is used on the production
               | side. It's still the same people making the calls, just
               | with extra tools available to them.
        
           | pants2 wrote:
           | Up to the point of AGI, most productivity increases have
           | resulted in less physical / menial labor, and more white
           | collar work. If AGI is smarter than most humans, the pendulum
           | will swing the other way, and more humans will have to work
           | physical / menial jobs.
        
           | thayne wrote:
           | We won't be buyers anymore if we aren't getting paid to work.
           | 
            | Perhaps some kind of guaranteed minimum income would be
           | implemented, but we would probably see a shrinkage or
           | complete destruction of the middle class, and massive
           | increases in wealth inequality.
        
         | mc32 wrote:
          | Can higher-level former employees with more at stake pool
          | together comp for lower-level ones with much less at stake so
          | they can speak to it? Obviously they may not be privy to some
          | things, but there's likely lots to go around.
        
         | root_axis wrote:
         | More than these egregious gag contracts, OpenAI benefits from
         | the image that they are on the cusp of world-destroying science
          | fiction. This meme needs to die: if AGI is possible, it won't be
          | achieved any time in the foreseeable future, and certainly it
         | will not emerge from quadratic time brute force on a fraction
         | of text and images scraped from the internet.
        
           | MrScruff wrote:
           | Clearly we don't know when/if AGI would happen, but the
            | expectation of many people working in the field is that it will
            | arrive in what qualifies as the 'near future'. It probably won't
           | result from just scaling LLMs, but then that's why there's a
           | lot of researchers trying to find the next significant
           | advancement, in parallel with others trying to commercially
           | exploit LLMs.
        
             | troupo wrote:
              | > the expectation of many people working in the field is
              | that it will arrive in what qualifies as the 'near future'
             | 
             | It was the expectation of many people in the field in the
             | 1980s, too
        
             | timr wrote:
             | The same way that the expectation of many people working
             | within the self-driving field in 2016 was that level 5
             | autonomy was right around the corner.
             | 
             | Take this stuff with a HUGE grain of salt. A lot of goofy
             | hyperbolic people work in AI (any startup, really).
        
               | schmidtleonard wrote:
               | Sure, but blanket pessimism isn't very insightful either.
               | I'll use the same example you did: self-driving. The
               | public (or "median nerd") consensus has shifted from
               | "right around the corner" (when it struggled to lane-
               | follow if the paint wasn't sharp) to "it's a scam and
               | will never work," even as it has taken off with the other
               | types of AI and started hopping hurdles every month that
               | naysayers said would take decades. Negotiating right-of-
               | way, inferring intent, handling obstructed and ad-hoc
               | roadways... the nasty intractables turned out to not be
               | intractable, but sentiment has _not_ caught up.
               | 
               | For one where the pessimist consensus has already folded,
               | see: coherent image/movie generation and multi-modality.
               | There were loads of pessimists calling people idiots for
               | believing in the possibility. Then it happened. Turns out
               | an image really is worth 16x16 words.
               | 
               | Pessimism isn't insight. There is no substitute for the
               | hard work of "try and see."
        
               | huevosabio wrote:
               | While I agree with your point, I take self driving rides
               | on a weekly basis and you see them all over SF nowadays.
               | 
                | We overestimate short-term progress, but underestimate
                | the medium- and long-term kind.
        
               | timr wrote:
               | I don't think we disagree, but I will say that "a handful
               | of people in SF and AZ taking rides in cars that are
               | remotely monitored 24/7" is not the drivers-are-obsolete-
               | now, near-term future being promised in 2016. Remember
               | the panic because long-haul truckers were going to be
               | unemployed Real Soon Now? I do.
               | 
               | Back then, I said that the future of self-driving is
               | likely to be the growth in capability of "driver
               | assistance" features to an asymptotic point that we will
               | re-define as "level 5" in the distant future (or perhaps:
               | the "levels" will be memory-holed altogether, only to
               | reappear in retrospective, "look how goofy we were"
               | articles, like the ones that pop up now about nuclear
               | airplanes and whatnot). I still think that is true.
        
               | Kwpolska wrote:
               | Self-driving taxis are available in only a handful of
               | cities around the world. This is far from progress. And
               | how often are those taxis secretly controlled by an
               | Indian call center?
        
               | thayne wrote:
               | The same thing happened with nuclear fusion. People
               | working on it have been saying sustainable fusion power
               | is right around the corner for decades, and we still
               | don't have it.
               | 
               | And it _could_ be just one clever breakthrough away, and
               | that could happen tomorrow, or it could be centuries
               | away. There's no way to know.
        
             | zzzeek wrote:
             | >but the expectations of many people working in the field
             | is it will arrive in what qualifies as 'near future'.
             | 
              | they think this because it serves their interest in
             | attracting an enormous amount of attention and money to an
             | industry that they seek to make millions of dollars
             | personally from.
             | 
              | My money is firmly on environmental/climate collapse wiping
              | out most of humanity in the next 50-100 years, hundreds of
              | years before anything like an AGI possibly could.
        
           | dclowd9901 wrote:
           | Ah yes, the "our brains are somehow inherently special"
            | coalition. Hand-waving the capabilities of LLMs as dumb math
           | while not having a single clue about the math that underlies
           | our own brains' functionality.
           | 
           | I don't know if you're conflating capability with
           | consciousness but frankly it doesn't matter if the thing
           | knows it's alive if it still makes everyone obsolete.
        
             | root_axis wrote:
             | This isn't a question of understanding the brain. We don't
             | even have a theory of AGI, the idea that LLMs are somehow
             | anywhere near even approaching an existential threat to
             | humanity is science fiction.
             | 
             | LLMs are a super impressive advancement, like calculators
             | for text, but if you want to force the discussion into a
             | grandiose context then they're easy to dismiss. Sure, their
             | outputs appear remarkably coherent through sheer brute
             | force, but at the end of the day their fundamental nature
             | makes them unsuitable for any task where precision is
             | necessary. Even as just a chatbot, the facade breaks down
          | with a bit of poking and prodding or just unlucky RNG. The only
          | threat LLMs present is the risk that people will introduce
          | their outputs into safety-critical systems.
        
       | strstr wrote:
       | This really kills my desire to trust startups and YC. Hopefully
       | paulg makes some kind of statement or commitment on non-
       | disparagement and the like.
        
       | sidewndr46 wrote:
        | Isn't such a contract completely unenforceable in the US? I
        | can't sign a contract with a private party that says I won't
        | consult a lawyer for legal advice, for example.
        
       | cashsterling wrote:
       | In my experience, and that of others I know, agreements of this
       | kind are generally used to hide/cover-up all kinds of
       | malfeasance. I think that agreements of this kind are highly
       | unethical and should be illegal.
       | 
        | Many years ago I signed an NDA/non-disparagement agreement as part
       | of a severance package when I was fired from a startup for
       | political reasons. I didn't want to sign it... but my family
       | needed the money and I swallowed my pride. There was a lot of
        | unethical stuff going on within the company in terms of fiduciary
        | responsibility to investors and the BoD. The BoD eventually figured
       | out what was going on and "cleaned house".
       | 
       | With OpenAI, I am concerned this is turning into huge power/money
       | grab with little care for humanity... and "power tends to corrupt
       | and absolute power corrupts absolutely".
        
         | punnerud wrote:
          | In the EU all of these are mostly illegal and void, or strictly
          | limited. You have to pay a good salary for the whole duration
          | (up to two years), and let the employee know months before they
          | leave - almost right after they are fired.
         | 
         | Sound like a better solution?
        
           | punnerud wrote:
            | I see that this comment jumps up and down between 5 and 10
            | points. Guess there are a lot of upvotes and downvotes.
        
             | lnsru wrote:
              | I will not vote. But give me US salaries in Germany, please.
              | All these EUR100k @ 35-hour-workweek offers are boring.
              | That's almost the top salary for senior-level developers at
              | big companies. Mostly no stock at all. I would probably sign
              | every shady document for one million EUR in stock
              | compensation.
        
               | objektif wrote:
                | Just come to the US, pls. It is the whole package you sign
                | up for, not just the salaries: shitty food, healthcare, etc.
        
         | staunton wrote:
         | > this is turning into huge power/money grab
         | 
         | The power grab happened a while ago (the shenanigans concerning
         | the board) and is now complete. Care for humanity was just
         | marketing or a cute thought at best.
         | 
          | Maybe humanity will survive long enough that a company
          | "caring about humanity" becomes possible. I'm not saying it's
          | not worth trying or aspiring to such ideals, but everyone
          | should be extremely surprised if any organization managed to
          | resist such amounts of money to maintain any goal or ideal
          | whatsoever...
        
           | lazide wrote:
           | Well, one problem is what does 'caring for humanity' even
           | mean, concretely?
           | 
           | One could argue it would mean pampering it.
           | 
            | One could also argue it could be a Skynet analog doing the
           | equivalent of a God Emperor like Golden Path to ensure
           | humanity is never going to be dumb enough to allow an AGI the
           | power to do _that_ again.
           | 
           | Assuming humanity survives the second one, it has a lot
           | higher chance of _actually_ benefiting humanity long term
           | too.
        
             | staunton wrote:
             | At the current level on the way towards "caring about
             | humanity", I really don't think it's a complicated
             | philosophical question. Once a big company actively chooses
             | to forego some profits based on _any_ altruistic
             | consideration, we can start debating what it means
             | "concretely".
        
           | wwweston wrote:
           | The system already has been a superorganism/AI for a long
           | time:
           | 
           | http://omniorthogonal.blogspot.com/2013/02/hostile-ai-
           | youre-...
        
         | dclowd9901 wrote:
         | In all likelihood, they are illegal, just that no one has
         | challenged them yet. I can't imagine a sane court backing up
         | the idea that a person can be forbidden to talk about something
         | (not national security related) for the rest of their lives.
        
         | ornornor wrote:
          | That could very well be the case; OpenAI made quite a few
          | opaque decisions/changes not too long ago.
        
       | ddalex wrote:
       | I can't speak. If I speak I will be in trouble.
        
       | itronitron wrote:
       | what part of 'Open' do I not understand?
        
       | loceng wrote:
        | Non-disparagement agreements need to be made illegal.
       | 
       | If someone shares something that's a lie and defamatory, then
       | they could still be sued of course.
       | 
        | The Ben Shapiro/Daily Wire vs. Candace Owens dispute is another
        | scenario where the truth and open conversation would benefit all
        | of society - OpenAI and the Daily Wire arguably touch on topics
        | of pinnacle importance; instead the discussions are suppressed.
        
       | shon wrote:
        | The article mentions it briefly, but Jan Leike is talking.
       | Reference:
       | https://x.com/janleike/status/1791498174659715494?s=46&t=pO4...
       | 
       | He clearly states why he left. He believes that OpenAI leadership
       | is prioritizing shiny product releases over safety and that this
       | is a mistake.
       | 
        | Even with the best intentions, it's easy for a strong CEO like
        | Altman to lose sight of more subtly important things like safety
        | and optimize for growth and winning, eventually at all costs.
       | Winning is a super-addictive feedback loop.
        
       | pdonis wrote:
       | Everything I see about OpenAI makes me more and more convinced
       | that the people running it are the _last_ people anyone should
       | want to be stewards of AI technology.
        
       | ur-whale wrote:
        | If at this point it isn't very clear to OpenAI employees that
        | they're working for the dark side and that Altman is one of the
        | worst manipulative psychopaths the world has ever seen, I doubt
        | anything will get them to realize what is happening to them.
        
       | __lbracket__ wrote:
        | They don't want to interrupt the good OpenAI is doing in the
        | world, don't ya know.
        
       | tim333 wrote:
       | Sama update on X, says sorry:
       | 
       | >in regards to recent stuff about how openai handles equity:
       | 
       | >we have never clawed back anyone's vested equity, nor will we do
       | that if people do not sign a separation agreement (or don't agree
       | to a non-disparagement agreement). vested equity is vested
       | equity, full stop.
       | 
       | >there was a provision about potential equity cancellation in our
       | previous exit docs; although we never clawed anything back, it
       | should never have been something we had in any documents or
       | communication. this is on me and one of the few times i've been
       | genuinely embarrassed running openai; i did not know this was
       | happening and i should have.
       | 
       | >the team was already in the process of fixing the standard exit
       | paperwork over the past month or so. if any former employee who
       | signed one of those old agreements is worried about it, they can
       | contact me and we'll fix that too. very sorry about this.
       | https://x.com/sama/status/1791936857594581428
        
         | lupire wrote:
         | Utterly spineless. Do something slimy and act surprised when
         | you get got. Rinse and repeat.
        
           | airstrike wrote:
           | I don't think that's an accurate read. He did say
           | 
           |  _> if any former employee who signed one of those old
           | agreements is worried about it, they can contact me and we'll
           | fix that too_
        
           | insane_dreamer wrote:
           | <1% chance that Sam did not know what was in those exit docs
        
       | imranq wrote:
        | This seems like fake news. It would be extremely dumb to have
        | such a policy, since it would eventually be leaked and generate
        | negative press.
        
       ___________________________________________________________________
       (page generated 2024-05-18 23:03 UTC)