[HN Gopher] Reflections on OpenAI
       ___________________________________________________________________
        
       Reflections on OpenAI
        
       Author : calvinfo
       Score  : 280 points
       Date   : 2025-07-15 16:49 UTC (6 hours ago)
        
 (HTM) web link (calv.info)
 (TXT) w3m dump (calv.info)
        
       | dagorenouf wrote:
        | Maybe I'm paranoid, but this sounds too good to be true. Almost
        | like something planted to help with recruiting after Meta
        | poached their best guys.
        
         | Reubend wrote:
         | The fact that they gave little shout outs at the end makes me
         | think they wanted to avoid burning bridges by criticizing the
         | company.
        
           | istjohn wrote:
           | They didn't mind burning MS
        
           | bink wrote:
           | They almost certainly still own shares/options in the
           | company.
        
         | lucianbr wrote:
         | > It's hard to imagine building anything as impactful as AGI,
         | and LLMs are easily the technological innovation of the decade.
         | 
         | I really can't see a person with at least minimal self-
         | awareness talking their own work up this much. Give me a break
         | dude. Plus, you haven't built AGI yet.
         | 
         | Can't believe there's so little critique of this post here.
         | It's incredibly self-serving.
        
         | torginus wrote:
          | It sounds to me like, in contrast to the grandiose claims
          | OpenAI makes about its own products, it views AI as 'regular
          | technology' and pragmatically tries to build viable products
          | using it.
        
       | bhl wrote:
       | > The Codex sprint was probably the hardest I've worked in nearly
       | a decade. Most nights were up until 11 or midnight. Waking up to
       | a newborn at 5:30 every morning. Heading to the office again at
       | 7a. Working most weekends.
       | 
       | There's so much compression / time-dilation in the industry:
       | large projects are pushed out and released in weeks; careers are
       | made in months.
       | 
       | Worried about how sustainable this is for its people, given the
       | risk of burnout.
        
         | tptacek wrote:
         | It's not sustainable, at all, but if it's happening just a
         | couple times throughout your career, it's doable; I know people
         | who went through that process, at that company, and came out of
         | it energized.
        
         | babelfish wrote:
         | This is what being a wartime company looks like
        
         | lvl155 wrote:
          | I am not saying that's easy work, but most motivated people do
          | this. And if you're conscious of it, that probably means you
          | viewed it more as a job than a calling.
        
         | beebmam wrote:
         | Those that love the work they do don't burn out, because every
         | moment working on their projects tends to be joyful. I
         | personally hate working with people who hate the work they do,
         | and I look forward to them being burned out
        
           | chrisfosterelli wrote:
           | "You don't really love what you do unless you're willing to
           | do it 17 hours a day every day" is an interesting take.
           | 
           | You can love what you do but if you do more of it than is
           | sustainable because of external pressures then you will burn
           | out. Enjoying your work is not a vaccine against burnout. I'd
           | actually argue that people who love what they do are more
           | likely to have trouble finding that balance. The person who
           | hates what they do usually can't be motivated to do more than
           | the minimum required of them.
        
             | threetonesun wrote:
              | Weird how we went from, like, The 4-Hour Workweek and all
              | those charts about how people historically famous in their
              | field spent only a few hours a day on what they were most
              | famous for, to "work 12+ hours a day or you're useless".
             | 
             | Also this is one of a few examples I've read lately of "oh
             | look at all this hard work I did", ignoring that they had a
             | newborn and someone else actually did all of the hard work.
        
             | alwa wrote:
             | I read gp's formulation differently: "if you're working 17
             | hours a day, you'd better stop soon unless you're doing it
             | for the love of doing it." In that sense it seems like you
             | and gp might agree that it's bad for you and for your
             | coworkers if you're working like that because of external
             | pressures.
             | 
             | I don't delight in anybody's suffering or burnout. But I do
             | feel relief when somebody is suffering from the pace or
             | intensity, and alleviates their suffering by striking a
             | more sustainable balance for them.
             | 
             | I feel like even people energized by efforts like that pay
             | the piper: after such a period I for one "lay fallow"--
             | tending to extended family and community, doing phone-it-in
             | "day job" stuff, being in nature--for almost as long as the
             | creative binge itself lasted.
        
               | chrisfosterelli wrote:
               | I would indeed agree with things as you've stated. I
               | interpreted "the work they do" to mean "their craft" but
               | if it was intended as "their specific working conditions"
               | I can see how it'd read differently.
               | 
               | I think there are a lot of people that love their craft
               | but are in specific working conditions that lead to
               | burnout, and all I was saying is that I don't think it
               | means they love their craft any less.
        
           | procinct wrote:
           | Sure, but this schedule is like, maybe 5 hours of sleep per
           | night. Other than an extreme minority of people, there's no
           | way you can be operating on that for long and doing your best
           | work. A good 8 hours per night will make most people a better
           | engineer and a better person to be around.
        
         | sashank_1509 wrote:
         | My hot take is I don't think burn out has much to do with raw
         | hours spent working. I feel it has a lot more to do with sense
         | of momentum and autonomy. You can work extremely hard 100 hour
         | weeks six months in a row, in the right team and still feel
         | highly energized at the end of it. But if it feels like wading
         | through a swamp, you will burn out very quickly, even if it's
          | just 50 hours a week. I also find ownership has a lot to do
          | with the sense of burnout.
        
           | matwood wrote:
           | And if the work you're doing feels meaningful and you're
           | properly compensated. Ask people to work really hard to fill
           | out their 360 reviews and they should rightly laugh at you.
        
           | parpfish wrote:
            | i hope that's not a hot take because it's 100% correct.
           | 
           | people conflate the terms "burnout" and "overwork" because
           | they seem semantically similar, but they are very different.
           | 
           | you can fix overwork with a vacation. burnout is a deeper
           | existential wound.
           | 
           | my worst bout of burnout actually came in a cushy job where i
           | was consistently underworked but felt no autonomy or sense of
           | purpose for why we were doing the things we were doing.
        
           | apwell23 wrote:
           | > You can work extremely hard 100 hour weeks six months in a
           | row, in the right team and still feel highly energized at the
           | end of it.
           | 
            | Something about youth being wasted on the young.
        
         | rvz wrote:
         | > Worried about how sustainable this is for its people, given
         | the risk of burnout.
         | 
          | Well, given the amount of money OpenAI pays its engineers,
          | this is what comes with it. It tells you that this is not a
          | daycare, nor for coasters or the faint of heart, especially
          | at a startup at the epicenter of AI competition.
          | 
          | There is now a massive queue of desperate 'software
          | engineers' ready to kill for a job at OpenAI, who will not
          | tolerate the word "burnout" and might even work 24-hour days
          | to keep the job away from others.
          | 
          | For those who love what they do, the word "burnout" doesn't
          | exist.
        
         | alwa wrote:
         | If anyone tried to demand that I work that way, I'd say
         | absolutely not.
         | 
         | But when I sink my teeth into something interesting and
         | important (to me) for a few weeks' or months' nonstop sprint,
         | I'd say no to anyone trying to rein me in, too!
         | 
          | Speaking only for myself, I can recognize those kinds of
          | projects as they first start to make my mind twitch. I know
          | ahead of time that I'll have no gas left in the tank by the
          | end, and I plan accordingly.
         | 
         | Luckily I've found a community who relate to the world and each
         | other that way too. Often those projects aren't materially
         | rewarding, but the few that are (combined with very modest
         | material needs) sustain the others.
        
           | bradyriddle wrote:
           | I'd be curious to know about this community. Is this a formal
           | group or just the people that you've collected throughout
           | your life?
        
             | alwa wrote:
             | The latter. I mean, I feel like a disproportionate number
             | of folks who hang around here have that kind of
             | disposition.
             | 
             | That just turns out to be the kind of person who likes to
             | be around me, and I around them. It's something I wish I
             | had been more deliberate about cultivating earlier in my
             | life, but not the sort of thing I regret.
             | 
             | In my case that's a lot of artists/writers/hackers, a fair
             | number of clergy, and people working in service to others.
             | People quietly doing cool stuff in boring or difficult
             | places... people whose all-out sprints result in ambiguity
             | or failure at least as often as they do success. Very few
             | rich people, very few who seek recognition.
             | 
             | The flip side is that neither I nor my social circles are
             | all that good at consistency--but we all kind of expect and
             | tolerate that about each other. And there's lots of
             | "normal" stuff I'm not part of, which I probably could have
             | been if I had tried. I don't know what that means to the
             | business-minded people around here, but I imagine it
             | includes things like corporate and nonprofit boards,
             | attending sports events in stadia, whatever golf people do,
             | retail politics, Society Clubs For Respectable People,
             | "Summering," owning rich people stuff like a house or a car
             | --which is fine with me!
             | 
             | More than enough is too much :)
        
           | ishita159 wrote:
           | I think senior folks at OpenAI realized this is not
           | sustainable and hence took the "wellness week".
        
         | Rebelgecko wrote:
         | How did they have any time left to be a parent?
        
           | ambicapter wrote:
           | > I returned early from my paternity leave to help
           | participate in the Codex launch.
           | 
           | Obvious priorities there.
        
             | harmonic18374 wrote:
             | That part made me do a double take. I hope his child never
             | learns they were being put second.
        
         | suncemoje wrote:
         | I'm sure they'll look back at it and smile, no?
        
         | datadrivenangel wrote:
          | The author left after 14 months at OpenAI, which looks like a
          | burnout timeline.
        
         | ojr wrote:
          | For the amount of money they're paying, that is relatively
          | easy; normal people are paid way less for harder jobs, for
          | example working in an Amazon warehouse or doing door-to-door
          | sales.
        
         | 6gvONxR4sf7o wrote:
         | I couldn't imagine asking my partner to pick up that kind of
         | childcare slack. Props to OP's wife for doing so, and I'm glad
         | she got the callout at the end, but god damn.
        
         | kaashif wrote:
          | Working a job like that would literally ruin my life. There's
          | no way I could have time to be a good husband and father under
          | those conditions; some things should not be sacrificed.
        
         | laidoffamazon wrote:
         | I don't really have an opinion on working that much, but
         | working that much _and_ having to go into the office to spend
         | those long hours sounds like torture.
        
       | vouaobrasil wrote:
        | > The thing that I appreciate most is that the company "walks
        | the walk" in terms of distributing the benefits of AI.
       | Cutting edge models aren't reserved for some enterprise-grade
       | tier with an annual agreement. Anybody in the world can jump onto
       | ChatGPT and get an answer, even if they aren't logged in.
       | 
       | I would argue that there are very few benefits of AI, if any at
       | all. What it actually does is create a prisoner's dilemma
       | situation where some use it to become more efficient only because
       | it makes them faster and then others do the same to keep up. But
       | I think everyone would be FAR better off without AI.
       | 
        | Keeping AI free for everyone is akin to keeping an addictive
        | drug free for everyone so that it can be sold in larger
        | quantities later.
       | 
       | One can argue that some technology is beneficial. A mosquito net
       | made of plastic immediately improves one's comfort if out in the
       | woods. But AI doesn't really offer any immediate TRUE improvement
       | of life, only a bit more convenience in a world already saturated
        | in it. It's past the point of diminishing returns for true life
        | improvement, and I think everyone deep down inside knows that,
        | but is seduced by the nearly-magical quality of it because we are
        | instinctually driven to seek out advantages and new information.
        
         | ookblah wrote:
          | i don't really understand this thought process. all technology
          | has its advantages and drawbacks, and we are currently going
          | through the hype and growing pains process.
          | 
          | you could just as well argue the internet, phones, tv, cars,
          | all adhere to the exact same prisoner's dilemma situation you
          | talk about. you could just as well use AI to rubber duck or
          | ease your mental load rather than treat it like some rat-race
          | to efficiency.
        
           | vouaobrasil wrote:
            | True, but it is meaningful to ask whether the "quantity" of
            | advantages minus drawbacks decreases over time, which I
            | believe it does.
            | 
            | And we should indeed apply the logic to other inventions:
            | some are more worth using than others, whereas in today's
            | society, we just use all of them due to the mechanisms of the
            | prisoner's dilemma. The Amish, on the other hand, deliberate
            | on whether to use certain technologies, which is a far better
            | approach.
        
         | christiangenco wrote:
         | > I would argue that there are very few benefits of AI, if any
         | at all. What it actually does is create a prisoner's dilemma
         | situation where some use it to become more efficient only
         | because it makes them faster and then others do the same to
         | keep up. But I think everyone would be FAR better off without
         | AI.
         | 
         | Personally, my life has significantly improved in meaningful
         | ways with AI. Apart from the obvious work benefits (I'm
         | shipping code ~10x faster than pre-AI), LLMs act as my personal
         | nutritionist, trainer, therapist, research assistant, executive
         | assistant (triaging email, doing SEO-related work, researching
         | purchases, etc.), and a much better/faster way to search for
         | and synthesize information than my old method of using Google.
         | 
         | The benefits I've gotten are much more than conveniences and
         | the only argument I can find that anyone else is worse off
         | because of these benefits is that I don't hire junior
         | developers anymore (at max I was working with 3 for a
         | contracting job). At the same time, though, all of them are
         | _also_ using LLMs in similar ways for similar benefits (and
          | working on their own projects) so I'd argue they're net much
         | better off.
        
           | vouaobrasil wrote:
           | A few programmers being better off does not make an entire
           | society better off. In fact, I'd argue that you shipping code
           | 10x faster just means in the long run that consumerism is
           | being accelerated at a similar rate because that is what most
           | code is used for, eventually.
        
             | simonw wrote:
             | I spent much of my career working on open source software
             | that helped other engineers ship code 10x faster. Should I
             | feel bad about the impact my work there had on accelerating
             | consumerism?
        
               | vouaobrasil wrote:
               | I don't know if you should feel bad or not, but even I
               | know that I have a role to play in consumerism that I
               | wish I didn't.
               | 
               | That doesn't necessitate feeling bad because the reaction
               | to feel good or bad about something is a side effect of
               | the sort of religious "good and evil" mentality that
               | probably came about due to Christianity or something. But
               | *regardless*, one should at least understand that because
               | our world has reached a sufficient critical mass of
               | complexity, even the things we do that we think are
               | benign or helpful can have negative side effects.
               | 
               | I never claim that we should feel bad about that, but we
               | should understand it and attempt to mitigate it
               | nonetheless. And, where no mitigation is possible, we
               | should also advocate for a better societal structure that
               | will eventually, in years or decades, result in fewer
               | deleterious side effects.
        
               | simonw wrote:
               | The TV show The Good Place actually dug into this quite a
               | bit. One of the key themes explored in the show was the
               | idea that there is no ethical consumption under
               | capitalism, because eventually the things you consume can
               | be tied back to some grossly unethical situation
               | somewhere in the world.
        
               | jfyi wrote:
                | That theme was primarily explored through the idea that
                | it's impossible to live a truly ethical life in the
                | modern world due to unknowable externalities.
               | 
               | I don't think the takeaway was meant to really be about
               | capitalism but more generally the complexity of the
               | system. That's just me though.
        
         | simonw wrote:
         | "I would argue that there are very few benefits of AI, if any
         | at all."
         | 
         | OK, if you're going to say things like this I'm going to insist
         | you clarify which subset of "AI" you mean.
         | 
         | Presumably you're OK with the last few decades of machine
         | learning algorithms for things like spam detection, search
         | relevance etc.
         | 
         | I'll assume your problem is with the last few years of
         | "generative AI" - a loose term for models that output text and
         | images instead of purely being used for classification.
         | 
         | Are predictive text keyboards on a phone OK (tiny LLMs)? How
         | about translation engines like Google Translate?
         | 
          | Vision LLMs to help with wildlife camera trap analysis? How
          | about helping people with visual impairments navigate the
          | world?
         | 
         | I suspect your problem isn't with "AI", it's with the way
         | specific AI systems are being built and applied. I think we can
         | have much more constructive conversations if we move beyond
         | blanket labeling "AI" as the problem.
        
           | vouaobrasil wrote:
            | 1. Here is the subset: any learning-based algorithm, trained
            | on a large data set, that modifies or generates content.
           | 
           | 2. I would argue that translation engines have their
           | positives and negatives, but a lot of them are negative,
           | because they lead to translators losing their jobs, and a
           | loss in general for the magical qualities of language
           | learning.
           | 
            | 3. Predictive text: I think people should not be presented
            | with possible next words, and should think of them on their
            | own, because that means they will be more thoughtful in their
            | writing and less automatic. Also, with a higher barrier to
            | writing something, they will probably write less, and what
            | they do write will be of greater significance.
           | 
           | 4. I am against all LLMs, including wildlife camera trap
           | analysis. There is an overabundance of hiding behind research
           | when we really already know the problem fairly well. It's a
           | fringe piece of conservation research anyway.
           | 
           | 5. Visual impairments: one can always appeal to helping the
           | disabled and impaired, but I think the tradeoff is not worth
           | the technological enslavement.
           | 
           | 6. My problem is categorically with AI, not with how it is
           | applied, PRECISELY BECAUSE AI cannot be applied in an ethical
           | way, since human beings en masse will inevitably have a
           | sufficient number of bad actors to make the net effect always
           | negative. It's human nature.
        
             | simonw wrote:
             | Thanks for this, it's a good answer. I think "generative
             | AI" is the closest term we have to that subset you describe
             | there.
        
               | vouaobrasil wrote:
                | Just to add one final point: I included modification as
                | well as generation of content, since I also want to
                | exclude technologies that simply improve upon existing
                | content in some way that is very close to generative but
                | may not be considered so. For example: audio improvement
                | like echo removal and ML noise removal, which I have
                | already shown to interpolate.
                | 
                | I think AI classification is probably okay, but of
                | course with that, as with all technologies, we should be
                | cautious of how we use it, as it can also be used in
                | facial recognition, which in turn can be used to create
                | a stronger police state.
        
             | pj_mukh wrote:
             | I wish your parent comment didn't get downvoted, because
             | this is an important conversation point.
             | 
             | "PRECISELY BECAUSE AI cannot be applied in an ethical way,
             | since human beings en masse will inevitably have a
             | sufficient number of bad actors"
             | 
              | I think this is vibes based on bad headlines and no actual
              | numbers (and tbf, founders/CEOs talking outta their a**).
              | In my real-life experience the advantages of specifically
              | generative AI far outweigh the disadvantages, by like a
              | really large margin. I say this as someone academically
              | trained on well-modeled dynamical systems (the opposite of
              | machine learning). My team just lost. Badly.
             | 
             | Case-in-point: I work with language localization teams that
             | have fully adopted LLM based translation services (our
             | DeepL.com bills are huge), but we've only hired more
             | translators and are processing more translations faster.
              | It's just... not working out like we were told in the
              | headlines. Doomsday radiologist predictions [1], same
              | thing.
             | 
             | [1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-
             | radiol...
        
               | vouaobrasil wrote:
               | > I think this (esp the sufficient number of bad actors)
               | is vibes based on bad headlines and no actual numbers. In
               | my real-life experience the advantages of specifically
                | generative AI far outweigh the disadvantages, by like a
               | really large margin.
               | 
               | We define bad actors in different ways. I also include
               | people like tech workers, CEOs who program systems that
               | take away large numbers of jobs. I already know people
                | whose jobs were eroded by AI.
               | 
               | In the real world, lots of people hate AI generated
               | content. The advantages you speak of are only to those
               | who are technically minded enough to gain greater
               | material advantages from it, and we don't need the rich
               | getting richer. The world doesn't need a bunch of techies
               | getting richer from AI at the expense of people like
               | translators, graphic designers, etc, losing their jobs.
               | 
               | And while you may have hired more translators, that is
               | only temporary. Other places have fired them, and you
               | will too once the machine becomes good enough. There will
               | be a small bump of positive effects in the short term but
               | the long term will be primarily bad, and it already is
               | for many.
        
               | pj_mukh wrote:
                | I think we'll have to wait and see here, because all the
                | layoffs can be easily attributed to leadership making
                | crappy over-hiring decisions during COVID and now not
                | being able to admit that, instead giving hand-wavy "I'm
                | firing people because AI" answers to drive different
                | headline narratives (see: founders/CEOs talking outta
                | their a**).
               | 
                | It may also be the narrative fed to actual employees:
                | saying "You're losing your job because AI" is an easy way
                | to direct anger away from your bad business decisions. If
               | a business is shrinking, it's shrinking, AI was
               | inconsequential. If a business is growing AI can only
               | help. Whether it's growing or shrinking doesn't depend on
               | AI, it depends on the market and leadership decision-
               | making.
               | 
                | You and I both know none of this generative AI is good
                | enough unsupervised (realistically, it needs deep human
                | edits). But they're still massive productivity boosts
               | which have always been huge economic boosts to the
               | middle-class.
               | 
               | Do I wish this tech could _also_ be applied to real
               | middle-class shortages (housing, supply-chain etc.),
               | sure. And I think it will come.
        
         | 8note wrote:
          | hiding from mosquitoes under your net is a negative. the point
          | of going out to the woods is to be bitten by mosquitoes and
          | you've ruined it.
          | 
          | it's impossible to get benefit from the woods if you've brought
          | a bug net, and you should stay out rather than ruining the
          | woods for everyone
        
           | vouaobrasil wrote:
            | Rather a myopic and crude take, in my opinion. If I bring
            | out a net, it doesn't change the woods for others. If I
            | introduce AI into society, it does change society for others,
            | even those who don't want to use the tool. You have really no
            | conception of subtlety or logic.
           | 
           | If someone says driving at 200mph is unsafe, then your
           | argument is like saying "driving at any speed is unsafe".
           | Fact is, you need to consider the magnitude and speed of the
           | technology's power and movement, which you seem incapable of
           | doing.
        
       | randometc wrote:
       | What's the GTM role referenced a couple of times in the post?
        
         | tptacek wrote:
         | Go-to-market. Outbound marketing and sales, pipeline
         | definition, analytics.
        
           | randometc wrote:
            | That's how I imagined it: kind of a hybrid of what I've seen
            | called Product Marketing Manager and Product Analyst. But
            | other replies and OpenAI job postings indicate maybe it's a
            | different role - more hands-on building, getting from
            | research to consumer product, maybe?
        
         | skywhopper wrote:
         | "Go To Market", ie the group that turns the tech into products
         | people can use and pay for.
        
         | koolba wrote:
         | GTM = go to market
         | 
         | An actual offering made to the public that can be paid for.
        
       | tptacek wrote:
        | This was good, but the one thing I most wanted to know about
        | what it's like building new products inside of OpenAI is how,
        | and how much, LLMs are involved in the building process.
        
         | wilkomm wrote:
         | That's a good question!
        
         | vFunct wrote:
         | He describes 78,000 public pull requests per engineer over 53
         | days. LMAO. So it's likely 99.99% LLM written.
         | 
         | Lots of good info in the post, surprised he was able to share
         | so much publicly. I would have kept most of the business
         | process info secret.
         | 
         | Edit: NVM. That 78k pull requests is for all users of Codex,
         | not all engineers of Codex.
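          | 
          | (Back-of-the-envelope check: 78,000 PRs over 53 days is
          | roughly 1,470 PRs per day, implausible for any single
          | engineer but plausible summed across all Codex users.)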
        
       | nembal wrote:
        | wham. thanks for sharing anecdotal episodes from OAI's inner
        | mechanics from an eng perspective. I wonder: if OAI weren't
        | married to Azure, would the infra be more resilient and require
        | less eng effort to invent things just to run (at scale)?
        | 
        | What I haven't seen much of is the split between eng and research
        | and how people within the company are thinking about AGI and the
        | future, workforce, etc. Is it the usual SF wonderland, or is
        | there an OAI-specific value alignment once someone is working
        | there?
        
       | simonw wrote:
       | Whoa, there is a _ton_ of interesting stuff in this one, and
        | plenty of information I've never seen shared before. Worth
       | spending some time with it.
        
         | tomrod wrote:
         | Agreed!
        
       | codemac wrote:
       | o7
        
       | upghost wrote:
       | Granted the "OpenAI is not a monolith" comment, interesting that
       | use of AI assisted coding was a curious omission from the article
       | -- no mention if encouraged or discouraged.
        
       | tines wrote:
       | Interesting how ChatGPT's style of writing has made people start
       | bolding so much text.
        
         | isoprophlex wrote:
         | Possibly the dumbest, blandest, most annoying kind of cultural
         | transference imaginable. We dreamed of creating machines in our
         | image, and now we're shaping ourselves in the image of our
         | machines. Ugh.
        
         | layer8 wrote:
         | I remember this being common business practice for written
         | communication (email, design documents) circa 20 years ago, so
         | that people at least read the important points, or can quickly
         | pick them out again later.
        
         | pchristensen wrote:
         | People have bolded important points to make text easier to scan
         | long before AI.
        
       | reducesuffering wrote:
       | "Safety is actually more of a thing than you might guess if you
       | read a lot from Zvi or Lesswrong. There's a large number of
       | people working to develop safety systems. Given the nature of
       | OpenAI, I saw more focus on practical risks (hate speech, abuse,
       | manipulating political biases, crafting bio-weapons, self-harm,
       | prompt injection) than theoretical ones (intelligence explosion,
       | power-seeking). That's not to say that nobody is working on the
       | latter, there's definitely people focusing on the theoretical
       | risks. But from my viewpoint, it's not the focus."
       | 
        | This paragraph doesn't make any sense. If you read a lot of Zvi
        | or LessWrong, the misaligned intelligence explosion _is_ the
        | safety risk you're thinking of! So readers' "guesses" are
        | _actually_ right that OpenAI isn't really following Sam
        | Altman's:
       | 
       | "Development of superhuman machine intelligence is probably the
       | greatest threat to the continued existence of humanity. There are
       | other threats that I think are more certain to happen (for
       | example, an engineered virus with a long incubation period and a
       | high mortality rate) but are unlikely to destroy every human in
       | the universe in the way that SMI could."[0]
       | 
       | [0] https://blog.samaltman.com/machine-intelligence-part-1
        
       | humbleferret wrote:
       | What a great post.
       | 
       | Some points that stood out to me:
       | 
        | - Progress is iterative and driven by a seemingly bottom-up,
        | meritocratic approach, not a top-down master plan. Essentially,
        | good ideas can come from anywhere, and leaders are promoted based
        | on execution and quality of ideas, not political skill.
        | 
        | - People seem empowered to build things without asking permission
        | there, which leads to multiple parallel projects, with the
        | promising ones gaining resources.
       | 
       | - People there have good intentions. Despite public criticism,
       | they are genuinely trying to do the right thing and navigate the
       | immense responsibility they hold.
       | 
       | - Product is deeply influenced by public sentiment, or more
       | bluntly, the company "runs on twitter vibes."
       | 
       | - The sheer cost of GPUs changes everything. It is the single
       | factor shaping financial and engineering priorities. The expense
       | for computing power is so immense that it makes almost every
       | other infrastructure cost a "rounding error."
       | 
        | - I liked the framing of the path to AGI as a three-horse race
        | between OpenAI (consumer product DNA), Anthropic
        | (business/enterprise DNA), and Google (infrastructure/data DNA),
        | with each organisation's unique culture shaping its approach to
        | AGI.
        
         | mikae1 wrote:
          |  _> I liked the framing of the path to AGI as a three-horse
          | race between OpenAI (consumer product DNA), Anthropic
          | (business/enterprise DNA), and Google (infrastructure/data
          | DNA)_
         | 
          | Wouldn't want to forget Meta, which also has consumer product
         | DNA. They literally championed the act of making the consumer
         | the product.
        
           | smath wrote:
           | lol, I almost missed the sarcasm there :)
        
       | krashidov wrote:
       | > giant python monolith
       | 
       | this does not sound fun lol
        
       | jjani wrote:
        | > The thing that I appreciate most is that the company "walks
        | the walk" in terms of distributing the benefits of AI.
        | Cutting edge models aren't reserved for some enterprise-grade
        | tier with an annual agreement. Anybody in the world can jump onto
        | ChatGPT and get an answer, even if they aren't logged in. There's
        | an API you can sign up for and use, and most of the models (even
        | if SOTA or proprietary) tend to quickly make it into the API for
        | startups to use.
       | 
       | The comparison here should clearly be with the other frontier
       | model providers: Anthropic, Google, and potentially Deepseek and
       | xAI.
       | 
       | Comparing them gives the exact opposite conclusion - OpenAI is
       | the _only_ model provider that gates API access to their frontier
        | models behind draconian identity verification (also, Worldcoin
       | anyone?). Anthropic and Google do not do this.
       | 
       | OpenAI hides their model's CoT (inference-time compute,
       | thinking). Anthropic to this day shows their CoT on all of their
       | models.
       | 
       | Making it pretty obvious this is just someone patting themselves
       | on the back and doing some marketing.
        
         | harmonic18374 wrote:
          | Yes, and also the framing of OpenAI as this great nimble
          | startup that can turn on a dime, while in reality Google
          | reacted to _them_ and has now surpassed them technically in
          | every area except image prompt adherence.
        
       | hinterlands wrote:
       | It is fairly rare to see an ex-employee put a positive spin on
       | their work experience.
       | 
       | I don't think this makes OpenAI special. It's just a good
       | reminder that the overwhelming majority of "why I left" posts are
       | basically trying to justify why a person wasn't a good fit for an
       | organization by blaming it squarely on the organization.
       | 
       | Look at it this way: the flip side of "incredibly bottoms-up"
       | from this article is that there are people who feel rudderless
       | because there is no roadmap or a thing carved out for them to
       | own. Similarly, the flip side of "strong bias to action" and
       | "changes direction on a dime" is that everything is chaotic and
       | there's no consistent vision from the executives.
       | 
       | This cracked me up a bit, though: "As often as OpenAI is maligned
       | in the press, everyone I met there is actually trying to do the
       | right thing" - yes! That's true at almost every company that ends
       | up making morally questionable decisions! There's no Bond villain
       | at the helm. It's good people rationalizing things. It goes like
       | this: we're the good guys. If we were evil, we could be doing
       | things so much worse than X! Sure, some might object to X, but
       | they miss the big picture: X is going to indirectly benefit the
       | society because we're going to put the resulting money and power
       | to good use. Without us, you could have the bad guys doing X
       | instead!
        
         | harmonic18374 wrote:
         | I would never post any criticism of an employer in public. It
         | can only harm my own career (just as being positive can only
         | help it).
         | 
         | Given how vengeful Altman can reportedly be, this goes double
         | for OpenAI. This guy even says they scour social media!
         | 
         | Whether subconsciously or not, one purpose of this post is
         | probably to help this guy's own personal network along; to try
         | and put his weirdly short 14-month stint in the best possible
         | light. I think it all makes him look like a mark, which is
         | desirable for employers, so I guess it is working.
        
           | m00x wrote:
            | Calvin co-founded Segment, which was acquired for $3.2B.
            | He's not your typical employee.
        
             | harmonic18374 wrote:
              | He is still manipulable and driven by incentives like
              | anyone else.
        
               | m00x wrote:
               | What incentives? It's not a very intellectual opinion to
               | give wild hypotheticals with nothing to go on other than
               | "it's possible".
        
               | harmonic18374 wrote:
               | I am not trying to advance wild hypotheticals, but
               | something about his behavior does not quite feel right to
               | me. Someone who has enough money for multiple lifetimes,
               | working like he's possessed, to launch a product
               | minimally different than those at dozens of other
               | companies, and leaving his wife with all the childcare,
               | then leaving after 14 months and insisting he was not
               | burnt out but without a clear next step, not even, "I
               | want to enjoy raising my child".
               | 
               | His experience at OpenAI feels overly positive and
               | saccharine, with a few shockingly naive comments that
               | others have noted. I think there is obvious incentive.
               | One reason for this is, he may be in burnout, but does
               | not want to admit it. Another is, he is looking to the
               | future: to keep options open for funding and connections
               | if (when) he chooses to found again. He might be lonely
               | and just want others in his life. Or to feel like he's
               | working on something that "matters" in some way that his
               | other company didn't.
               | 
               | I don't know at all what he's actually thinking. But the
               | idea that he is resistant to incentives just because he
               | has had a successful exit seems untrue. I know people who
               | are as rich as he is, and they are not much different
               | than me.
        
         | Bratmon wrote:
         | > It is fairly rare to see an ex-employee put a positive spin
         | on their work experience
         | 
         | Much more common for OpenAI, because you lose all your vested
         | equity if you talk negatively about OpenAI after leaving.
        
           | rvz wrote:
           | Absolutely correct.
           | 
            | There is a reason why there was cult-like behaviour on X
            | amongst the employees in support of bringing back Sam as CEO
            | when he was kicked out by the OpenAI board of directors at
            | the time.
           | 
           |  _" OpenAI is nothing without it's people"_
           | 
           | All of "AGI" (which actually was the lamborghinis,
           | penthouses, villas and mansions for the employees) was all on
           | the line and on hold if that equity went to 0 or would be
           | denied selling their equity if they openly criticized OpenAI
           | after they left.
        
             | tptacek wrote:
             | Yes, and the reason for that is that employees at OpenAI
             | believed (reasonably) that they were cruising for Google-
             | scale windfall payouts from their equity over a relatively
             | short time horizon, and that Altman and Brockman leaving
             | OpenAI and landing at a well-funded competitor, coupled
             | with OpenAI corporate management that publicly opposed
             | commercialization of their technology, would torpedo those
             | payouts.
             | 
             | I'd have sounded cult-like too under those conditions (but
             | I also don't believe AGI is a thing, so would not have a
             | countervailing cult belief system to weigh against that
             | behavior).
        
               | kaashif wrote:
               | > I also don't believe AGI is a thing
               | 
               | Why not? I don't think we're anywhere close, but there
               | are no physical limitations I can see that prevent AGI.
               | 
               | It's not impossible in the same way our current
               | understanding indicates FTL travel or time travel is.
        
           | fragmede wrote:
           | The Silenced No More Act" (SB 331), effective January 1,
           | 2022, in California, where OpenAI is based, limits non-
           | disparagement clauses and retribution by employers, likely
           | making that illegal in California, but I am not a lawyer.
        
             | swat535 wrote:
             | Even if it's illegal, you'll have to fight them in court.
             | 
             | OpenAI will certainly punish you for this and most likely
             | make an example out of you, regardless of the outcome.
             | 
              | The goal is corporate punishment, not the rule of law.
        
           | tedsanders wrote:
           | OpenAI never enforced this, removed it, and admitted it was a
           | big mistake. I work at OpenAI and I'm disappointed it
           | happened but am glad they fixed it. It's no longer hanging
           | over anyone's head, so it's probably inaccurate to suggest
           | that Calvin's post is positive because he's trying to protect
           | his equity from being taken. (though of course you could
           | argue that everyone is biased to be positive about companies
           | they own equity in, generally)
        
             | gwern wrote:
             | > It's no longer hanging over anyone's head,
             | 
             | The tender offer limitations still are, last I heard.
             | 
             | Sure, maybe OA can no longer cancel your vested equity for
             | $0... but how valuable is (non-dividend-paying) equity you
             | can't sell? (How do you even borrow against it, say?)
        
               | tedsanders wrote:
               | Nope, happy to report that was also fixed.
               | 
               | (It would be a pretty fake solution if equity
               | cancellation was halted, but equity could still be
               | frozen. Cancelled and frozen are de facto identical until
               | the first dividend payment, which could take decades.)
        
         | tptacek wrote:
         | Most posts of the form "Reflections on [Former Employer]" on HN
         | are positive.
        
         | ben_w wrote:
         | > It is fairly rare to see an ex-employee put a positive spin
         | on their work experience.
         | 
         | FWIW, I have positive experiences about many of my former
         | employers. Not _all_ of them, but many of them.
        
         | rrrrrrrrrrrryan wrote:
         | > There's no Bond villain at the helm. It's good people
         | rationalizing things.
         | 
         | I worked for a few years at a company that made software for
         | casinos, and this was absolutely not the case there. Casinos
         | absolutely have fully shameless villains at the helm.
        
         | torginus wrote:
          | Here's what I think - while Altman was busy trying to convince
          | the public that AGI was coming in the next two weeks, with
          | vague tales that were equally ominous and utopian, he (and his
          | fellow leaders) were extremely busy trying hard to turn
          | OpenAI into a product company with some killer offerings, and
          | from the article, it seems they were rather good and successful
          | at that.
          | 
          | Considering the high stakes, money, and undoubtedly the ego
          | involved, the writer might have acquired a few bruises along
          | the way, or might have lost out on some political infighting
          | (remember how they mentioned they built multiple Codex
          | prototypes; it must've sucked to see some other people's
          | version chosen instead of your own).
          | 
          | Another possible explanation is that the writer has just had
          | enough - enough money to last a lifetime, just started a
          | family, made his mark on the world, and was no longer compelled
          | (or able) to keep up with methed-up fresh college grads.
        
           | matco11 wrote:
           | > remember how they mentioned they built multiple Codex
           | prototypes, it must've sucked to see some other people's
           | version chosen instead of your own
           | 
            | Well, it depends on people's mindset. It's like doing a
           | hackathon and not winning. Most people still leave inspired
           | by what they have seen other people building, and can't wait
           | to do it again.
           | 
           | ...but of course not everybody likes to go to hackathons
        
         | curious_cat_163 wrote:
         | > It is fairly rare to see an ex-employee put a positive spin
         | on their work experience.
         | 
         | I liked my jobs and bosses!
        
         | Spooky23 wrote:
         | I'm not saying this about OpenAI, because I just don't know.
         | But Bond villains exist.
         | 
         | Usually the level 1 people are just motivated by power and
         | money to an unhealthy degree. The worst are true believers in
         | something. Even something seemingly mild.
        
       | smeeger wrote:
       | > everyone I met there is actually trying to do the right thing
       | 
        | making human beings obsolete is not the right thing. nobody at
        | OpenAI is doing the right thing.
        | 
        | in another part of the post he says safety teams work primarily
        | on making sure the models don't say anything racist, as well as
        | limiting helpful tips on building weapons of terror... and that
        | AGI safety is basically not a focus. i don't think this company
        | should be allowed to exist. they don't have ANY right to threaten
        | the existence and wellbeing of me and my kids!
        
       | seydor wrote:
       | seems like the whole thing was meant to be a jab at Meta
        
         | ishita159 wrote:
         | was it?
         | 
          | it was, however, interesting to learn that it isn't just Meta
          | poaching from OpenAI; the reverse also happened.
        
           | latency-guy2 wrote:
            | Very apt. OpenAI's start was always poach-central; we know
            | this from the executive email leaks via Elon and Sam,
            | respectively.
            | 
            | Any complaining on any company's part about "poaching" is
            | nonsense regardless, IMO.
        
         | pchristensen wrote:
          | I definitely didn't get that feeling. There was a whole section
          | about how their infra resembles Meta's and how they've hired
          | excellent engineers from Meta.
        
       | bawana wrote:
        | This is a politically correct farewell letter. Obviously
        | something we little people who need jobs have to resort to so the
        | next HR manager doesn't think we are a risk to stock valuation.
        | For a deeper understanding, read Empire of AI by Karen Hao. She
        | defrocks Sam Altman to reveal he is just another human. Like
        | Steve Jobs, he is an adept salesman appealing to the naive
        | altruistic sentiments of humans while maintaining his singular
        | focus on scale. Not so different from the archetype of
        | Rockefeller in his pursuit of monopoly through scale using any
        | means, Sam is no different than Google, which even forgot its own
        | rallying cry 'don't be evil'. Other actors in the story seem to
        | have been infected by the same meme virus, leaving OpenAI for
        | their own empires - Musk left after he and Altman conflicted over
        | who would be CEO (birth of xAI). Amodei, his sister and others
        | left to start Anthropic. Sutskever left to start 'safe something
        | or other' (smacks of the same misdirection Sam used when OpenAI
        | formed as a nonprofit), giving the idea of a nonprofit a mantle
        | of evil since OpenAI has pivoted to profit.
        | 
        | The bottom line is that scaling requires money, and the only way
        | to get that in the private sector is to lure those with money
        | with the temptation that they can multiply their wealth.
        | 
        | Things could have been different in a world before financial
        | engineers bankrupted the US (the crises of Enron, Salomon Bros,
        | and the 2008 mortgage debacle all added hundreds of billions to
        | US debt as the govt bought the 'too big to fail' kool-aid and
        | bailed out Wall Street by indenturing Main Street). Now 1/4 of
        | our budget is simply interest payment on this debt. There is no
        | room for govt spending on a moonshot like AI. This environment in
        | 1960 would have killed Kennedy's inspirational moonshot of going
        | to the moon while it was still an idea in his head in his post-
        | coital bliss with Marilyn at his side.
        | 
        | Today our govt needs money just like all the other scrooge-
        | infected players in the tower of debt that capitalism has built.
        | 
        | Ironically, it seems China has a better chance now. Its release
        | of DeepSeek and the full set of parameters is giving it a veneer
        | of altruistic benevolence that is slightly more believable than
        | what we see here in the West. China may win simply on
        | thermodynamic grounds. Training and research in DL consume
        | terawatt-hours and hundreds of thousands of chips. Not only are
        | the US models on older architectures (10-100x less energy
        | efficient), but the 'competition' of multiple players in the US
        | multiplies the energy requirements.
        | 
        | Would govt oversight have been a good thing? Imagine if General
        | Motors, Westinghouse, Bell Labs, and Ford had each competed in
        | 1940 with their own Manhattan Project to develop nuclear weapons.
        | Would the proliferation of nuclear weapons have resulted in human
        | extinction by now?
        | 
        | Will AI's contribution to global warming be just as toxic as
        | global thermonuclear war?
        | 
        | These are the questions that come to mind after Hao's historical
        | summary.
        
       | bagxrvxpepzn wrote:
       | He joins a proven unicorn at its inflection point and then leaves
       | mere days after hitting his vesting cliff. All of this "learning"
       | and "experience" talk is sopping wet with cynicism.
        
         | dang wrote:
         | Can you please make your substantive points without crossing
         | into personal attack and/or name-calling?
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
           | bagxrvxpepzn wrote:
           | Sorry, I removed the personal attack.
        
             | dang wrote:
             | I appreciate the edit, but "sopping wet with cynicism"
             | still breaks the site guidelines, especially this one: "
             | _Please respond to the strongest plausible interpretation
              | of what someone says, not a weaker one that's easier to
             | criticize. Assume good faith._"
             | 
             | https://news.ycombinator.com/newsguidelines.html
        
         | guywithabike wrote:
         | He co-founded and sold Segment. You think he was just at OpenAI
         | to collect a check? He lays out exactly why he joined OpenAI
         | and why he's leaving. If you think everyone does things only
         | for cynical reasons, that may say more about your own
         | impulses than about others'.
        
           | cainxinth wrote:
           | Just because someone claims they are speaking in good faith
           | doesn't mean we have to take their word for it. Most people
           | in tech dealing with big money are doing it for cynical
           | reasons. The talk of changing the world or "doing something
           | hard" is typically just marketing.
        
             | m00x wrote:
             | Calvin works incredibly hard and has very little ego. I was
             | surprised he joined OpenAI since he's loaded from the
             | Segment acquisition, but if anyone would do this, it's
             | him. He's always looking to find the hardest problem and
             | work on it.
             | 
             | That's what he did at Segment even in the later stages.
        
         | tptacek wrote:
         | I did not pick up much cynicism in this post. What about it
         | seemed cynical to you?
        
           | bagxrvxpepzn wrote:
           | Given that he leaves OpenAI almost immediately after hitting
           | his 25% vesting cliff, it seems like his employment at OpenAI
           | and this blog post (which makes him and OpenAI look good
            | while making the reader feel good) were done cynically,
            | i.e. primarily in his self-interest. What makes it even
            | worse is
           | his stated reason for leaving:
           | 
           | > It's hard to go from being a founder of your own thing to
           | an employee at a 3,000-person organization. Right now I'm
           | craving a fresh start.
           | 
            | This is just wholly irrational for someone whose
            | credentials indicate a capacity for applying critical
            | thinking towards their goals. People who operate at that
           | level don't often act on impulse or suddenly realize they
           | want to do something different. It seems much more likely he
           | intentionally planned to give himself a year of vacation at
           | OpenAI, which allows him to hedge a bit while taking a
           | breather before jumping back into being a founder.
           | 
           | Is this essentially speculation? Yes. Is it cynical to assume
           | he's acting cynically? Yes. Speculation on his true motives
           | is necessary because otherwise we'll never get confirmation,
           | short of him openly admitting to it (which is still fraught).
           | We have to look at behaviors and actions and assess
           | likelihoods from there.
        
             | m00x wrote:
              | He's likely received hundreds of millions from the
              | Segment acquisition. Do you think he cares about the
              | OpenAI vesting cliff?
             | 
              | It's more likely that he was there to see how OpenAI was
              | run so he could learn and build something similar on his
              | own after.
        
             | tptacek wrote:
             | There's nothing cynical about leaving a job after cliffing.
             | If a company wants a longer commitment than a year before
             | issuing equity, it can set a longer cliff. We're all adults
             | here.
        
       | suncemoje wrote:
        | "the right people can make magic happen"
       | 
       | :-)
        
       | fidotron wrote:
       | > There's a corollary here-most research gets done by nerd-
       | sniping a researcher into a particular problem. If something is
       | considered boring or 'solved', it probably won't get worked on.
       | 
       | This is a very interesting nugget, and if accurate this could
       | become their Achilles heel.
        
         | ACCount36 wrote:
         | It's not "their" Achilles heel. It's the Achilles heel of the
         | way humans work.
         | 
         | Most top-of-their-field researchers are on top of their field
          | because they really love it, and are willing to sink an
          | insane number of hours into doing things they love.
        
       | ishita159 wrote:
        | This post was such a brilliant read: how they still have a
        | YC-style startup culture, are meritocratic, and people get to
        | work on things they find interesting.
        | 
        | As an early-stage founder, I worry about the following a lot:
        | 
        | - changing directions fast when I lose conviction
        | - things breaking in production
        | - speed, or the lack of it
        | 
        | I learned to actually not worry about the first two.
        | 
        | But if OpenAI shipped Codex in 7 weeks, small startups have
        | lost the speed advantage they had. Big reminder to figure out
        | better ways to solve for speed.
        
       | vonneumannstan wrote:
       | >Safety is actually more of a thing than you might guess
       | 
        | Considering that all the people who led the different safety
        | teams have left or been fired, that Superalignment has been a
        | total bust, and the various accounts from other employees
        | about the lack of support for safety work, I find this
        | statement incredibly out of touch and borderline intentionally
        | misleading.
        
       | imiric wrote:
       | Thanks for sharing.
       | 
       | One thing I was interested to read but didn't find in your post
       | is: does everyone believe in the vision that the leadership has
       | shared publicly, e.g. [1]? Is there some skepticism that the
       | current path leads to AGI, or has everyone drunk the Kool-Aid? If
       | there is some dissent, how is it handled internally?
       | 
       | [1]: https://blog.samaltman.com/the-gentle-singularity
        
         | fragmede wrote:
          | Externally there's no rigorous definition of what
          | constitutes AGI, so I'd guess internally it's not one
          | monolithic thing they're targeting either. You'd need
          | everyone to take a class about the nature of intelligence,
          | and all its different kinds, just to begin with. There's
          | undoubtedly internal dissent as to the best way to achieve
          | the chosen milestones on the way there, as well as
          | disagreement that those are the right milestones to begin
          | with. Think tactical disagreement, not strategic. If you
          | didn't think that AGI were ever possible with LLMs, would
          | you even be there to begin with?
        
           | imiric wrote:
           | Well, Sam Altman has a clear definition of ASI, and AGI is
           | something they've been thinking about for a long time, so
           | presumably they must have some accepted definition of it.
           | 
           | My question was whether everyone believes this vision that
           | ASI is "close", and more broadly whether this path leads to
           | AGI.
           | 
           | > If you didn't think that AGI were ever possible with LLMs,
           | would you even be there to begin with?
           | 
           | People can have all sorts of reasons for working with a
           | company. They might want to work on cutting-edge tech with
           | smart people and infinite resources, for investment or
           | prestige, but not necessarily buy into the overarching
           | vision. I'm just wondering whether such a profile exists
           | within OpenAI, and if so, how it is handled.
        
         | tedsanders wrote:
          | Not the author, but I work at OpenAI. There's a wide
          | variety of viewpoints, and it's fine for employees to
          | disagree on timelines
         | and impact. I myself published a 100-page paper on why I think
         | transformative AGI by 2043 is quite unlikely
         | (https://arxiv.org/abs/2306.02519). From informal discussion, I
         | think the vast majority of employees don't think that we're
         | mere years from a post-scarcity utopia where we can drink mai
         | tais on the beach all day. But there is a lot of optimism about
         | the rapid progress in AI, and I do think that it's harder to
         | forecast the path of a technology that has the potential to
         | improve itself. So much depends on your definition of AGI. In a
         | sense, GPT-4 is already AGI in the literal sense that it's an
         | artificial intelligence with some generality. But in the sense
         | of automating the economy, it's of course not close.
        
           | criddell wrote:
           | > depends on your definition of AGI
           | 
           | What definition of AGI is used at OpenAI?
           | 
           | My definition: AGI will be here when you can put it in a
            | robot body in the real world and interact with it like you
           | would a person. Ask it to drive your car or fold your laundry
           | or make a mai tai and if it doesn't know how to do that, you
           | show it, and then it can.
        
             | tedsanders wrote:
             | In the OpenAI charter, it's "highly autonomous systems that
             | outperform humans at most economically valuable work."
             | 
             | https://openai.com/charter/
        
           | imiric wrote:
           | Thank you!
           | 
           | The hype around this tech strongly promotes the narrative
           | that we're close to exponential growth, and that AGI is right
           | around the corner. That pretty soon AI will be curing
           | diseases, eradicating poverty, and powering humanoid robots.
           | These scenarios are featured in the AI 2027 predictions.
           | 
           | I'm very skeptical of this based on my own experience with
           | these tools, and rudimentary understanding of how they work.
           | I'm frankly even opposed to labeling them as intelligent in
           | the same sense that we think about human intelligence. There
           | are certainly many potentially useful applications of this
           | technology that are worth exploring, but the current ones are
           | awfully underwhelming, and the hype to make them seem more
            | than they are is exhausting. Not to mention that their
            | biggest danger, further degrading public discourse and
            | overwhelming all our communication channels with even more
            | spam and disinformation, is largely being ignored. AI
            | companies love to talk about alignment and safety, yet
            | these more immediate threats are never addressed.
           | 
           | Anyway, it's good to know that there are disagreements about
           | the impact and timelines even inside OpenAI. It will be
           | interesting to see how this plays out, if nothing else.
        
       | throwawayohio wrote:
       | > As often as OpenAI is maligned in the press, everyone I met
       | there is actually trying to do the right thing.
       | 
       | I appreciate where the author is coming from, but I would have
       | just left this part out. If there is anything I've learned during
        | my time in tech (ESPECIALLY in the Bay Area), it's that the people
       | you didn't meet are absolutely angling to do the wrong thing(TM).
        
         | jjulius wrote:
         | When your work provides lunch in a variety of different
         | cafeterias all neatly designed to look like standalone
         | restaurants, directly across from which is an on-campus bank
         | that will assist you with all of your financial needs before
         | you take your company-operated Uber-equivalent to the next
         | building over and have your meeting either in that building's
         | ballpit, or on the tree-covered rooftop that - for some reason
         | - has foxes on top, it's easy to focus only on the tiny "good"
         | thing you're working on and not the steaming hot pile of
         | garbage that the executives at your company are focused on but
         | would rather you not see.
         | 
         | Edit: And that's to say nothing of the very generous pay...
        
         | myaccountonhn wrote:
          | I've been in circles with very rich and somewhat influential
          | tech people, and there's a lot of talk about helping others;
          | but beneath that veneer you notice that many of them are
          | just ripping people off, doing coke, and engaging in self-
          | centered spiritual practices (especially the crypto people).
         | 
          | I also don't trust that people within the system can assess
          | whether what they're doing is good or not. I've talked with
          | higher-ups in fashion companies who genuinely believe their
          | company is doing so much great work for the environment when
          | they basically invented fast fashion. I've felt firsthand
          | how my mind slowly warped itself into believing that ad-tech
          | isn't so bad for the world when I worked for an ad-tech
          | company, and only after leaving did I realize how wrong I
          | was.
        
         | paxys wrote:
         | And it's not just about some people doing good and others doing
         | bad. Individual employees all doing the "right thing" can still
         | be collectively steered in the wrong direction by higher ups.
         | I'd say this describes the entirety of big tech.
        
         | archagon wrote:
         | Yes. We already know that Altman parties with extremists like
         | Yarvin and Thiel and donates millions to far-right political
         | causes. I'm afraid the org is rotten at its core. If only the
         | coup had succeeded.
        
       | LZ_Khan wrote:
        | This is just the exact same culture as DeepMind, minus the
        | "everything on Slack" bullet point.
        
       | paxys wrote:
       | > An unusual part of OpenAI is that everything, and I mean
       | everything, runs on Slack.
       | 
       | Not that unusual nowadays. I'd wager every tech company founded
       | in the last ~10 years works this way. And many of the older ones
       | have moved off email as well.
        
       | zzzeek wrote:
       | > On the other hand, you're trying to build a product that
       | hundreds of millions of users leverage for everything from
       | medical advice to therapy.
       | 
       | ... then the next paragraph
       | 
       | > As often as OpenAI is maligned in the press, everyone I met
       | there is actually trying to do the right thing.
       | 
       | not if you're trying to replace therapists with chatbots, sorry
        
       | ThouYS wrote:
        | These one- or two-year tenures... I don't know, man.
        
       | AIorNot wrote:
        | I'm 50 and have worked at a few cool places and lots of boring
        | ones. To paraphrase Tolstoy, who tends to be right: all happy
        | families are alike, and every unhappy family is unhappy in its
        | own way.
        | 
        | OpenAI currently selects for the brightest and most excited
        | young minds (and a lot of money). Bright, young (as in full of
        | energy), excited people will work well anywhere, especially if
        | given a fair amount of autonomy.
        | 
        | Young people talking about how hard they worked is not a sign
        | of a great corp culture, just a sign that they are in the
        | super excited stage of their careers.
        | 
        | In the long run, who knows. I tend to view these companies as
        | groups of like-minded people, and groups of people change; the
        | dynamic can change overnight. So if they can sustain that
        | culture, sure, but who knows.
        
         | rogerkirkness wrote:
          | Calvin is the founder/CTO of Segment; not old, but also not
          | some doe-eyed new grad.
        
           | jonas21 wrote:
           | On one hand, yes. But on the other hand, he's still in his
           | 30s. In most fields, this would be considered young / early
           | career. It kind of reinforces the point that bright, young
           | people can get a lot done in the tech world.
        
             | paulcole wrote:
             | > In most fields, this would be considered young / early
             | career
             | 
             | Is it considered young / early career in this field?
        
             | m00x wrote:
              | Calvin is loaded from the Segment exit; he would not
              | work if he wasn't excited about the work. The other
              | founders just went on to do their own thing or non-
              | profits.
              | 
              | I worked there for a few years, and Calvin is definitely
              | more of the grounded engineering guy. He would introduce
              | himself as an engineer and just get talking code. He
              | would spend most of his time with the SRE/core team
              | trying to tackle the hardest technical problem at the
              | company.
        
         | tptacek wrote:
         | I said this elsewhere on the thread and so apologize for
         | repeating, but: I know mid-career people working at this firm
         | who have been through these conditions, and they were energized
         | by the experience. They're shipping huge stuff that tens of
         | millions of people will use almost immediately.
         | 
         | The cadence we're talking about isn't sustainable --- has never
         | been sustained anywhere --- but if insane sprints like this (1)
         | produce intrinsically rewarding outcomes and (2) punctuate
         | otherwise-sane work conditions, they can work out fine for the
         | people involved.
         | 
         | It's completely legit to say you'd never take a job where this
         | could be an expectation.
        
       | theletterf wrote:
        | For a company that has grown so much in such a short time, I
        | continue to be surprised by its lack of technical writers.
        | Saying the docs could be better is a euphemism, and I still
        | can't find fellow tech writers working there. Compare this
        | with Anthropic and its documentation.
        | 
        | I don't know what the rationale is for not hiring tech
        | writers, other than nobody having suggested it yet, which is
        | sad. Great dev tools require great docs, and great docs
        | require teams that own them and grow them as a product.
        
         | mlinhares wrote:
          | The higher-ups don't think there's value in that. Back at
          | DigitalOcean they had an amazing tech writing team, with
          | people with years of experience, doing some of the best tech
          | docs in the industry. When the layoffs started, the writing
          | team was the first to be cut.
          | 
          | People look at it as a cost and nothing else.
        
       | frankfrank13 wrote:
       | > As often as OpenAI is maligned in the press, everyone I met
       | there is actually trying to do the right thing
       | 
       | I doubt many people would say something contrary to this about
       | their (former) colleagues, which means we should always take this
       | with a (large) grain of salt.
       | 
        | Do I think (most) AT&T employees wanted to let the NSA spy on
        | us? Probably not. Google engineers and ICE? Palantir and...
        | well, I think everyone there knows what Palantir does.
        
       | JonathanRaines wrote:
       | Fascinating that you chose to compare OpenAI's culture to Los
        | Alamos. I can't tell whether or not you're hinting that AI is
        | as world-ending as nuclear weapons.
        
       | troupo wrote:
       | > As often as OpenAI is maligned in the press, everyone I met
       | there is actually trying to do the right thing.
       | 
       | To quote Jonathan Nightingale from his famous thread on how
       | Google sabotaged Mozilla [1]:
       | 
       | --- start quote ---
       | 
       | The question is not whether individual sidewalk labs people have
       | pure motives. I know some of them, just like I know plenty on the
       | Chrome team. They're great people. But focus on the behaviour of
       | the organism as a whole. At the macro level, google/alphabet is
       | very intentional.
       | 
       | --- end quote ---
       | 
       | Replace that with OpenAI
       | 
       | [1]
       | https://archive.is/2019.04.15-165942/https://twitter.com/joh...
        
       | cess11 wrote:
        | 20 years from now, the only people who will remember how much
        | you worked are your family, especially your kids.
       | 
       | Seems like an awful place to be.
        
       | viccis wrote:
       | >It's hard to imagine building anything as impactful as AGI
       | 
       | >...
       | 
       | >OpenAI is also a more serious place than you might expect, in
       | part because the stakes feel really high. On the one hand,
       | there's the goal of building AGI-which means there is a lot to
       | get right.
       | 
        | I'm kind of surprised people are still drinking this AGI
        | Kool-Aid
        
       | brcmthrowaway wrote:
        | Lucky to be able to write this... likely just vested with FU
        | money!
        
       | yahoozoo wrote:
       | It would be interesting to read the memoirs of former OpenAI
        | employees who dive into whether they thought the company was on
       | the right track towards AGI. Of course, that's an NDA violation
       | at best.
        
       | jordanmorgan10 wrote:
        | I'm at a point in my life and career where I'd never entertain
        | working those hours. Missed basketball games, seeing kids come
        | home from school, etc. I do think when I first started out,
        | and had no kiddos, crazy sprints like that would've been
        | exhilarating. No chance now, though.
        
         | chribcirio wrote:
         | > I'm at a point my life and career where I'd never entertain
         | working those hours.
         | 
         | That's ok.
         | 
          | Just don't complain about the cost of daycare, private
          | school tuition, or your parents' senior home/medical bills.
        
       | dcreater wrote:
        | This is Silicon Valley culture on steroids. I really have to
        | question whether it is positive for any involved party. Codex
        | has almost no mindshare, and rightly so. It's a textbook
        | also-ran, except it came from the most dominant player and was
        | outpaced by Claude Code within weeks.
        | 
        | Why go through all that? A much better scenario would have
        | been OpenAI carefully assessing different approaches to
        | agentic coding and releasing a more fully baked product with
        | solid differentiation. Even Amazon just did that with Kiro.
        
       ___________________________________________________________________
       (page generated 2025-07-15 23:00 UTC)