[HN Gopher] Everyone Mark Zuckerberg Has Hired So Far for Meta's...
       ___________________________________________________________________
        
       Everyone Mark Zuckerberg Has Hired So Far for Meta's
       'Superintelligence' Team
        
       Author : mji
       Score  : 34 points
       Date   : 2025-06-30 19:13 UTC (3 hours ago)
        
 (HTM) web link (www.wired.com)
 (TXT) w3m dump (www.wired.com)
        
       | JLvL wrote:
       | * Trapit Bansal: pioneered RL on chain of thought and co-creator
        | of o-series models at OpenAI.
       | 
       | * Shuchao Bi: co-creator of GPT-4o voice mode and o4-mini.
        | Previously led multimodal post-training at OpenAI.
       | 
       | * Huiwen Chang: co-creator of GPT-4o's image generation, and
        | previously invented MaskGIT and Muse text-to-image architectures
       | at Google Research.
       | 
       | * Ji Lin: helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5,
       | 4o-imagegen, and Operator reasoning stack.
       | 
       | * Joel Pobar: inference at Anthropic. Previously at Meta for 11
       | years on HHVM, Hack, Flow, Redex, performance tooling, and
       | machine learning.
       | 
       | * Jack Rae: pre-training tech lead for Gemini and reasoning for
       | Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at
       | DeepMind.
       | 
       | * Hongyu Ren: co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3
        | and o4-mini. Previously led a post-training group at OpenAI.
       | 
       | * Johan Schalkwyk: former Google Fellow, early contributor to
       | Sesame, and technical lead for Maya.
       | 
       | * Pei Sun: post-training, coding, and reasoning for Gemini at
        | Google DeepMind. Previously created the last two generations of
       | Waymo's perception models.
       | 
       | * Jiahui Yu: co-creator of o3, o4-mini, GPT-4.1 and GPT-4o.
        | Previously led the perception team at OpenAI, and co-led
       | multimodal at Gemini.
       | 
       | * Shengjia Zhao: co-creator of ChatGPT, GPT-4, all mini models,
        | 4.1 and o3. Previously led synthetic data at OpenAI.
        
         | smeeger wrote:
          | How could these people actively try to open Pandora's box
          | and make all humans obsolete? If we survive this, I imagine
          | there will be something like the Nuremberg trials for these
          | people who traded everyone's safety and wellbeing for money,
          | and I hope the results will be the same.
        
       | weird_trousers wrote:
       | So much wasted money it makes me sick...
       | 
        | So much money is needed to solve other problems, especially in
        | health.
        | 
        | I don't blame the newcomers, but Zuckerberg.
        
         | dekhn wrote:
          | Zuck funds health research (a lot, and very ML-focused) already
        
           | xvector wrote:
           | Wild how HN is flagging this objectively correct comment into
           | the ground because "zuck bad!!1"
        
             | dekhn wrote:
             | I really do wish there was a way to downvote "because I
             | don't like what the person is saying, even if it's true"
        
         | linotype wrote:
          | Better spent on ML than on the next VR vaporware.
        
         | twoodfin wrote:
         | This stuff is ridiculously important for healthcare: It's a
         | demographic fact that both the US and the world at large are
         | simply not training enough doctors and nurses to provide
         | today's standard of care at current staffing levels as the
         | population ages.
         | 
         | We need massive productivity boosts in medicine just as fast as
         | we can get them.
        
           | hn_throwaway_99 wrote:
           | I sincerely doubt this understaffing of medical professionals
           | is a technology problem, and I believe it much more likely to
           | be an economic structural problem. And overall, I think that
           | powerful generative AI will make these economic structural
           | problems much worse.
        
           | trainerxr50 wrote:
           | It doesn't take super intelligence to give my elderly father
           | a bath or wipe his ass.
           | 
            | I think the main problem is that we would almost need an
            | economic depression so that, at the margin, there were far
            | fewer alternative jobs available than giving my father a
            | bath.
            | 
            | Then consider that, say, we do get superintelligence that
            | adds a few years to his life because of better diagnostics
            | and treatment of disease. That actually makes the day-to-day
            | care problem worse in the aggregate.
           | 
            | We are headed towards a boomer long-term care disaster, and
            | nothing is going to avert it. Boomers I talk to are
            | completely in denial about this problem too. They are
            | expecting the long-term care situation to look like what
            | their parents had. I just try to convince every boomer I
            | know to do everything they can physically now to better
            | themselves and stay out of long-term care as long as
            | possible.
        
         | cheevly wrote:
          | Do you realize how much health-related research Zuckerberg's
         | foundation does? There was literally a post on here last week
         | about it, geez.
        
         | xvector wrote:
          | Superintelligence or even just AGI short-circuits all our
         | problems.
        
       | goatlover wrote:
       | Will the Superintelligence finally make the Metaverse profitable
       | and popular?
        
       | jxjnskkzxxhx wrote:
        | Is Mark Zuckerberg systematically behind the curve on every hype?
        
         | pyman wrote:
         | He's just trying to figure out how to monetise your WhatsApp
         | messages
        
           | 4ndrewl wrote:
           | In the "metaverse"
        
         | JumpCrisscross wrote:
          | > _Is Mark Zuckerberg systematically behind the curve on every
         | hype?_
         | 
          | Trend following with chutzpah, particularly through
         | acquisitions, has been a winning strategy for Zuckerberg and
         | his shareholders.
        
         | bamboozled wrote:
          | This includes fashion and hairstyles, it seems...
        
       | pyman wrote:
       | Mark Zuckerberg hiring top AI researchers worries me more than
       | Iran hiring nuclear scientists.
        
         | quantified wrote:
         | With luck, they'll vaporize billions of dollars on nothing of
         | consequence.
         | 
         | If they come up with anything of consequence, we'll have an
         | incredibly higher level of Facebook monitoring of our lives in
         | all scopes. Also such a level of AI crap (info/disinfo in
         | politics, crime, arts, etc.) that ironically in-person
         | exchanges will be valued more highly than today. When
         | everything you see on pixels is suspect, only the tangible can
         | be trusted.
        
           | smeeger wrote:
            | Do you remember the chorus of people on HN two years ago who
           | said that the next AI winter was already upon us?
        
         | smeeger wrote:
          | Better than Sam Altman having them.
        
       | hooloovoo_zoo wrote:
       | Poor Sam Altman, 300B worth of trade secrets bought out from
       | under him for a paltry few hundred million.
        
         | JumpCrisscross wrote:
         | > _Poor Sam Altman, 300B worth of trade secrets bought out from
         | under him for a paltry few hundred million_
         | 
         | Sorry, you don't lose people when you treat them well. Add to
         | that Altman's penchant for organisational dysfunction and the
          | (in part resulting) illiquidity of OpenAI's employees' equity-
          | not-equity, and this makes a lot of sense. Broadly, it's good
         | for the American AI ecosystem for this competition for talent
         | to exist.
        
           | hn_throwaway_99 wrote:
           | In retrospect, I wonder if the original ethos of the non-
           | profit structure of OpenAI was a scam from the get go, or
           | just woefully naive. And to emphasize, I'm not talking just
           | about Altman.
           | 
           | That is, when you create this cutting edge, powerful tech, it
           | turns out that people are willing to pay gobs of money for
           | it. So if somehow OpenAI had managed to stay as a non-profit
           | (let's pretend training didn't cost a bajillion dollars),
           | they still would have lost all of their top engineers to
           | deeper pockets if they didn't pursue an aggressive
           | monetization strategy.
           | 
           | That's why I want to gag a little when I hear all this
           | flowery language about how AI will cure all these diseases
           | and be a huge boon to humanity. Let's get real - people are
           | so hyped about this because they believe it will make them
           | rich. And it most likely will, and to be clear, I don't blame
           | them. The only thing I blame folks for is trying to wrap "I'd
           | like to get rich" goals in moralistic BS.
        
             | JumpCrisscross wrote:
             | > _wonder if the original ethos of the non-profit structure
              | of OpenAI was a scam from the get-go, or just woefully
             | naive_
             | 
             | Based on behaviour, it appears they didn't think they'd do
              | anything impactful. When OpenAI accidentally created
              | something important, Altman immediately (a) actually got
              | involved to (b) reverse course.
             | 
             | > _if somehow OpenAI had managed to stay as a non-profit
             | (let 's pretend training didn't cost a bajillion dollars),
             | they still would have lost all of their top engineers to
             | deeper pockets if they didn't pursue an aggressive
             | monetization strategy_
             | 
             | I'm not so sure. OpenAI would have held a unique position
             | as both first mover and moral arbiter. That's a powerful
             | place to be, albeit not a position Silicon Valley is
             | comfortable or competent in.
             | 
             | I'm also not sure pursuing monetisation requires a for-
             | profit structure. That's more a function of the cost of
             | training, though again, a licensing partnership with, I
             | don't know, Microsoft, would alleviate that pressure
              | without requiring them to give up control.
        
             | s1artibartfast wrote:
              | Getting rich doing good is better than just getting rich.
             | People like both.
             | 
              | Which part are you skeptical about? That people also like
              | to do good, or that AI can do good?
        
             | meepmorp wrote:
             | It wasn't exactly a scam, it's just nobody thought it'd be
             | worth real money that fast, so the transition from noble
             | venture to cash grab happened faster than expected.
        
       | smeeger wrote:
        | The idea of Mark Zuckerberg being at the helm of digital
        | superintelligence sickens me.
        
         | ajkjk wrote:
         | the cringiest _possible_ future
        
       ___________________________________________________________________
       (page generated 2025-06-30 23:01 UTC)