[HN Gopher] The Contradictions of Sam Altman
       ___________________________________________________________________
        
       The Contradictions of Sam Altman
        
       Author : mfiguiere
       Score  : 161 points
        Date   : 2023-03-31 19:34 UTC (1 day ago)
        
 (HTM) web link (www.wsj.com)
 (TXT) w3m dump (www.wsj.com)
        
       | tdsone3 wrote:
       | https://archive.is/YTOeD
        
       | antirez wrote:
        | I believe you can't, at the same time, accuse OpenAI of
        | benefiting from ideas mostly invented elsewhere, and also claim
        | that they are in a predominant position. It does not make any
        | sense, and they are not naturally positioned for a monopoly.
        | Other companies just have to move their asses.
        
         | tomlue wrote:
          | I'm a fan of OpenAI, but this is nonsense. All of human
          | existence is mostly other people's ideas. Among a ridiculously
          | huge list of other things, OpenAI benefits from the mountains
          | of labor that made scalable neural networks possible.
          | 
          | Have they had their own good ideas? Definitely. Are they
          | benefiting from ideas mostly invented elsewhere? Also
          | definitely, just like everybody else.
        
           | antirez wrote:
            | I don't think you understood me. I'm with you, but given
            | that OpenAI is often accused of using public ideas for
            | profit, that, in turn, means they are obviously not in a
            | dominant position. So far they are just better than the
            | others.
        
           | version_five wrote:
           | I'm not really a fan of openAI, but I think we're seeing the
           | classic mistake of confusing product with technology. Steve
            | Jobs / Apple didn't invent anything you'd call new ideas
            | either (obvious cliche but so is the criticism). It's
            | execution and design once the tech reaches a certain level.
        
             | poopypoopington wrote:
              | I've heard Altman (on the Lex Fridman podcast) and Sundar
              | Pichai (on the Hard Fork podcast) say things to this
              | effect. The thing that OpenAI really managed to crack was
             | building a great product in ChatGPT and finding a good
             | product market fit for LLMs.
        
               | anon84873628 wrote:
               | What's the product market fit for LLMs, and how does
               | OpenAI fill it?
        
               | fooster wrote:
               | How much money is openai making from chatgpt premium now?
                | How much revenue are they making from the API?
        
               | Robotbeat wrote:
               | Well sure, but there still aren't any other LLMs at the
               | level of GPT3/3.5 let alone GPT-4. GPT3.5 just using the
               | API returns fantastic results even without the ChatGPT
               | interface (which isn't terribly hard to replicate, and
               | others have using the API).
               | 
               | There are dozens if not hundreds of companies that
               | could've done something profound like ChatGPT if they had
               | full access to GPT3/3.5. And honestly, OpenAI stumbled a
               | lot with ChatGPT losing history access, showing other
               | users' history... but that doesn't matter much as the
               | underlying technology is so profound. I think this really
               | is a case of the under-the-hood capability (GPT3/3.5/4)
               | mattering more than productization and execution.
               | 
               | (Now I think there are not a ton of companies that could
               | do what Microsoft is trying to do by expanding GPT4 to
               | power office productivity... that is a separate thing and
               | probably only about 3 companies could do that, at best:
               | Microsoft, Apple, and Google... and theoretically Meta
               | but their lack of follow through with making Metaverse
               | useful makes me doubt it.)
        
               | mustacheemperor wrote:
                | Hm, I wonder how much of the API's performance is
                | related to training/finetuning done by OpenAI aimed at
                | the ChatGPT product. I think the RLHF is partly product
                | design and partly engineering.
        
             | vitehozonage wrote:
             | Rather than execution or design perhaps this time it was
             | mainly about being unethical enough to sell out to
             | investors and unflinchingly gather enough data in one place
             | by carelessly ignoring copyright, authors' desires, privacy
             | regulations, etc.
        
             | brookst wrote:
             | It's a good comparison. And once again tech enthusiasts are
             | confused and outraged that the product people are getting
             | credit for tech they didn't invent. Once again missing the
             | forest that people buy products, not tech.
        
               | simonh wrote:
               | There's an awful lot of judgement, engineering and
               | technique that goes into a really well thought out
               | product. It's often deeply underestimated, and culture
               | makes a huge difference in execution. Bing/Sydney came
               | out after ChatGPT, based on exactly the same tech, but
               | was hot garbage.
        
               | mustacheemperor wrote:
               | It's interesting to see the ad implementation, I recall
                | some predictions that Microsoft would be particularly adept
               | at finding a way to integrate advertising organically.
               | Instead it just seems to have made the bot more stupid
               | because sponsored products are forced into its
               | recommendations.
               | 
               | I've gone back to just using the GPT API, unless I
               | absolutely need to search the internet or information
               | after 2021 for some reason.
        
               | brookst wrote:
               | I don't know, my family talks almost daily about how
                | amazing Bing chat is. The Sydney era was kind of crazy,
               | but the core product seems to be doing well.
               | 
               | It is definitely solving a different problem than chatgpt
               | though, and maybe a less inspiring problem. Chatgpt is
               | like an open world game where you can do anything; Bing
               | chat is just a point solution for the vicious spiral of
               | SEO and Google's profit motive that rendered search
               | results and web pages so useless.
        
               | FormerBandmate wrote:
               | Bing/Sydney is better than ChatGPT. It had serious bugs
               | in beta testing
        
               | simonh wrote:
               | It's a dramatically worse chat bot, but being able to
               | search the internet does give it an additional useful
               | capability, while limiting it to five interactions papers
               | over its psychotic tendencies.
        
             | tomlue wrote:
             | mostly agree. Though I wouldn't underestimate the tech and
             | engineering work behind OpenAI. That microsoft partnership
             | is no joke.
        
           | wslh wrote:
           | > OpenAI benefits from the mountains of labor ...
           | 
           | Can't you say the same about Google? Google lives from the
           | labor of others.
           | 
            | Not entering into the debate of whether this is ethical or
            | not, just saying that OpenAI is not much different. When
            | photo services such as Google Photos recognize the Eiffel
            | Tower in Paris, they are using images from others.
        
         | matthewdgreen wrote:
         | There have been many examples of companies inventing new
         | technology, then failing to take it to market until a
         | competitor copied the ideas. The classic example is Apple and
         | Xerox PARC. The criticism in this case is that while icons and
         | GUIs are obviously harmless (so it's good we got them out of
         | the lab!), maybe AI is the kind of tech we should have let
         | researchers play with for a while longer, before we started an
         | arms race that puts it in everyone's house.
        
         | ren_engineer wrote:
         | OpenAI acts like they are some underdog startup for PR purposes
         | while actually having access to effectively unlimited resources
         | due to their relationship with Microsoft, which rubs people the
         | wrong way
         | 
          | Plenty of other companies had released GPT-powered chatbots
          | like ChatGPT; they just couldn't offer them for free because
          | they didn't have a sweetheart deal with Microsoft for unlimited
          | GPUs. Google did drop the ball, though; they were afraid of
          | reputational risk. Google should have used DeepMind or another
          | spinoff to release their internal chatbot months ago
        
           | simonh wrote:
            | I think you're missing how profound a difference the
            | intensive RLHF training OpenAI did makes in ChatGPT.
           | Microsoft's Sydney seems to also be GPT 3.5 based, came out
           | after ChatGPT, and it was an utter dumpster fire on launch in
           | comparison.
           | 
           | Meanwhile nobody has even caught up to ChatGPT yet, not even
           | Microsoft whose resources are the secret sauce you think is
            | the game changer, and now 4.0 is out and has moved the ball
            | forward even more massively.
        
             | jamaliki wrote:
             | No, actually. Microsoft's Sydney is GPT 4 [1].
             | 
             | 1 - https://blogs.bing.com/search/march_2023/Confirmed-the-
             | new-B...
        
           | nomel wrote:
           | > to release their internal chatbot months ago
           | 
           | I'm not sure that would have been wise. Bard clearly isn't
           | ready.
        
             | sebzim4500 wrote:
              | Yeah, but if they'd released earlier that fact wouldn't
              | have been so embarrassing.
             | 
              | As it was, they initially just claimed that releasing a
             | competitor was irresponsible, then they eventually did it
             | anyway (badly).
        
         | rhaway84773 wrote:
         | I'm not sure why that doesn't make sense.
         | 
          | In a winner takes all market where the product is highly
          | complex (likely the case for any product developed since the
          | advent of computers, if not before), the predominant position
          | will almost certainly be taken by somebody. And since it's a
          | highly complex product, it's likely no one entity thought of
          | and/or implemented even close to a majority of the ideas
          | needed to make it work.
         | 
         | In fact, it's likely to be a random winner amongst 10-20
         | entities who implemented some of the ideas, and another
         | potentially larger number of entities who implemented equally
         | good, or even better, ideas, which happened to fail for reasons
         | that couldn't have been known in advance.
        
           | curiousllama wrote:
           | > In a winner takes all market
           | 
           | Just because Search was winner take all, doesn't mean AI will
           | be. What network effects or economies of scale are
           | unachievable by competitors? Besides, Alpaca showed you can
            | replicate ChatGPT on the cheap once it's built - what's
           | stopping others from succeeding?
        
             | tyfon wrote:
             | Yeah, I don't think this will be a corporate thing but a
             | private decentralized thing.
             | 
              | It's probably the worst fear of the likes of Google etc.: a
             | 100% local "search engine" / knowledge center that does not
             | even require the internet.
             | 
             | I've been running the 65B model for a bit. With the correct
             | prompt engineering you get very good results even without
              | any fine-tuning. I can run Stable Diffusion etc. fine too
              | locally. If anything will let us break free from the
              | corporations, it is this.
        
         | ftxbro wrote:
         | > I believe you can't, at the same time, accuse OpenAI of
          | benefiting from ideas mostly invented elsewhere, and also claim
         | that they are in a predominant position.
         | 
         | There used to be a team at Google called Google Brain and they
         | all left to go to OpenAI after the employee protests against
         | taking military AI contracts in 2018. Now Microsoft has those
         | contracts and funneled $10B to OpenAI from the CIA. OK that's a
         | little bit of exaggeration but not so much; I guess not _all_
          | employees left, and Google Brain still technically exists.
          | Also, some of the Brain employees went to other startups, not
          | only OpenAI.
        
           | FormerBandmate wrote:
           | Microsoft doesn't have $10 billion from the CIA. They're
           | splitting military cloud contracts with Amazon and other
           | startups, but it's just the same thing as corporate contracts
        
           | sebzim4500 wrote:
           | >funneled $10B to OpenAI from the CIA
           | 
           | Without some evidence to support it, this really sounds like
           | a conspiracy theory.
        
           | beebmam wrote:
           | Many of those employees have changed their minds about
           | military contracts in this new cold war era.
        
             | cowl wrote:
              | If I'm not mistaken, their opposition was on principle,
              | not on whether it was needed or not. So the fact that we
              | are in a new cold war era does not change that equation.
              | The principle is still the same. The only way for this
              | "change of mind" is if their opposition was due to "it's
              | not needed because the US has no rivals". Or, what most
              | probably happened, they realised that if they want to
              | keep working in this field there is no escape from those
              | kinds of implications, and they don't have the big bad
              | wolf Google to blame anymore.
        
               | jeremyjh wrote:
               | And then Russia invades Ukraine and people change their
               | minds about what is important, and what is possible.
        
               | cowl wrote:
                | But that's my point. If their opposition was not on
                | principle but based on the naive idea that "we will live
                | in peace and harmony", I really am afraid of what other
                | naive principles are at the base of their work and what
                | safeguards are being put in place for the AIs.
        
               | jeremyjh wrote:
                | There is such a thing as being principled and then
                | finding out you were naive.
        
               | mistermann wrote:
               | There is also the substantial effectiveness of
               | propaganda, and the fact that it is essentially
               | impossible to know if one's beliefs/"facts" have been
               | conditioned by it.
               | 
               | That most discussions on such matters typically devolve
               | rapidly into the regurgitation of unsound memes doesn't
               | help matters much either.
        
               | sebzim4500 wrote:
               | Sure, but it is equally likely that your old perspective
               | was the result of propaganda and your new one is a
               | rational adjustment in the face of new information. Or
               | that both are propaganda, I suppose.
        
               | cowl wrote:
                | Of course, but then you still must have some principles
                | and be as loud about how naive you were as you were when
                | you were protesting the thing in the first place.
        
               | josephg wrote:
               | Why must you be loud about changing your mind? Why not
               | just quietly realise you were wrong, have conversations
               | with friends about it and move on? That's what I'd do.
               | 
               | My life is a story. I'm under no obligation to share.
        
               | robbomacrae wrote:
               | Not the op but the quotes from John Maynard etc made me
               | think... if I listen to your original thesis and you
               | convince me (maybe you have some authority) and I go on
               | believing you, is it not harmful to me if you realize
               | your error and don't inform me? Boss: "why did you do X?"
                | Me: "But sir, you told me doing X was good!" Or to put
                | it differently, if you spread the wrong word and then
                | don't equally spread the correction, the sum of your
                | influence is negative.
        
             | pen2l wrote:
             | When I was a young person, I used to deride writings like
             | 1984 on the ground that the scenarios and stories presented
             | to carry the message were too far-fetched.
             | 
              | Reading your comment has set off an epiphany; I think I get
              | it now. There probably exists some higher-up person who is
              | thinking in these terms: we must always be in a state of
             | war, for if we are, the populace will want to be ready and
             | willing to fund the instruments of war. And we always
             | _want_ to be ready for war, because if ever we are not, we
             | lose our capability to win a potential future war. We must
             | even contribute our efforts to build these instruments of
             | war. War is constant. War is peace.
        
               | HyperSane wrote:
               | The more powerful your military is the less likely you
               | will need to use it to defend yourself.
        
             | InCityDreams wrote:
              | America! What is in your military now eventually ends up
              | in the hands of the civilian police.
             | 
             | *and other countries are following. Fucking, sadly.
        
               | sebzim4500 wrote:
               | What the hell are they going to do with an M1 Abrams?
               | 
               | Having said that, I dread to see what the NYPD manage to
               | achieve with an F-35B.
        
           | [deleted]
        
       | m3kw9 wrote:
        | OpenAI is getting rich; now everyone wants a piece, and everyone
        | is fighting over it like a billion-dollar inheritance fight. In a
        | roundabout way, if GPT5 is nearly as advertised, watch the govt
       | swoop in under national security guise
        
         | oldstrangers wrote:
         | Undoubtedly OpenAI already has some very close ties and/or
         | contracts with the DoD.
        
           | jstx1 wrote:
           | Any evidence for this, or are you just assuming that it's the
           | case?
        
             | roflyear wrote:
             | To me it overstates what is achieved. This isn't AGI.
             | Unsure how much the government would care about this.
        
         | unshavedyak wrote:
         | > if GPT5 is nearly as advertised, watch the govt swoop in
         | under national security guise
         | 
         | Re: GPT5, are there any .. reasonable/credible sources of
         | information on the subject? I've become deaf from all the
         | speculation and while i am very curious, i'm unsure if anything
         | substantiated has actually come out. Especially when
         | considering speculation from Sam himself.
        
           | blihp wrote:
           | There's barely any credible information about GPT4 (i.e.
           | OpenAI hasn't said very much about what's going on behind the
           | curtain) and there's absolutely none re: any releases beyond
           | that.
        
           | zarzavat wrote:
           | It's unlikely to be any time soon. Despite productization by
           | OpenAI, LLMs are still an active area of research. Research
           | is unpredictable. It may take years to gather enough
           | fundamental results to make a GPT-5 core model that is
           | substantially better than GPT-4. Or a key idea could be
           | discovered tomorrow.
           | 
           | Moreover, previous advances in GPTs have come from data
           | scaling: throwing more data at the training process. But data
           | corpus sizes have started to peak - there is only so much
           | high quality data in the world, and model sizes have reached
           | the limits of what is sensible at inference time.
           | 
           | What OpenAI can do while they are waiting is more of the easy
            | stuff, for example more multimodality: integrating DALL-E
           | with GPT-4, adding audio support, etc. They can also optimize
           | the model to make it run faster.
        
             | dougmwne wrote:
              | I keep hearing people claim we are at the end of corpus
              | scaling, but that seems totally unfounded. Where has it
              | been proven you can't run the training set through
              | multiple epochs in randomized order? Who's to say you
              | can't collect all the non-English corpora and have the
              | performance transfer to English? Who's to say you can't
              | run the whole damn thing backwards and still have it
              | learn something?
        
             | SamPatt wrote:
             | There are diminishing returns from compute time but it
             | looks like even though they are diminishing there's still a
             | fair bit on the table.
             | 
             | Though my guess is that GPT-5 will be the last model which
             | gains significantly from just adding more compute to the
             | current transformer architecture.
             | 
             | Who the hell knows what comes next though?
        
             | whiplash451 wrote:
             | You are missing the scaling on context size.
        
         | ftxbro wrote:
         | They've already swooped in.
         | 
         | I mean the conspiracy argument would be that the $10B isn't a
         | normal investment. It's a special government project investment
         | facilitated by OpenAI board member and former clandestine CIA
          | operative and cybersecurity executive Will Hurd, through his
          | role on the board of trustees of In-Q-Tel, the investment arm
          | of the CIA. It's funneled through Microsoft instead of through
         | Google in part because of Google's No-Military-AI pledge in
         | 2018 demanded by its employees, after which Microsoft took over
         | its military contracts including project Maven. The new special
         | government project, the Sydney project, is the most urgent and
         | ambitious since the project to develop nuclear weapons in the
         | mid twentieth century.
         | 
         | Of course I don't necessarily believe any of that but it can be
         | fun to think about.
        
           | koboll wrote:
           | Wait holy shit this is at least partially true though?
           | https://openai.com/blog/will-hurd-joins
        
           | nirushiv wrote:
           | Please stop spreading FUD and unsubstantiated rumours all
           | over this thread
        
             | paganel wrote:
             | Why wouldn't the US Government invest billions of dollars
             | in a technology that it sees as essential? What's FUD-y
             | about that? Most of our industry itself is the result of
             | the US Government's past investments for military-related
             | purposes.
             | 
             | Later edit: Also, article from 2016 [1]
             | 
             | > There's more to the Allen & Co annual Sun Valley mogul
             | gathering than talk about potential media deals: The
             | industry's corporate elite spent this morning listening to
             | a panel about advances in artificial intelligence,
             | following sessions yesterday dealing with education,
             | biotech and gene splicing, and the status of Middle East.
             | 
             | > Netscape co-founder Marc Andreessen led the AI session
             | with LinkedIn's Reid Hoffman and Y Combinator's Sam Altman.
             | The main themes: AI will affect lots of businesses, and
             | it's coming quickly.
             | 
             | > Yesterday's sessions included one with former CIA
              | director George Tenet who spoke about the Middle East and
             | terrorism with New York Police Department Deputy
             | Commissioner of Intelligence & Counter-terrorism John
             | Miller and a former chief of Israeli intelligence agency
             | Mossad.
             | 
             | So, yes, all the intelligence agencies are pretty involved
             | in this AI thing, they'd be stupid not to be.
             | 
             | [1] https://deadline.com/2016/07/sun-valley-moguls-
             | artificial-in...
        
               | ftxbro wrote:
               | Now seven years later Will Hurd and George Tenet are
               | currently the managing director and chairman respectively
               | of Allen & Co! More facts worth considering are in the
               | mysterious hacker news comment from the other day:
               | https://news.ycombinator.com/item?id=35366484
        
             | [deleted]
        
             | kneel wrote:
             | It would be irresponsible for intelligence agencies NOT to
              | involve themselves in AI. LLMs have the capability to
              | catalyze economic shockwaves on the same magnitude as the
              | internet itself.
             | 
             | Notice how OpenAI is open to many Western friendly
             | countries but not certain competitive challengers?
             | https://platform.openai.com/docs/supported-countries
        
               | FormerBandmate wrote:
               | Out of the BRICS, Brazil, India, and South Africa are
               | there. Russia and China aren't, but that's not really an
               | issue of "competitive challengers" so much as
               | dictatorships who are invading or threatening to invade
               | democracies
        
               | ChatGTP wrote:
               | Realise that America has invaded plenty of countries,
               | overthrown leaders, been a huge driver of claimed change
               | and oil industries, basically done whatever it wants and
               | continues to do so, pretty much based on being able to
                | print as much money as it likes, and if you don't like
               | that, you'll face the full force of the military
               | industrial complex.
               | 
               | Look at Snowden and Assange. They tried to show us what's
               | behind the curtain and their lives were wrecked.
               | 
               | The rhetoric on here about Russia and China = "bad guys",
               | no questions asked is overly simplistic. Putin is clearly
               | in the wrong here. But what creates a person like that? I
               | believe we are somewhat responsible for it.
               | 
                | People cite possible atrocities in Xinjiang, but what
                | about Iraq and Syria, North Korea, Vietnam: entire
                | countries destroyed. Incredible loss of life.
               | 
                | American attitudes are a huge source of division in the
                | world. Yes, so are China's and Russia's.
               | 
                | We cannot only see one side of a story anymore; it's
                | just too dangerous. As we get more powerful weapons,
                | and we do, we absolutely have to learn to understand
                | each other and work through diplomacy, with a more
                | open mind and peaceful outcomes that are beneficial
                | for all.
               | 
                | No, I'm not advocating for dictators, but you cannot
                | pretend that American invasions have always been
                | positive or well-intentioned, or that American
                | interests are always aligned with the rest of the
                | world's.
               | 
               | The arms races need to stop. Very quickly.
        
           | sebzim4500 wrote:
           | Is it not enormously more likely that Microsoft decided to
           | invest a small amount (to them) in a technology which is
           | clearly core to their future business plans?
        
           | xenospn wrote:
              | Who has time to come up with this stuff?!
        
             | poopypoopington wrote:
             | ChatGPT
        
             | seattle_spring wrote:
             | Most of it seems to come from /r/conspiracy, which
             | unsurprisingly has a lot of overlap with another subreddit
             | that starts with /r/cons*
        
               | the_doctah wrote:
               | I'd love to go through all the things Liberals labeled a
               | conspiracy in the last 5 years that actually became true,
               | but I don't have that kind of time today
        
               | slickdork wrote:
               | I'd settle for three examples with sources.
        
               | pjohri wrote:
                | Well, at least one: in the year 2000, I used to work for
                | Verizon, and a picture from one of the local network hubs
                | was circulated showing a bunch of thick cables tapping
                | into the network, alleging that the government was
                | listening to all calls Americans made. People made a lot
                | of fun of that photo until Snowden brought the details to
                | light.
        
       | [deleted]
        
       | mongol wrote:
        | How did he become so rich? His Wikipedia page does not have
        | much detail on this.
        
         | [deleted]
        
         | ftxbro wrote:
         | Paul Graham is his Les Wexner.
         | 
         | "Sam Altman, the co-founder of Loopt, had just finished his
         | sophomore year when we funded them, and Loopt is probably the
         | most promising of all the startups we've funded so far. But Sam
         | Altman is a very unusual guy. Within about three minutes of
         | meeting him, I remember thinking 'Ah, so this is what Bill
         | Gates must have been like when he was 19.'"
         | 
         | "Honestly, Sam is, along with Steve Jobs, the founder I refer
         | to most when I'm advising startups. On questions of design, I
         | ask "What would Steve do?" but on questions of strategy or
         | ambition I ask "What would Sama do?"
         | 
         | What I learned from meeting Sama is that the doctrine of the
         | elect applies to startups. It applies way less than most people
         | think: startup investing does not consist of trying to pick
         | winners the way you might in a horse race. But there are a few
         | people with such force of will that they're going to get
         | whatever they want."
        
           | version_five wrote:
            | If anyone has read "A Wild Sheep Chase" by Haruki Murakami,
            | there was the idea of being possessed by the sheep, which
            | turned one into a forceful businessman. I have only met a
            | couple of people like this, but as in the quote it's
            | immediately obvious, and you see why they are where they are.
        
             | gautamcgoel wrote:
              | In my opinion, this is his most underrated book. It's
              | very funny; highly recommend!
        
           | bmitc wrote:
           | I feel I'd take every word of that with a massive grain of
           | salt. It reeks of cult of personality.
        
             | dilap wrote:
             | I did take it with a huge grain of salt (and an eye-roll)
             | reading it years ago. However, given where sama and OpenAI
             | are today, perhaps pg was right all along!
        
           | jasmer wrote:
            | Loopt was a failure. Political skills. It's a stream of PR,
            | not anything operationally applied or performant.
        
         | pbw wrote:
         | I don't have the source, it was a video interview, but Sam said
         | he has personally invested in around 400 startups. And it says
         | here he employs "a couple of dozen people" to manage his
          | investments and homes. At that scale I think you yourself
         | basically are a venture capital firm.
        
           | jeremyjh wrote:
           | You are just describing a rich person, not how he became
           | rich.
        
             | elorant wrote:
             | Those 400 startups are gradual investments. They didn't all
             | happen overnight. If you have really early access to some
             | very promising startups you don't need a shitton of money
             | to invest in them.
        
         | naillo wrote:
         | I'd imagine he got to invest early in a lot of successful YC
         | companies during his time there.
        
         | ohgodplsno wrote:
         | Parents were rich, sent their child to Stanford and used their
         | connections to let him build connections to other rich people,
         | founded a shitty startup in the middle of a period where any
         | rich kid making a social media company would get bought out for
         | dozens of millions, rest is history.
         | 
         | He's always been rich.
        
           | 876978095789789 wrote:
           | On Loopt's questionable acquisition:
           | https://news.ycombinator.com/item?id=3684357
        
         | 876978095789789 wrote:
         | He was in the first batch of YC startups with a feature phone
         | location-aware app called Loopt. Once smartphones came along,
          | it became largely obsolete and irrelevant,
         | but still got acquired under questionable circumstances anyway,
         | enough for Altman to get rich:
         | https://news.ycombinator.com/item?id=3684357
         | 
         | From there he became a VC and ultimately president of YC.
        
           | 876978095789789 wrote:
           | I'm not sure if I should elaborate further, but in the last
           | years of Loopt, it actually devolved into a hook-up app
           | servicing the gay community, basically Grindr before Grindr:
           | https://news.ycombinator.com/item?id=385178
           | 
           | I guess he was ahead of his time, in a way? Still, I've never
           | forgotten that this silly "success" was the first big exit of
           | the most touted YC founder, ever.
        
             | sebzim4500 wrote:
             | I think that's just being ahead of your time, no
             | qualification needed.
        
       | 29athrowaway wrote:
       | Should we also talk about the contradictions of WSJ?
       | 
       | The only way to never contradict yourself is to never say
       | anything.
       | 
        | Now, is AI the right area in which to "move fast and break
        | things"? No.
        
       | molodec wrote:
        | Altman "hadn't been to a grocery store in four or five years". He
        | is so out of touch with the real world, people's needs and
        | desires, and he fantasizes about the future world based on the
        | assumption that most people want to be free "to pursue more
        | creative work." I
       | think most people don't actually dream about pursuing creative
       | work. Being absolutely "free" of work doesn't make one more
       | creative. Real problems and constraints force people to come up
       | with creative solutions.
        
       | leobg wrote:
       | The contradiction I noticed during his Lex interview was him
       | talking about being attacked by Elon Musk. He said it reminded
       | him of how Elon once said he felt when the Apollo astronauts
       | lobbied against SpaceX. Elon said it made him sad. Those guys
        | were his heroes. And that he wished they would come visit and
        | look
       | at the work SpaceX was doing. I found that comparison by Altman
       | disingenuous. First, he didn't seem so much sad as he seemed
       | angry. At one point in the interview, he said that he thought
       | about some day attacking back. That's not at all how Elon had
       | felt about those astronauts. And second, why doesn't Altman just
       | invite Elon and show him the work they are doing? Wouldn't take
       | more than a phone call.
        
       | DubiousPusher wrote:
       | Stop asking the market to look after collective interest. This is
        | the job of the government. The ultimate effect of virtue-wanking
        | CEOs and corporate governance is to deceive people into thinking
       | democracy is something that can be achieved by for profit
       | organizations and that they can forsake the formal binding of
       | collective interest through law.
       | 
       | It's nice if people are nice but it is not a bulwark of the
       | collective good. It is a temporary social convenience. The higher
        | that niceness sits in the social order, the greater its
        | contemporary benefit, but also the more it masks the
        | vulnerability of that social benefit.
       | 
        | It matters if Sam Altman is Gandhi or Genghis Khan in a concrete
       | way but you, as a citizen must act as if it doesn't matter.
       | 
       | If AI poses a danger to the social good, no amount of good guy
       | CEOs will protect us. The only thing that will is organization
       | and direct action.
        
         | erlend_sh wrote:
         | The market should absolutely be looking after the collective
          | interest. That's what we, the collective, created the market
          | for in the first place!
        
           | DubiousPusher wrote:
            | The market will follow its incentives. You shape those
            | incentives with laws. If you want a market that allows people
            | to take risks, you do that by inventing the limited liability
            | corporation, not by telling people to be nice and not to
            | pursue their debts against their debtors' personal property.
            | If you want a market that discourages monopoly, you do that
            | by regulating combinations, not by writing articles about how
            | "good businessmen" don't act anticompetitively.
        
         | jokethrowaway wrote:
         | That's completely misguided. Consumers purchase from companies
         | they like.
         | 
         | Why do you think companies are all woke and virtue signalling?
         | Because they interpreted the vocal woke minority as the voice
         | of the country and they want to capture that market.
         | 
          | Corporations will absolutely try to do good in order to
          | maximise their profit.
         | 
         | And this is ignoring private charities which get more done than
         | any government has ever done.
         | 
         | Collective interest is a nice concept but the government, like
         | all large organizations, is not capable of moving in any
         | direction. Whatever you need done, chances are someone's cousin
         | will get a job, the job will be done poorly and the taxpayers
         | will pay more in taxes to fix the problem again and again and
         | again.
         | 
         | A government can't fail and it's therefore inefficient.
        
           | runarberg wrote:
           | Your post ignores the existence of democracies. Governments
           | fail all the time. In a democracy failures of government will
           | often yield a total collapse and complete replacement. If a
           | failure is spectacular enough, these failure often come with
           | constitutional reforms or even revolutions.
           | 
           | In addition, democratic governments (and even many autocratic
           | governments) have some levels of distribution of power. Your
           | small municipal government may very well end up being
            | absorbed into your neighboring municipality because it is more
           | efficient. Maybe an intermediate judicial step is introduced
           | at a county, or even country level.
           | 
            | Governments do try, and often succeed in making your
            | freedom and your interactions with society at large as
            | efficient as possible, while trying to maximize your
            | happiness. (Although I'll admit that way too often democratic
            | governments act in a way that maximizes the profits of the
            | wealthy class more than your happiness.)
        
         | clarkmoody wrote:
         | The third way is to build AI technology that empowers the
         | individual against state and corporate power alike. Democracy
         | got us here. It cannot get us out.
        
           | Quekid5 wrote:
           | Who do you imagine has the majority of compute power?
        
           | maxbond wrote:
           | That's not a stable equilibrium. Blogs gave individuals
           | asymmetric control over disseminating information - it didn't
           | last. If you don't create institutions and power structures
           | that cement and defend some ability of individuals, it will
           | decay as that power is usurped by whatever institutions and
           | power structures benefit from doing so.
        
           | rhcom2 wrote:
           | > AI technology that empowers the individual against state
           | and corporate power alike
           | 
           | What does this even mean though? Seems like hand waving that
           | "AI" is just going to fix everything.
        
             | bredren wrote:
             | It's much easier to imagine progressed applications powered
             | by recent AI that would provide outsized civic weaponry.
             | 
              | Identification of unusual circumstances or anomalous public
              | records seems ripe.
             | 
              | But more straightforward and customized advice on how to
              | proceed on any front--a super wikihow--makes anyone more
              | powerful.
             | 
              | Today, complicated solutions can sometimes require
              | extensive deep research and distillation of material.
             | 
             | So much so that DIY folks can seem like wizards to those
             | who only know of turn key solutions and answers.
             | 
             | At the risk of causing a draft from further hand waving: a
             | bigger tent can mean a higher likelihood of a special
             | person or group of folks emerging.
        
           | joe_the_user wrote:
           | _The third way is to build AI technology that empowers the
           | individual against state and corporate power alike_
           | 
           | Ha, I'm OK with that as long as I get to pick the individual!
           | 
           | I mean, an AGI under the control of some individual could
           | indeed make them more powerful than a corporation or even a
            | state, but whether it increases average individual freedom is
           | another question.
        
           | felix318 wrote:
           | Such techno-utopianism... political power belongs to people
           | who control the guns. There is no way around it.
        
           | smoldesu wrote:
           | So long as you buy your inferencing hardware from another
           | private party, I'd wager you're helpless against both state
           | and corporate power.
        
           | YawningAngel wrote:
           | Given that we don't have such AI technology at present, would
           | it not be prudent for us to assume that it may not be
           | available imminently and plan for how we can address the
           | problem without it?
        
         | m3kw9 wrote:
          | It's all about risk and reward; nobody who owns OpenAI is
          | going to say "let's pause." It also never stopped the country
          | that first invented the nuclear bomb. Sure, they could have
          | paused, and then Russia would have done it and had it first,
          | and then said "thanks for pausing."
        
           | DubiousPusher wrote:
           | I don't know why people think nukes are a good example here.
            | Nukes were outright birthed within the government, at the
            | height of its intervention into the market, at the height of
            | its reach into the daily lives of every American, at the
            | height of American civic engagement.
           | 
           | Policy makers spent a huge amount of time creating a
           | framework for them. Specifically there was a huge debate
           | about whether they should be under the direct control of the
           | military. The careful decision to place them in civilian
           | control under the Department of Energy is probably part of
           | the reason they haven't been unleashed since.
        
         | bennysonething wrote:
         | Though we do vote with our wallets too.
        
           | avgcorrection wrote:
           | How great then that some wallets are millions of times larger
           | than others.
        
           | bloodyplonker22 wrote:
           | Yes, but for your own immediate benefit. People, in general,
           | are just not trained to think long term and "for the greater
           | good".
        
         | josephg wrote:
         | Why not both?
         | 
         | I agree government is useful and needed sometimes. But laws are
         | slow, blunt instruments. Governments can't micromanage every
         | decision companies make. And if they tried, they would hobble
         | the companies involved.
         | 
         | The government moves slowly. When AGI is invented (and I'm
         | increasingly convinced it'll happen in the next decade or two),
         | what comes next will not be decided by a creaky federal
         | government full of old people who don't understand the
         | technology. The immediate implications will be decided by Sam
         | Altman and his team. I hope on behalf of us all that they're up
         | to the challenge.
        
           | TheOtherHobbes wrote:
           | They'll be decided by AI, not by governments, corporations,
           | or individuals.
           | 
            | There's still a kind of wilful blindness about what AI really
            | means. Essentially _it's a machine that can mimic human
            | behaviours more convincingly than humans can._
           | 
           | This seems like a paradox, but it really isn't. It's the
           | inevitable end point of automatons like Eliza, chess, go, and
           | LLMs.
           | 
           | Once you have a machine that can automate and mimic social,
           | political, cultural, and personal interactions, that's it -
           | that's the political singularity.
           | 
            | And that's true even if the machine isn't completely reliable
            | and bug-free.
           | 
           | Because neither are humans. In fact humans seem predisposed
           | to follow flawed sociopathic charismatic leaders, as long as
           | they trigger the right kinds of emotional reactions.
           | 
           | Automate that, and you have a serious problem.
           | 
           | And of course you don't need sentience or intent for this.
           | Emergent programmed automated behaviour will do the job just
           | fine.
        
           | jasonhansel wrote:
           | > And if they tried, they would hobble the companies
           | involved.
           | 
           | Well, yeah, that's the point of regulation: to limit
           | corporate behavior. There are plenty of other highly-
           | regulated industries in the US; why shouldn't AI be one of
           | them?
        
           | matthewdgreen wrote:
           | >The immediate implications will be decided by Sam Altman and
           | his team. I hope on behalf of us all that they're up to the
           | challenge.
           | 
           | Will they really be determined by OpenAI? So far what Altman
           | has accomplished with OpenAI is to push a lot of existing
           | research tech out into the open world (with improvements, of
           | course.) This has in turn forced the hands of the original
           | developers at Google and Meta to push their own research out
           | into the open world and further step up their internal
           | efforts. And that in turn creates fierce pressure on OpenAI
           | to move even faster, and take even fewer precautions.
           | 
           | Metaphorically, there was a huge pile of rocks perched at the
           | top of a hill. Altman's push got the rocks rolling, but
           | there's really no guarantee that anyone will have much say in
           | where they end up.
        
         | whiddershins wrote:
         | The problem is that I don't believe we have any organization in
         | government currently staffed and active that I trust to take
         | any action that will benefit the public at large.
         | 
         | The problem space is too confusing, and the people making
         | decisions are too incompetent. It's a huge skills and knowledge
         | gap.
         | 
         | And that's without factoring in corruption and bad intentions.
        
         | merlinoa wrote:
         | This is socialist nonsense. The government won't protect
         | anything outside of their interests[0]. Free markets are good
         | and necessary for human flourishing[1].
         | 
         | [0] https://www.amazon.com/Creature-Jekyll-Island-Federal-
         | Reserv... [1] https://www.amazon.com/Human-Action-Ludwig-Von-
         | Mises/dp/1614...
        
         | dehrmann wrote:
         | > Stop asking the market to look after collective interest.
         | This is the job of the government.
         | 
         | One of the main roles of government is to step in where markets
         | fail.
        
           | nateabele wrote:
           | > _One of the main roles of government is to step in where
           | markets fail._
           | 
           | Because that's worked out great so far?
           | 
           | In every case I can think of, the government (usually because
           | it's already captured) only ever serves to further exacerbate
           | the issue.
        
             | liketochill wrote:
             | Has a government never prevented a merger that would have
             | created a monopoly?
        
             | mafuy wrote:
              | Seriously? You can't think of how it was useful that the
              | government mandated that factory doors must remain unlocked
              | to help prevent human deaths in case of a fire? You can't
              | think of the advantages of governmental food and medicine
              | safety regulations?
        
               | whiddershins wrote:
                | The government has had some wins, and it would be naive
                | to say it hasn't.
               | 
               | They have also had many catastrophic failures.
               | 
               | It's unclear whether in the end regulations trend towards
               | net benefit, but it does seem likely that the more
               | nebulous a problem, the harder it is for government to
               | get it right. Or anyone, for that matter. But especially
               | government because the feedback loop is so slow and bad.
        
           | DubiousPusher wrote:
            | That's literally my point. When you want behavior that is
            | contrary to market incentives, you need laws, not
            | guilt-ridden editorials.
        
         | hackerlight wrote:
         | > This is the job of the government. The ultimate effect of
         | virtue wanking CEOs and corporate governance is to deceive
         | people into thinking democracy is something that can be
         | achieved by for profit organizations
         | 
         | Sam Altman says the opposite of what you're insinuating, if by
         | "virtue wanking CEO deceiving people" you are referring to the
         | subject of the article, Sam Altman. He says he wants a global
         | regulatory framework enacted by government and decided upon
         | democratically.
        
         | yafbum wrote:
         | I agree that it's the government's role but I think you can
         | look a bit beyond the law itself, which is often hard to get
         | right, especially in very fresh new domains. Some nice
         | behaviors can be induced by mere fear of government
         | intervention and fear of future laws, and I think we're seeing
         | some of that now.
        
         | AussieWog93 wrote:
         | This is a really cynical take.
         | 
         | CEOs and companies can, and should, act ethically. Not just
         | because it's the "right thing to do", but because it's the best
         | way to guarantee the integrity of the brand in the long term.
        
           | DubiousPusher wrote:
            | I feel I made it clear in my post that individual integrity
            | matters and has real consequences. But you as a citizen have
            | (1) no way of validating a CEO's real intentions and (2) no
            | recourse when that CEO fails to live up to those intentions.
            | If you only fight for the protections you want once you need
            | them, you will be at a serious disadvantage in winning them.
        
           | Bukhmanizer wrote:
           | I'm not sure you could ever say that a company can act
           | ethically. People within the company may act ethically, but
           | the company itself is just a legal entity that represents a
           | group of people. The company has no consideration of ethics
           | to act ethically.
           | 
           | A company that is composed of 100% ethical actors may one day
           | have all their employees quit and replaced with 100%
           | unethical actors. Yet the fundamental things that make the
           | company _that company_ would not have changed.
        
           | jasonhansel wrote:
           | If unelected CEOs have more power than the democratically
           | elected government, we aren't in a democracy. That's the
           | problem.
        
           | jcz_nz wrote:
            | CEOs and companies can act ethically while that aligns with
            | the interests of the shareholders. The reality is that at
            | some point this becomes impossible, even for those with the
            | best intentions. "Don't be evil" ring any bells?
        
           | sbarre wrote:
           | History proves that we have way more CEOs and companies
           | acting out of self-interest, against the common good, than
           | otherwise.
           | 
            | So yeah, they _should_, but they don't.
        
           | LapsangGuzzler wrote:
           | > CEOs and companies can, and should, act ethically.
           | 
           | I can and should always drive the speed limit. But that
           | doesn't mean I do, which is why highway patrol exists, to
            | keep people in check. "Should" is such a worthless word in
            | these discussions: if you believe that an executive needs to
            | act a certain way, but you don't believe it strongly enough
            | to place some sort of check on them, then you must not
            | believe in the importance of their good behavior that
            | strongly.
        
         | dantheman wrote:
         | In general governments have done far more harm to individuals
         | than anyone in the market. The job of the government is to
         | protect your individual rights not the collective interest.
        
           | Quekid5 wrote:
           | I refer you to Thomas Midgley Jr.
        
           | piloto_ciego wrote:
           | I don't know that this is true.
           | 
           | At very least they both share immense responsibility for
           | causing individual harm. Sure the government may start a war,
           | but that war can't happen without bombs and bullets, and in
           | America at least those factories aren't run by the
           | government. There is an intermediate step oftentimes, but I
           | don't think that necessarily disconnects companies from
           | responsibility.
           | 
           | If you work at a guided bomb factory you may not be the
           | person dropping it, but you are responsible for the
           | destruction it causes in a small way.
           | 
            | Also, if global warming kills us all, then it is likely that
            | the oil companies bear some responsibility for it, right?
           | 
           | Government sucks - I agree with that statement, but we
           | shouldn't act like corporations are appreciably less
           | responsible.
        
             | fyloraspit wrote:
             | They are worse together. Achieving some fine balance of
             | corporations may seem somewhat utopian but we are pretty
             | far from utopia in the current day.
             | 
              | Building a mega-corporation without big government, I would
              | argue, is basically impossible. And local-level governance
              | is more likely and potent without big
             | government. Again though, all of that is quite hard to
             | achieve / see how to achieve when people with existing
             | power enjoy the status quo control more of the levers than
             | the masses, including the ones used to influence the
             | masses.
        
             | jokethrowaway wrote:
             | The market responds to need.
             | 
              | If nobody were able to socialise the cost of going into a
              | foreign country and killing people, there would be no war.
             | 
             | If the government didn't steal my money against my will on
             | threat of incarceration, there is no way in hell I'd spend
             | my money on bullets to kill someone's son in another
             | country.
        
           | syzarian wrote:
           | The absence of government just leads to a situation in which
           | some group takes control of a given area. In effect
           | government will then exist again. During the absence of
           | government there will be chaos and rampant crime.
        
             | jokethrowaway wrote:
              | I'd take a local firm offering to protect me for money over
              | one that manages a large part of a continent.
             | 
             | The small one will redistribute my money where I live at
             | least.
             | 
             | Besides, there is an alternative model where there are
             | competing groups of people and I can pick the best among
             | them based on price and services.
        
               | krapp wrote:
               | >Besides, there is an alternative model where there are
               | competing groups of people and I can pick the best among
               | them based on price and services.
               | 
               | In the absence of government, what's stopping these
               | groups from simply joining forces into a cartel, getting
               | some armed thugs and making you an offer you can't
               | refuse? History suggests that to be a far more likely
               | scenario. Oligarchy, rather than competition, is the
               | natural state of capitalism.
        
           | majormajor wrote:
           | That's just quibbling over what "your individual rights" are;
           | where does the line get drawn between "exercising my right"
           | and "having my rights infringed on by the actions of
           | another." There is no shortage of harm done by "anyone in the
           | market" today, whether it's currently illegal and we call it
           | "crime" instead of just a person exercising their freedom, or
           | whether it's harm that isn't regulated today.
        
           | 7e wrote:
           | Promoting the general welfare is literally discussed in the
           | first paragraph of the U.S. Constitution. The Bill of Rights
           | came later.
        
             | clarkmoody wrote:
             | The Constitution is just a piece of paper. For every
             | thousand steps the Congress, regulators, and state
             | assemblies take in the direction of tyranny, the courts
             | claw back one or two.
        
               | [deleted]
        
         | atmosx wrote:
         | If we could connect social responsibility with the stock, it
         | would matter a great deal :-)
        
         | deltree7 wrote:
          | chatGPT, like Google search, has essentially democratized
         | knowledge and wisdom to every human on planet earth.
         | 
         | OTOH, it's the governments that have banned it.
         | 
          | Do you think the citizens of Italy are better off because of
         | chatGPT?
         | 
         | Corporations are more benevolent than government
        
         | davnicwil wrote:
         | I don't know, isn't history really just a series of specific
         | people doing concrete actions?
         | 
         | Do you think on some level the idea of some abstract
         | 'government' taking care of things is just a narrative we apply
         | to make ourselves feel better?
         | 
         | Sure individual decision makers _in_ that government can
         | concretely affect reality, but beyond that are we just telling
         | a story and really nobody is  'in control'?
        
           | DubiousPusher wrote:
            | The actions that individuals take on behalf of government are
           | a direct reflection of the "abstract" policies and laws of
           | that government. If you cannot discern this from 20th century
           | history I don't know what to tell you.
        
           | runarberg wrote:
           | > I don't know, isn't history really just a series of
           | specific people doing concrete actions?
           | 
            | That is like saying: "Aren't human brains just a series of
            | neurons firing at specific moments?"
           | 
           | History is as much--if not more--about interactions between
           | people, feedback loops, collective actions, collective
           | reactions, environmental changes, etc. I would argue that the
           | individual is really really insignificant next to the system
           | this individual resides under, and interacts with.
        
           | Mezzie wrote:
           | > beyond that are we just telling a story and really nobody
           | is 'in control'?
           | 
           | Basically. Or at least that's the impression I'm left with
           | after spending a couple of years in politics work.
           | 
           | I had a nice long breakdown.
        
             | trendroid wrote:
              | Possible to elaborate?
        
               | Mezzie wrote:
               | tl;dr: Nobody is steering the ship because they don't
               | know they're on a ship. Or that the ocean exists.
               | 
               | It's hard without doxxing myself or calling out specific
               | people and organizations which I'd rather not because I'm
               | a nobody and can't afford lawsuits, but for various
                | reasons I ended up in political education and marketing for
               | civics advocacy. Ish. To be semi on topic, I know some
               | people who are published in the WSJ (as well as the
                | people who actually _wrote_ the pieces). I'm also a 3rd
               | generation tech nerd in my mid 30s so I'm very
               | comfortable with the digital world - easily the most so
               | outside of the actual software engineering team.
               | 
               | I've spoken with and to a lot of politicians and
               | candidates from across the US - mostly on the local and
               | state level but some nationally. And journalists from
               | publications that are high profile, professors of legal
               | studies, heads of think tanks, etc.
               | 
               | My read of the situation is that our political class is
               | entangled in a snare of perverse disincentives for action
               | while also being so disassociated from the world outside
               | of their bubble that they've functionally no idea what's
               | going on. Our systems (cultural, political, economic,
               | etc.) have grown exponentially more complex in the past
               | 30 years and those of us on HN (myself included)
               | understand this and why this happened. I'm a 3rd
               | generation tech nerd, I can explain pretty easily how we
               | got here and why things are different. The political
               | class, on the other hand, has had enough power to not
               | need to adapt and to force other people to do things
               | their way. If your 8500 year old senator wants payment by
               | check and to send physical mail, you do it. (Politicians
               | and candidates that would not use the internet were
               | enough of a problem in _2020_ that we had to account for
               | it in our data + analyses and do specific no tech
                | outreach). Since they didn't know how the world is
                | changing, they also _haven't been considering the
                | effects of the changes at all_.
               | 
               | Furthermore, even those of them that have some idea still
               | don't know how to problem solve _systems_ instead of
               | _relationships_. Complex systems thinking is _the_ key
               | skill needed to navigate these waters, and _none of them
                | have it_. It's fucking _terrifying_. At best, they can
               | conceive of systems where everything about them is known
               | and their outputs can be precisely predicted. _At best_.
               | Complex systems are beyond them.
               | 
               | Add to this that we have a system which has slowly ground
               | itself to a deadlocked halt. Congress has functionally
               | abandoned most of its actual legislative duties because
               | it's way better for sitting congresspeople to not pass
               | any bills - if you don't do anything, then you don't piss
               | any of your constituents off. Or make mistakes. And you
               | can spend more time campaigning.
               | 
               | I left and became a hedonist with a drug problem after a
               | very frank conversation with a colleague who was my
               | political opposite at the time. I'm always open to being
               | wrong, and hearing that they didn't have any answer
               | either was a very 'welp, we're fucked' moment. I'm
               | getting better.
        
           | olao99 wrote:
           | history is a simplified, prettified story that we tell
           | ourselves about how things happened.
        
           | beepbooptheory wrote:
           | The more we remember the possibility of true collective self
           | determination, the more likely we are to survive all this
           | mess we're making.
           | 
           | These days we are constantly bombarded by this contradiction
           | of individualism being primary and desirable, but at the same
           | time impotent in the face of the world this individualism has
            | wrought. And it's all a convenient way to demoralize us and
           | let us forget how effective motivated collective interest is.
           | Real history begins and ends with the collective!
        
             | DubiousPusher wrote:
             | Yes. There is a reason that when Britain felt threatened by
             | the turmoil in France they didn't just bar unions or
             | political clubs. They banned "combination" almost entirely
             | in general.
        
           | invig wrote:
           | "think on some level the idea of some abstract"
           | 
           | This! So much this! These conversations are being held at
           | such a high level of abstraction that they don't make sense.
           | It's one giant "feels" session.
        
             | DubiousPusher wrote:
             | Right. The National Health, turn of the century sanitation,
             | widespread vaccination and the EPA are just all about "the
             | feels".
        
         | TaylorAlexander wrote:
         | Sure but in my country (USA) the government is hopelessly inept
         | at regulating technology. We still don't have privacy
         | regulations and now to work around this they're trying to ban
         | specific foreign apps instead of protecting us from all apps!
         | I'd honestly be horrified if they tried to regulate AI. They
         | would be in bed with Facebook and Microsoft and they'd somehow
         | write legislation that only serves to insulate those companies
         | from legal repercussions instead of doing anything to protect
         | regular people. As far as I can tell it is the view of congress
         | that big tech can to whatever they want to us as long as the
         | government gets a piece.
        
           | alex_sf wrote:
            | There is no government that _isn't_ hopelessly inept at
            | regulating technology.
        
           | DubiousPusher wrote:
            | Agreed. The US has backslid since the 20th century
            | towards an elitist Republic and away from democracy. But even
           | in the US, collective action has a better track record than
           | "altruism".
        
             | TaylorAlexander wrote:
             | Sometimes I wonder if the back slide narrative is really
             | accurate, or if we're looking back at the myth of history
             | rather than the facts. When the country was founded, only
             | white men could vote and people of color were legally
             | property with no rights. That's obviously not democracy, so
             | I question at what point after that but before today we
             | really had democracy to have slid back from.
        
               | bugglebeetle wrote:
                | When we realize it's really only since about the 1970s
               | that we had full enfranchisement and political
               | participation of all citizens, this becomes more obvious.
               | "Coincidentally," this enfranchisement was followed by
                | the Volcker shock and then the Reagan administration, both
               | of which led to the decimation of labor's political power
               | and share of the economic pie.
        
       | Overtonwindow wrote:
        | AI is not true AI... at the moment. ChatGPT is inherently biased
       | by its developers, which means at least half the population may
       | not trust its answers. For true AI he will have to give it
       | autonomy, and I'm more interested if Altman is ready to live with
       | an AI he cannot control.
        
       | ur-whale wrote:
       | https://archive.is/Quo4t
        
       | gardenfelder wrote:
       | https://archive.ph/YTOeD
        
         | eternalban wrote:
         | None of archive.* are working for me - cloudflare dns issues.
         | Anyone else has access issues?
         | 
         | [thanks to those who replied. strangely stopped working for me
         | since yesterday [US]. can you post the ip you see?
         | 
         | Cloudflare returns a 1001 error: "Ray ID: 7b128f151e4b0c90 *
         | 2023-04-01 17:30:15 UTC" ]
        
           | version_five wrote:
           | Works for me in western europe
        
           | wslh wrote:
           | No issue here (Buenos Aires).
        
           | marginalia_nu wrote:
           | Works for me
        
         | wslh wrote:
          | Sidenote: I just suggested to archive.is that it would be great
          | to have the capability to render it through services such as
         | Pocket.
        
       | ThomPete wrote:
       | Listening to Sam talking with Lex Fridman about the dangers and
       | ethics of AI while his company destroys entire industries as a
       | consequence of their decision to keep GPT4 closed source and
       | spitting out an apex api aggregator is one for the history books.
       | 
       | Well played :)
        
         | nomel wrote:
         | > destroys entire industries as a consequence of their decision
         | to keep GPT4 closed source
         | 
          | Could you expand on that?
        
           | avgcorrection wrote:
            | Like contemporary language models, some HN commenters read
            | the text in isolation, extrapolate about "what that means",
            | and then immediately jump to the conclusion that whatever
            | real-world consequences (beyond the text) they imagine will
            | eventually happen have in fact already happened--in effect,
            | they're hallucinating.
        
       | washywashy wrote:
       | OpenAI/ChatGPT seem to be very creative based on variations it
       | can make on "things" that already exist. I'm just curious if we
       | see AI being truly creative and making something "new". Perhaps
       | everything is based on something though, and that's a rough
       | explanation for this creativity. Maybe AI's true creativity can
       | come from the input prompts of its "less intelligent", but more
       | flexibly creative users.
        
       | OscarTheGrinch wrote:
       | Perhaps if more CEOs / controlling shareholders were criminally
       | liable for damage caused by their products, like the Sackler
       | family, they wouldn't be so gung-ho.
        
         | leetharris wrote:
         | What a ridiculous take and a slippery slope.
         | 
         | Should car manufacturers be liable for drunk drivers? Should
         | kitchen knife manufacturers be responsible for stabbings?
         | 
         | Your idea is great if you want your country to be left behind
         | entirely in innovation.
        
           | dclowd9901 wrote:
           | Worth reminding that the slippery slope argument is not a
           | valid argument at all.
           | 
           | As with standard slippery slope reasoning, you jump to the
           | most extreme interpretation. Yet reality shows us that you
           | cannot, in fact, boil a frog, because at some point it just
           | gets too fucking hot.
           | 
           | Should car manufacturers be liable for drunk drivers? Maybe,
           | if they include a space in their vehicle specifically to
           | store and serve alcohol.
           | 
           | Should kitchen knife manufacturers be responsible for
           | stabbings? No. But no reasonable person would ever suggest
           | they should. I might remind you also that "reasonable
           | standard" is a legal concept.
        
             | robertlagrant wrote:
             | Without defining the reasonable standard, it remains a
             | silly idea in this case.
        
       | iibarea wrote:
       | There's tons of extremely effective marketing around who this guy
       | is, what he stands for - and so I'd instead look at what he's
       | done. He took a non-profit intended to offset the commercial
       | motives driving AI development, and turned it into a for profit
       | closely tied to Microsoft. I think he's an extremely shrewd
       | executive and salesman, but nothing he's done suggests any
       | altruistic motivations - that part always seems to be just
       | marketing, and always way down the road.
        
         | startupsfail wrote:
         | What I'm afraid of is that he and Ilya are not as good and
         | smart as they paint themselves.
         | 
         | And that a lot of key people had left (i.e. to Anthropic). And
         | that by pure inertia they have GPT-5 on their hands and not
         | much control over where this technology is going.
         | 
          | I can't tell for certain, but it does look like one of their
          | corner pieces, the ChatGPT system prompt, which sits at the
          | funnel of the data collection, has degraded significantly from
          | the previous version. Has the person who was key to the
          | previous design left? Or does it no longer matter?
         | 
         | One could argue that OpenAI is very hot and everyone would want
         | to work there. But a lot of newcomers only create more pressure
         | for the key people. And then there is the inevitable leakage
         | problem.
        
           | jstummbillig wrote:
            | There are some vague ideas and fears here. Understandable.
            | Trying to find a starting point from which to get somewhere:
            | Where would GPT4 and onwards be better housed? Is there a
            | setup -- an individual, a company, an institution, a concept,
            | a license -- where the whole thing would clearly fit better
            | than with OpenAI?
           | 
            | Note, I am not suggesting that they are particularly
            | un/qualified or un/trustworthy. I am just trying to figure
            | out whether the problem is with the nature of the technology
            | -- that maybe there is no entity or setup that would
            | obviously be a good fit for governing GPT because GPT is
            | simply scary -- or whether this is a personality issue.
        
           | bmitc wrote:
           | > What I'm afraid of is that he and Ilya are not as good and
           | smart as they paint themselves.
           | 
            | This describes almost any venture capitalist or high-profile
            | startup founder, as far as I can tell. Most don't realize
            | that their privileged path, their lucky path, or both had
            | more to do with their success than their smarts.
           | 
           | I really like James Simons as he mostly attributes his
           | success to luck and being able to hire and organize smart
           | people and give them the tools they need to work. He
           | basically describes it as luck and taste, despite his actual
           | smarts and his enormous impact on the world.
        
             | Mistletoe wrote:
             | I don't know everything about him but from what I do know,
             | I would put Bezos in the "not just luck" very lonely camp.
             | I think his Day 1 work and iterate every day idea is just
             | that powerful and real and he really did it instead of
             | talking about it. Even though he says he won several
             | lotteries to get where he is, I'm not so sure.
        
             | saghm wrote:
             | > I really like James Simons as he mostly attributes his
             | success to luck and being able to hire and organize smart
             | people and give them the tools they need to work. He
             | basically describes it as luck and taste, despite his
             | actual smarts and his enormous impact on the world.
             | 
             | Plenty of really smart people don't end up having a big
             | impact on the world, and it's possible to make a difference
             | without being an outlier in terms of intelligence. Everyone
             | who has made an impact has benefited to some degree by
             | circumstances beyond their control though, so even if
             | someone is genuinely smarter than anyone else, it's a
             | fallacy for them to assume that it was the determining
             | factor in their success and a guarantee of future success.
        
             | iibarea wrote:
             | ... and Simons would maybe be the most justified in
             | overlooking luck, but he's smart enough to realize how
             | random the world is. Peter Norvig also emphasizes the role
             | of luck in his life. It's honestly a very good test of
             | self-awareness and empathy, though there's def some
             | negative selection against those traits in sv.
        
           | sebzim4500 wrote:
           | I'm sure they are both pretty smart, but if anything that
           | makes their apparent monopoly more concerning.
        
             | dnissley wrote:
             | Monopoly over what?
        
               | sebzim4500 wrote:
               | LLMs that actually work.
               | 
               | They are on GPT-4 and no one else is close to GPT-3.5.
        
           | FormerBandmate wrote:
           | You can use Anthropic's chatbot in Quora's Poe app. Right now
           | it isn't as good as Bing or ChatGPT. Misses some basic logic
           | things, and the "As an AI language model" BS still stops it
           | from doing fun things like making Jesus rap battle Gus Fring
           | (that was like a month ago, someone in the replies got it to
           | do that so I'll have to check it out again). I'd have to see
           | how it is at writing PowerPoints but idk
        
             | IncRnd wrote:
             | Verse 1 - Jesus
             | 
             | I'm the son of God, the King of Kings
             | 
             | You're just a drug lord, selling crystal meth and things
             | 
             | My teachings change lives, bring peace to the world
             | 
             | You bring addiction, violence and pain, unfurled
             | Chorus
             | 
             | Jesus, the savior, the light in the dark
             | 
             | Gus Fring, the villain, who leaves his mark
             | Verse 2 - Gus Fring
             | 
             | You talk a big game, but where's your proof?
             | 
             | I've built an empire, with power that's bulletproof
             | 
             | Your miracles are outdated, my tactics are new
             | 
             | I'll take you down, no matter what you do
             | Chorus
             | 
             | Jesus, the savior, the light in the dark
             | 
             | Gus Fring, the villain, who leaves his mark
             | Verse 3 - Jesus
             | 
             | My love conquers all, it's the greatest force
             | 
             | Your money and power, just lead to remorse
             | 
             | You're just a man, with a fragile ego
             | 
             | I'll show you mercy, but you reap what you sow
             | Chorus
             | 
             | Jesus, the savior, the light in the dark
             | 
             | Gus Fring, the villain, who leaves his mark
             | Verse 4 - Gus Fring
             | 
             | You may have won this battle, but the war is not done
             | 
             | I'll continue to rise, until I've won
             | 
             | You may have followers, but they'll never be mine
             | 
             | I'll always come out on top, every time
             | Chorus
             | 
             | Jesus, the savior, the light in the dark
             | 
             | Gus Fring, the villain, who leaves his mark
             | Outro
             | 
             | In the end, it's clear to see
             | 
             | Jesus brings hope and love, for you and me
             | 
             | Gus Fring may have power, but it's not enough
             | 
             | Jesus is the way, the truth, the life, and that's tough.
        
               | FormerBandmate wrote:
               | Huh, last time I checked it it gave me a message about
               | how that was "offensive to Christians". I'll have to
               | check it out again
        
           | sroussey wrote:
           | > ChatGPT system prompt which sits at the funnel of the data
           | collection had degraded significantly from the previous
           | version
           | 
           | They purposely moved free users to a simpler/cheaper model.
           | Depending on your setting and if you are paying, there are
           | three models you might be inferencing with.
        
             | startupsfail wrote:
             | I'm not talking about GPT-3.5 vs GPT-4. I'm talking about a
             | change to their system prompt.
        
         | TechnicolorByte wrote:
         | It's a capped-profit structure where excess profit will
          | supposedly go back to the nonprofit side. From a recent NYTimes
         | article [1]:
         | 
         | > But these profits are capped, and any additional revenue will
         | be pumped back into the OpenAI nonprofit that was founded back
         | in 2015.
         | 
         | > His grand idea is that OpenAI will capture much of the
         | world's wealth through the creation of A.G.I. and then
         | redistribute this wealth to the people. In Napa, as we sat
         | chatting beside the lake at the heart of his ranch, he tossed
         | out several figures -- $100 billion, $1 trillion, $100
         | trillion.
         | 
         | How believable that is, who knows.
         | 
         | [1] https://www.nytimes.com/2023/03/31/technology/sam-altman-
         | ope...
        
           | G_z9 wrote:
           | This is ducking insane. How are people not up in arms about
           | this? Imagine if the guy who invented recombinant insulin
           | stated publicly that he intended to capture the entire
           | medical sector and then use the money and power to reshape
           | society by distributing wealth as he saw fit. That's ducking
           | insane and dangerous. This guy has lost his fucking mind and
           | needs to be stopped.
        
             | [deleted]
        
             | sebzim4500 wrote:
             | I find OpenAI a bit sketchy, but this is an overreaction.
             | The only difference between OpenAI and the rest is that
             | OpenAI claims to have good intentions, only time will tell
             | if this is true. But the others don't even claim to have
             | good intentions. It's not like any of OpenAI's actions are
              | unusually bad for a for-profit company.
        
             | tsunamifury wrote:
             | I'm sorry your AI keyboard didn't like your sentiment.
              | Words have been changed to reduce your vulgarity. Thank
              | you for your human node input.
             | 
             | On a serious note I think you are right. In private the
              | ideology of him and his mentor Thiel is a lot more...
             | elite. Their think tank once said "of all the people in the
             | world there are probably only 10,000 unique and valuable
             | characters. The rest of us are copies."
             | 
             | I'm not going to criticize that because it might be a valid
             | perspective but filter it through that kind of power. I
             | don't love that kind of thinking driving such a powerful
             | technology.
             | 
             | I am so sad that Silicon Valley started out as a place to
             | elevate humanity and ended with a bunch of tech elites who
             | see the rest of the world generally as a waste of space.
             | They claim fervently otherwise but at this point it seems
             | to be a very thin veneer.
             | 
             | The obvious example being GPT was not built to credit or
             | give attribution to its contributors. It is a vision of the
             | world where everything is stolen from all of us and put in
              | Sam Altman's hands because he's... better or something.
        
             | blibble wrote:
             | > How are people not up in arms about this?
             | 
             | they will be once they realise
             | 
             | > This guy has lost his fucking mind and needs to be
             | stopped.
             | 
             | I agree, hopefully via regulation
             | 
             | otherwise the 21st century luddites will
        
           | cactusplant7374 wrote:
           | Are the profits capped for Altman?
        
           | Jasper_ wrote:
           | Returns are capped at 100x their initial investment, which,
           | you know, is not that big of a cap. VCs would go crazy for a
           | 100x return. Most companies, even unicorns, don't get there.
           | 
           | They're justifying it by saying AGI is so stupidly big that
           | OpenAI will see 100000x returns if uncapped. So, you know,
           | standard FOMO tactics.
           | 
           | [0] https://openai.com/blog/openai-lp
        
             | mirekrusin wrote:
             | This cap is much smaller, 100x was for initial investors.
             | Microsoft took every single penny they could to get 49%
             | stake.
             | 
              | If they don't achieve AGI, they won't go over the cap with
              | profits and all the drama is for nothing - so saying it's a
              | fake cap is not right.
             | 
             | Please somebody correct me if I'm wrong.
        
             | [deleted]
        
             | sebzim4500 wrote:
             | I mean, presumably they are at like 30x already?
        
               | jprete wrote:
               | I believe the Microsoft investment is around $10 billion,
               | so they can get up to a trillion dollars of return under
               | the cap.
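
              The profit-cap arithmetic in this subthread can be sketched
              as follows (assuming, per the comments above, a roughly $10
              billion Microsoft investment and a 100x multiple; the
              multiple for later investors is reportedly lower, so both
              figures are assumptions, not confirmed terms):

              ```python
              def capped_return(investment: float, cap_multiple: float) -> float:
                  """Maximum total return an investor can receive under a profit cap."""
                  return investment * cap_multiple

              # Figures from the thread: ~$10B investment, 100x first-round cap.
              max_return = capped_return(10e9, 100)
              print(f"${max_return:,.0f}")  # $1,000,000,000,000 -- one trillion
              ```

              On these numbers the cap only binds after a trillion dollars
              of returns, which is why commenters differ on whether it is
              meaningful in practice.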
        
           | iibarea wrote:
           | Again, this is unbelievably good marketing - and good sales
           | when pitching VCs. Plus it's a nice reworking of the very for
           | profit nonprofit model (see also FTX). But in terms of actual
           | reality openAI is mostly succeeding by being more reckless
           | and more aggressively commercial than the other players in
           | this space, and is in no meaningful way a nonprofit any
           | longer.
        
           | btown wrote:
           | I'm sure he believes that at such a time when OpenAI creates
           | AGI, all the company's investors' profit caps will have been
           | passed (or will immediately be passed), and thus he will have
           | removed all incentives for anyone at the company - including
           | himself - to keep it from the world.
           | 
           | But there are so, so many incentives other than equity that
           | come into play. Pride, well-meaning fear of proliferation,
           | national security concerns, non-profit-related but still-
           | binding contractual obligations... all can contribute to
           | OpenAI wanting to keep control of their creations even if
           | they have no specific financial incentive to do so.
           | 
           | Whether that level of control is good or bad is a much longer
           | conversation, of course.
        
         | birdymcbird wrote:
         | > altruistic motivations
         | 
          | feels like a trend among Silicon Valley companies and tech
          | 'genius' personalities to claim altruism. some delusion that
          | basing a personality on this lie will make them untouchable
          | and elevate their character, as if they are not in this just
          | to make a ton of money like every other company and industry.
          | and American media generally push this propaganda. SBF prime
          | example.
        
           | cactusplant7374 wrote:
           | > and American media generally push this propaganda. SBF
           | prime example.
           | 
           | Did they? I've only listened to one interview of SBF and that
           | was done by Tyler Cowen. He seemed totally aloof to the
           | seriousness of running an exchange. If anything we've been
           | convinced that idiosyncratic individuals are our saviors.
        
             | birdymcbird wrote:
              | SBF was constantly promoted in US media by news
              | organizations like the NYTimes and celebrities before all
              | the fraud became apparent.
              | 
              | once the cat was out of the bag they ran stories going easy
              | on SBF, never apologizing for promoting this fraud. they
              | wrote articles sympathetic to SBF, also giving him a
              | platform to visit each news show or talk show and give a
              | defense of himself as if he knew nothing about what was
              | happening, all part of a legal defense strategy to claim
              | incompetence but not criminal negligence
        
         | cookingrobot wrote:
          | He didn't take equity in OpenAI. Does that suggest altruism?
        
           | iibarea wrote:
           | Assuming we take this at face value, once you have a lot of
           | money power becomes appealing - and control over a very
           | important player in the AI space is that. The original vision
           | of openAI was democratization of that decision-making
            | process; the model now is - these guys are in charge. Maybe
           | that's altruistic, because they're the smartest guys in the
           | room and they can mitigate the downside risks of this tech
           | (... not fucking AGI, but much more like the infinite
           | propaganda potential of chatGPT). I'm more a fan of
           | democratization, but that's not a universally held opinion in
           | sv.
        
         | choppaface wrote:
         | He also invited Peter Thiel to YC, and made his first millions
         | selling the personal data of Loopt users to low-income credit
         | card vultures. Also ... Worldcoin?
        
       | victor106 wrote:
       | > OpenAI and Microsoft also created a joint safety board, which
       | includes Mr. Altman and Microsoft Chief Technology Officer Kevin
       | Scott, that has the power to roll back Microsoft and OpenAI
       | product releases if they are deemed too dangerous.
       | 
       | what a joke.
       | 
       | Find me one instance where the CEO of a company picked public
       | interest/safety over profits when there was no regulatory
       | oversight.
        
         | dragonelite wrote:
         | Do people really think AI will go haywire like in the Hollywood
         | movies?
        
           | coffeebeqn wrote:
           | Some do. Personally I think that LLMs will hit a ceiling
           | eventually, way before AGI. Just like self-driving: the last
           | 20% is orders of magnitude more difficult than the first 80%.
        
           | pell wrote:
           | I don't think we're close to a situation where they send us
           | into a Matrix. But I can see a scenario where they are
           | connected to more and more running systems of varying degrees
           | of importance to human populations, such as electrical grids,
           | water systems, factories, etc. If they're essentially given
           | executive powers within these systems I do see a huge
           | potential for catastrophic outcomes. And this is way before
           | any actual AGI. The simple "black box" AI does not need to
           | know what it's doing to cause real-world consequences.
        
           | gwd wrote:
           | Not like in Hollywood movies, but yes:
           | 
           | https://www.youtube.com/watch?v=gA1sNLL6yg4
           | 
           | https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
           | 
           | Look around alignmentforum.org or lesswrong.com, and you'll
           | see loads of people who are worried / concerned at various
           | levels about what could happen if we suddenly create an AI
           | that's smarter than us.
           | 
           | I've got my own summary in this comment:
           | 
           | https://news.ycombinator.com/item?id=35281504
           | 
           | But this discussion has actually been going on for nearly two
           | decades, and there is a _lot_ to read up on.
           | 
           | EtA: Or, for a fun version:
           | 
           | https://www.decisionproblem.com/paperclips/index2.html
        
           | jeron wrote:
           | Yudkowsky, to begin with
        
           | sobkas wrote:
           | > Do people really think AI will go haywire like in the
           | Hollywood movies?
           | 
           | No, it will just make every inequality even harder to fight.
           | Because a computer algorithm "can't be biased", every
           | decision it makes will be "objective". And because it's
           | really hard to know why an AI made a decision, it will be
           | impossible to accuse it of racism, bigotry, or xenophobia,
           | while the rich and powerful will be the ones deciding
           | (through the hands of "developers") what data will be used to
           | train AIs.
        
           | barbazoo wrote:
           | I don't think it's about AI going haywire, more about how the
           | technology will be used by people for nefarious purposes.
        
           | benrutter wrote:
           | I don't for one, but I still think there could be legitimate
           | safety concerns. LLMs are unpredictable, and the possibility
           | for misinformation in pitching them as search aggregators is
           | pretty large. Disinformation can have, and previously has
           | had, genuinely dangerous effects.
        
       | dclowd9901 wrote:
       | Are there any employees of OpenAI around? I had a question:
       | 
       | Does anyone in the office stop to contemplate the ramifications
       | of developing technology that will likely put most people out of
       | a job, which will have a whole host of knock-on effects?
        
         | NickBusey wrote:
         | Self checkout machines put people out of a job.
         | 
         | Cars put stable boys out of jobs.
         | 
         | Light bulbs put candlemakers out of jobs.
         | 
         | Are the people who made them also morally culpable?
         | 
         | Let's just make no progress ever in the name of employment I
         | guess. /s
        
           | madmask wrote:
           | I think these comparisons always miss that humans are still
           | useful because they are the control system in the end, even
           | if at a very high level. When AGI comes along, humans will
           | have to compete with it in the market, and there may be very
           | little actual need for humans.
        
             | NickBusey wrote:
             | I believe humans will continue to be useful. You apparently
             | do not. I have not missed anything.
        
           | sammalloy wrote:
           | > Self checkout machines put people out of a job.
           | 
           | Have they, though? Is there good data on this? I haven't seen
           | anyone lose any jobs over this in my area. They either get
           | reassigned to different departments or get better jobs doing
           | the same thing in companies that refuse to use self-
           | checkouts. And I read here just a month or so ago that Amazon
           | closed many of their "no cashier" stores in NY and Trader
           | Joe's vowed to never use them.
           | 
           | > Light bulbs put candlemakers out of jobs
           | 
           | I would guess that such artisans moved on to other niche
           | products. Where I live, candlemakers today make a lot of
           | money, and I had a chance to watch them do their thing in
           | their small commercial space. This was a one person
           | operation, no employees and no mechanization, and judging by
           | the amount of product they were creating and their wholesale
           | prices, they were pulling in about 300k a year or more,
           | after taking into account supplies.
        
             | NickBusey wrote:
             | I like that your first point asks for data on a common
             | phenomenon. (Side note: Everyone was reassigned or got a
             | better job? You sure about that?)
             | 
             | Then your second point is a wild anecdote with zero data.
             | (Side note: With zero additional data I can tell you that
             | your "judging" of their profit is wildly inaccurate.)
        
               | sammalloy wrote:
               | Google is your friend. Candlemaking businesses run by
               | sole proprietors are highly profitable with total revenue
                | in the billions. I probably wouldn't have believed it if
                | I hadn't seen the operation up close and personal for
                | myself. It's a lot of work for one person, and the
                | person who runs the business I saw works 12 hours a day.
        
               | Carrok wrote:
               | > person who runs the business I saw works 12 hours a day
               | 
                | Oh OK, so if you work yourself to death you can make a
                | slightly above-average income.
               | 
               | Cool, I guess.
               | 
               | I'm not seeing how it's relevant to GP's point though.
               | Candle makers still exist yes.
               | 
               | But are you really arguing the light bulb did not cause
               | that specific job to become less common?
               | 
                | I also thought it was fun how you asked for a citation,
                | then when you were asked to provide one yourself, you
                | responded with "gOoGlE iS yOuR fRiEnD".
        
         | CatWChainsaw wrote:
         | I'll give them the benefit of the doubt. Many of them probably
         | do contemplate it. But since AI is a sea change, it's difficult
         | to predict the full range of first-order consequences, much
         | less all the resulting second-order ones.
         | 
         | But... genie, bottle; prisoner's dilemma. If they object to
         | what they're building, or how it's implemented, too
         | strenuously, they will be out of a job. Then not only do they
         | have more immediate concerns about sustaining themselves, they
         | have _no_ weight in how things play out.
        
         | galoisscobi wrote:
         | Maybe Sam thinks about this at some level. From his New Yorker
         | profile[0]:
         | 
         | > "The other most popular scenarios would be A.I. that attacks
         | us and nations fighting with nukes over scarce resources." The
         | Shypmates looked grave. "I try not to think about it too much,"
         | Altman said. "But I have guns, gold, potassium iodide,
         | antibiotics, batteries, water, gas masks from the Israeli
         | Defense Force, and a big patch of land in Big Sur I can fly
         | to."
         | 
         | This doesn't explicitly talk about him being worried about AI
         | putting a lot of people out of jobs but he is prepping for AI
         | going awry.
         | 
         | [0]: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-
         | ma...
        
           | CatWChainsaw wrote:
           | The fact that his backup plan if things go really wrong is to
           | bug out, damn the rest of the world, is, to put it mildly,
           | _not great_.
        
       | efficientsticks wrote:
       | > The goal, the company said, was to avoid a race toward building
       | dangerous AI systems fueled by competition and instead prioritize
       | the safety of humanity.
       | 
       | > "You want to be there first and you want to be setting the
       | norms," he said. "That's part of the reason why speed is a moral
       | and ethical thing here."
       | 
       | Clearly he has either not learned or has ignored the lessons
       | from Black Mirror and 1984, which are that others will copy and
       | emulate the progress.
       | 
       | The fact is that capitalism is no safe place to develop advanced
       | capabilities. We have the capability for advanced socialism, just
       | not the wisdom or political will.
       | 
       | (I'll answer the anonymous downvote: Altman has advocated giving
       | equity as a UBI solution. It's a well-meaning technocratic idea
       | to distribute ownership, but it ignores human psychology and the
       | fact that this idea was already attempted in practice in 1990s
       | Russia, with obvious, unfavourable outcomes.)
        
         | sampo wrote:
         | > lessons from Black Mirror and 1984
         | 
         | Those are works of fiction.
        
           | quickthrower2 wrote:
           | And works of prediction.
        
           | efficientsticks wrote:
           | They're dystopian fictions, ie. examples of what _not_ to do.
           | But experience has shown that the real-world often recreates
           | dystopian visions by example.
           | 
           | So trying to be the first to show something in a well-meaning
           | way can nonetheless have unfortunate consequences once the
           | example is copied.
        
             | latency-guy2 wrote:
             | Tell me: are the dystopian fictions that represent
             | socialism or communism as bad just as reasonable?
             | 
             | Then following up with whatever your answer is: Why are you
             | picking and choosing which fictions are reasonable?
             | 
             | Let's dispel the notion that artists and writers are more
             | aware and in tune with humanity than other humans.
        
               | roywiggins wrote:
               | 1984's Oceania is at least as Stalinist as it is anything
               | else.
        
               | jjulius wrote:
               | >Then following up with whatever your answer is: Why are
               | you picking and choosing which fictions are reasonable?
               | 
               | This is arguing in bad faith. You don't care what their
               | answer will be, you have decided that they are absolutely
               | picking and choosing, and will still accuse them of as
               | much even if their answer to your first question is,
               | "Yes".
        
               | latency-guy2 wrote:
               | This isn't an argument in the first place buddy.
               | 
               | You're right that I don't care, because it has already
               | been decided that Orwell is representing the future if
               | things go "The Wrong Way (tm)", buy 1984 at Amazon for
               | $24.99, world's best selling book. Or more succinctly to
               | OP, "The Capitalist Way (tm)".
        
               | maxbond wrote:
               | It's okay to decide that something isn't worth arguing
               | against, and to spend your time in a way you find more
               | productive.
               | 
               | Having articulated an argument (which you absolutely
               | did), it's not okay to try to retcon that you were just
               | trolling and everyone else is the fool for having taken
               | you seriously.
        
               | latency-guy2 wrote:
               | > It's okay to decide that something isn't worth arguing
               | against, and to spend your time in a way you find more
               | productive.
               | 
                | Who the hell are you talking to? This is some weird
                | segue into something that wasn't even being talked about
                | at all.
               | 
               | > Having articulated an argument (which you absolutely
               | did), it's not okay to try to retcon that you were just
               | trolling and everyone else is the fool for having taken
               | you seriously.
               | 
               | I think you're hallucinating anything and whatever you
               | want. Anyway don't feel the need to be productive today.
               | It's Saturday after all.
        
           | xiphias2 wrote:
           | Maybe because they are more digestible than reality. Reality
           | is much, much worse.
        
           | CatWChainsaw wrote:
           | "The only thing stupider than thinking something will happen
           | because it is depicted in science fiction is thinking
           | something will not happen because it is depicted in science
           | fiction."
           | 
           | https://philosophybear.substack.com/p/position-statement-
           | on-...
        
         | atleastoptimal wrote:
         | The only issue is that in the real world, capitalism has a
         | better track record than socialism.
        
           | efficientsticks wrote:
           | I agree it's worth looking at the history, and to not repeat
           | its mistakes, though at the same time this is a new
           | situation, and it will continue to be new into the future, so
           | sticking to heuristics may not serve humanity as well as
           | being open-minded on the policy front.
        
         | htss2013 wrote:
         | Do we have the capability for advanced socialism? Because I
         | recall all the smartest economists circa 2021 saying inflation
         | wasn't a thing, it's transient, it's only covid affected supply
         | chains. In reality we are in a broad, sticky inflation crisis
         | not seen since the 70s, which may be turning into a regional
         | banking crisis.
         | 
         | It's difficult to believe we have reached advanced socialism
         | capabilities, and all of the forecasting that would require,
         | when we don't even understand the basics of forecasting
         | inflation 1-2 years out.
        
           | efficientsticks wrote:
           | The ambiguity of "advanced socialism" is problematic for any
           | meaningful debate, so I apologise for that.
           | 
           | I was meaning something closer to "we have the resources and
           | technology (in this advanced era), just not the wisdom or
           | political will". The actual nature of what could be provided
           | is up for debate, but if we're looking at mass unemployment
           | in 2 decades' time, perhaps it's a conversation worth having
           | again.
        
       | moron4hire wrote:
       | I'd be very happy to see existing regulation on safety critical
       | software systems updated to put a moratorium on AI integration
       | for at least the next 5, maybe 10 years.
        
       | sacnoradhq wrote:
       | Sama <3
       | 
       | He stuck to the 99.9% gamble of setting money on fire that were
       | startups and navigated to the big time(tm).
       | 
       | Also, he helped lift Clerky, Triplebyte, YC, and many others
       | pre-, during, and post-Loopt.
       | 
       | Not many people (or no one) "deserves" success, but Sama brings a
       | healthy dose of goodwill wherever he goes.
        
         | choppaface wrote:
         | Triplebyte and Loopt both ended up selling / monetizing user
         | data in ways the users really didn't like.
        
         | jackblemming wrote:
         | Are you kidding? He had one startup that was more or less a
         | flop, then for some reason was appointed to a high position at
         | y combinator, got lucky allocating capital (plenty of idiots
         | can and have gotten lucky or were at the right place at the
         | right time) and now is the CEO of OpenAI. This man is the
         | definition of it's not what you know, it's who you know, and
         | that's not a good thing.
        
           | sacnoradhq wrote:
           | Even with the best ideas, execution, and teams, 99.99% of
           | startups fail. That's okay. They're assumed to be
           | experiments.
           | 
           | There is no such thing as self-made. And there's nothing
           | wrong with friends and networking, especially as some
           | particular help or chance encounter could be pivotal to
           | nudging onto something great.
           | 
           | It's the trying and learning that are the gold to try again.
           | Timing, honest perspective, persistence, and a measure of
           | prepared luck seem to be more of it. There is no magic
           | formula. I wish success to all who want it.
        
           | boeingUH60 wrote:
           | Altman befriended Paul Graham, and his life blossomed...
        
         | w3454 wrote:
         | [dead]
        
       | schiffern wrote:
       | Looks like the media has chosen Sam Altman as the next Elon Musk.
       | 
       | This makes sense. He perfectly fits their cliche of a socially-
       | awkward technologist, and he's trusting (foolish?) enough to make
       | complex nuanced statements in public, which they can easily mine
       | for out-of-context clickbait and vilification fodder.
        
         | eternalban wrote:
         | 2014: https://www.ycombinator.com/blog/sam-altman-for-
         | president/
         | 
         | 2015: https://www.nytimes.com/2015/07/12/opinion/sunday/sam-
         | altman...
         | 
         | 2016: https://www.newyorker.com/magazine/2016/10/10/sam-
         | altmans-ma...
         | 
         | ..
         | 
         | That last link, _Sam Altman's Manifest Destiny_, is worth the
         | read. However, the last time I posted that link HN went down
         | for an hour right afterwards. (Of course, correlation is not
         | causation :/)
         | 
         | https://news.ycombinator.com/item?id=35334023
        
         | greyman wrote:
         | What would make him "socially-awkward"?
        
           | tedunangst wrote:
           | Never forget the time he wore sneakers to the Ritz.
        
           | schiffern wrote:
           | Sam or Elon?
           | 
           | I'll assume you meant Sam. IMO Sam is mostly just shy and
           | cerebral, but to many people that will come off as awkward
           | and robotic.
           | 
           | Watch his recent Lex Fridman interview. Personally I thought
           | it was great, but I'm aware enough to realize that (sadly)
           | many low-knowledge people will judge such demeanor harshly.
           | 
           | Mark my words: the media will, 10 times out of 10, _exploit
           | that misconception_, not correct it. "Ye knew my nature when
           | you let me climb upon your back..."
        
             | lubesGordi wrote:
             | I don't know, it seemed to me his responses on Lex were
             | very measured and carefully restrained in a lot of places,
             | calculated and vague in others. He doesn't come off as
             | genuine to me at all.
        
             | drewcoo wrote:
             | If you mean the one where the interviewer, Lex, was wearing
             | a suit and Sam was in a hoodie, where Sam droned in a
             | robotic monotone and often sat with crossed arms, staring
             | downward . . . I think the knowledgeable people might also
             | assume he's the next evil tech overlord. Or certainly
             | distant and uncaring.
             | 
             | The only things missing were lighting from below and scenes
             | of robots driving human slaves.
        
             | [deleted]
        
             | Kkoala wrote:
             | It's interesting that in the article he was described as
             | being "hyper-social", "very funny" and a "big personality"
             | as a child. I guess those don't necessarily contradict
             | "awkward" and "robotic", but they also wouldn't come to my
             | mind at the same time.
        
           | andsoitis wrote:
           | You'll get some clues when you watch his recent interview
           | with Lex Fridman: https://youtu.be/L_Guz73e6fw
        
         | seydor wrote:
         | No, the next Zuckerberg. The media (rightly) sees OpenAI as a
         | competitor medium.
         | 
         | Although he's much more prepared to face the next Greta
         | (Yudkowsky).
         | 
         | He has to fix his vocal fry, however; it is annoying.
        
         | [deleted]
        
         | phillryu wrote:
         | When I compare the two, Elon was (lucky?) enough to at least
         | have a string of vision-fueled ventures that became a thing.
         | What is
         | Sam's history of visions? Loopt? Is Y Combinator considered in
         | a new golden era after he took over? Did Worldcoin make any
         | sense at all?
         | 
         | I'm honestly hoping I'm entirely ignorant of his substance and
         | would feel better if someone here can explain there's more to
         | him than that... I would feel better knowing that what could be
         | history's most disruptive tech is being led by someone with
         | some vision for it, beyond the apocalypse that he described in
         | 2016 that he tries not to think about too much:
         | 
         | "The other most popular scenarios would be A.I. that attacks us
         | and nations fighting with nukes over scarce resources." The
         | Shypmates looked grave. "I try not to think about it too much,"
         | Altman said. "But I have guns, gold, potassium iodide,
         | antibiotics, batteries, water, gas masks from the Israeli
         | Defense Force, and a big patch of land in Big Sur I can fly
         | to." https://www.newyorker.com/magazine/2016/10/10/sam-altmans-
         | ma...
        
           | sixQuarks wrote:
           | Are you really insinuating that Elon was simply "lucky" when
           | it came to disrupting and transforming two gargantuan and
           | highly complex industries at the same time?
        
           | schiffern wrote:
           | I'm not talking about the _reality_ of Sam and Elon. I'm
           | putting my ear to the ground and observing the way the media
           | is portraying (and will portray) them.
           | 
           | I wish that "actual reality" was all that mattered and not
           | such low-knowledge "optics", but sadly we don't live in that
           | world.
        
           | wetmore wrote:
           | I'm with you, listening to his interview with Ezra Klein gave
           | me the impression that he doesn't actually think that deeply
           | about the possible impact of AI. He says it worries him, but
           | then he seems to wave those worries away with really
           | simplistic solutions that don't seem very tenable.
        
             | xiphias2 wrote:
             | The main question about OpenAI is this: can you have any
             | better structure to create the singularity that will happen
             | anyway? (Some people don't like the word AGI, so I just
             | define it as machines having vastly more intellectual power
             | than humans.)
             | 
             | Would it be better if Google, Tesla, Microsoft, Apple, the
             | CCP, or any other for-profit company did it?
        
             | davidivadavid wrote:
             | What bothers me most is that the picture he paints of
             | _success_ itself is some handwavy crap about how it could
             | "create value" or "solve problems" or some other type of
             | abstract nonsense. He has displayed exactly 0 concrete,
             | desirable vision of what succeeding with AI would look
             | like.
             | 
             | That seems to be the curse of Silicon Valley, worshiping
             | abstractions to the point of nonsense. He would probably
             | say that with AGI, we can make people immortal, infinitely
             | intelligent, and so on. These are just potentialities with,
             | again, 0 concrete vision. What would we use that power for?
             | Altman has no idea.
             | 
             | At least Musk has some amount of storytelling about making
             | humanity multiplanetary you may or may not buy into. AI
             | "visionaries" seem to have 0 narrative except rehashed,
             | high-level summaries of sci-fi novels. Is that it?
        
               | efficientsticks wrote:
               | I agree, listening to the podcast I think the answer is
               | that "yes" that is it: faith in technological progress is
               | the axiom and the conclusion. Joined by other key
               | concepts like compound growth, the thinking isn't deep
               | and the rest is execution. Treatment of the concept of
               | 'a-self' in the podcast was basically just nihilistic
               | weak sauce.
        
               | jasmer wrote:
                | AI is not an abstraction. It's rational to be hand-wavy
                | about future value; it's already materialized. AI is
                | basically an applied research project; he should be more
                | like a dean herding researchers, and we should take him
                | as that. In a previous era, that's what it would be: a
                | PhD from Berkeley in charge of some giant AT&T
                | government-funded research lab thing. He'd be on TV with
                | a suit and tie, they'd be smoking and discussing
                | abstract ideas.
        
         | f38zf5vdt wrote:
         | [flagged]
        
         | ninth_ant wrote:
         | What specifically in that article was vilification of Sam or
         | clickbait, or statements taken out of context?
        
           | schiffern wrote:
           | In these early days of a smear campaign (even an
           | unintentional one that's just about chasing clicks), the game
           | is mostly about plausibly deniable innuendo.
           | 
           | The headline is a great start. Contradictions are bad. Altman
           | has contradictions. Therefore Altman is bad. They don't say
           | it, but they also know _they don't need to_. They lead the
           | audience to water and trust that enough of them will drink.
           | 
           | The closing paragraph is another great example. It
           | intentionally leaves the reader hanging on the question "so
           | why did Altman do AI if there are moral downsides," without
           | resolving the question by giving Altman's context when he
           | said it.
           | 
           | Trust me or don't, but what you see here is just the
           | beginning. In 6 months' time Altman will be (in the public
           | eye) evil incarnate.
        
             | ninth_ant wrote:
             | They discussed the why earlier in the article, specifically
             | a fear of AI being primarily developed in private labs
             | outside of public view -- the partners feeling they could
             | help bring an alternative approach.
             | 
             | I feel they left it on that point not as part of some grand
             | conspiracy theory, but because the potential for this to be
             | good or bad is a question taking place around the world
             | right now.
             | 
             | Overall this piece feels positive towards Sam, despite what
             | you feel is a negatively loaded headline. He's walking a
             | delicate balance between profit and nonprofit, between
             | something that could be harmful or helpful to society --
             | these things are in contradiction and he's making those
             | choices deliberately. This is an interesting subject for an
             | article.
             | 
             | I find it deeply unlikely he will be viewed like Musk in 6
             | months. Musk is a fairly special case as he's unhinged and
             | unstable more than evil. If someone wanted to paint Sam
             | with an evil stick, Zuckerberg would be a more apt
             | comparison -- playing with something dangerous that affects
             | all of us.
        
               | schiffern wrote:
               | I genuinely hope that you're right and I'm totally wrong,
               | but my experience watching the media landscape says
               | otherwise. It would seem I have less faith in our
               | journalistic institutions than you.
               | 
               | The media operates on a "nearest cliche" algorithm, and
               | the Mad/Evil Genius cliche is so emotionally appealing
                | here that they'll find it irresistible. Even if it's not
                | true, _they'll make it true_.
               | 
               | Don't say I didn't warn you. :)
        
       | okareaman wrote:
       | This is a very fluid and chaotic situation so I'd be more
       | concerned if he said one thing and stuck to it
       | 
       |  _When the Facts Change, I Change My Mind. What Do You Do, Sir?_
       | - John Maynard Keynes
       | 
       |  _A foolish consistency is the hobgoblin of little minds, adored
       | by little statesmen and philosophers and divines. With
       | consistency a great soul has simply nothing to do. He may as well
       | concern himself with his shadow on the wall_ - Ralph Waldo
       | Emerson
       | 
       |  _Do I contradict myself? / Very well then, I contradict myself.
       | / (I am large, I contain multitudes)_ - Walt Whitman's "Song of
       | Myself"
        
       | balls187 wrote:
       | When will we get the "Contradictions of Dang?"
        
         | roflyear wrote:
         | I hope dang gets a small piece of the YC pie
        
         | maxbond wrote:
         | "The Lonely Work of Moderating Hacker News"
         | 
         | https://www.newyorker.com/news/letter-from-silicon-valley/th...
        
       | gamesbrainiac wrote:
       | I think there are some interesting questions:
       | 
       | - Sam does not have equity in OpenAI. Does this mean he can
       | potentially be removed at any point in time?
       | 
       | - OpenAI's profit arm will funnel excess profit to its non-profit
       | wing. If this is the case, who determines excess profit?
       | 
       | - OpenAI's founding charter commits the company to abandoning
       | research efforts if another project nears AGI development. If
       | this happens, what happens to the profit arm?
        
         | whitepoplar wrote:
         | I refuse to believe that Sam doesn't have equity in OpenAI. It
          | _must_ be some 4D-chess-style ownership structure, which I'm
         | guessing is for tax avoidance.
        
           | brookst wrote:
           | There's plenty of evidence that he has no equity. I'd love to
           | see contradictory evidence, but without that, just refusing
           | to believe things based on intuition isn't great.
        
             | ryanSrich wrote:
             | Why is that not great? It makes absolutely zero sense for
             | him to have no equity, or at least some agreement in place
             | that equity is coming. Or some other terms that essentially
             | amount to equity. You don't need evidence to be skeptical
             | of the situation.
        
               | peripitea wrote:
               | Why does it make zero sense? It makes perfect sense to
               | me.
        
               | brookst wrote:
               | He was wealthy before and has other means to parlay
               | openai to further wealth.
               | 
               | You're doing the "only the true messiah would deny his
               | divinity" argument -- if he was going to profit, that's
               | bad. If he's not going to profit, obviously he's lying
               | and is going to profit, so that's bad.
               | 
               | IMO arguments are only meaningful if they can be
               | falsified. Your argument can't be falsified because
               | you're using a lack of evidence as proof.
        
             | graeme wrote:
             | With no equity comes no control. I would find it very
             | surprising he has no control over the project.
             | 
             | And if he does have control that has value whether you
             | label it equity or not.
             | 
             | It is possible he literally has no control and no financial
             | upside but who would turn down control over what they
             | believed to be a world shaping technology?
        
               | peripitea wrote:
               | The parent is a non-profit, hence no equity is required
               | to have control. Seems pretty straightforward to me.
        
               | somsak2 wrote:
               | I mean, most non-founder company CEOs don't have a
               | significant % of total equity and they still have control
               | over the company.
        
           | [deleted]
        
           | xiphias2 wrote:
           | It's nothing special, there's a company under the foundation,
           | he doesn't have share in the company, he's ceo and board
           | member of the foundation.
           | 
           | It's just this one non-important detail is now being repeated
           | over and over.
        
           | boeingUH60 wrote:
           | > It must be some 4D-chess-style ownership structure, which
           | I'm guessing is for tax avoidance.
           | 
            | How would this even work? If only I got a dollar for every
            | suggestion that there are magic ways to avoid tax..."they
            | just write it off!"
        
             | namaria wrote:
             | Magic is just a word for things we don't understand. As a
             | poor wage slave sap, I'm 100% sure the world is run by
             | magic guilds, i.e. a bunch of powerful people conspiring
             | stuff I could never fathom. Whatever gets to my eyes and
             | ears has been approved for public disclosure. I don't know
             | shit, everything is magic to me. I kinda know how to
             | survive. So far.
        
         | fauigerzigerk wrote:
         | _> OpenAI's founding charter commits the company to abandoning
         | research efforts if another project nears AGI development. If
         | this happens, what happens to the profit arm?_
         | 
         | I think the definition of AGI is sufficiently vague that this
         | will never happen. And if it did happen, abandoning research
         | efforts could take the form of selling the for-profit arm to
         | Microsoft.
        
           | gamesbrainiac wrote:
           | I think you have a point there. AGI doesn't have a straight-
           | forward litmus test.
        
         | ren_engineer wrote:
         | he admits in the article to having equity via his investment
         | fund, they are using semantics because he doesn't "personally"
         | have equity. He also tries to downplay by saying it's an
         | "immaterial" amount, but in reality that could be billions of
         | dollars.
         | 
         | There's also nothing preventing Microsoft from gifting him
         | billions in Microsoft stock so they can claim he's not
         | motivated by profit with OpenAI despite indirectly making money
         | off it.
         | 
         | You'd have to be extremely naive to look at the decisions he's
         | made at OpenAI and think it was all purely out of good will.
         | Google and AWS both offer credits for academic and charity
         | projects, why did Altman choose to go all in with Microsoft if
         | it wasn't for money?
        
           | peripitea wrote:
           | Do you honestly think that AWS or Google Cloud would have
           | given them billions in credits just because they're a
           | nonprofit? I'm all for being skeptical of powerful people's
           | motives but that suggests a major disconnect from reality
           | somewhere in your thinking.
        
       | davidgerard wrote:
       | didn't know Thiel had helped start this here Y Combinator
        
         | dang wrote:
         | He didn't. That sentence was a bit confusingly written but "co-
         | founded" binds only to the second name.
        
       | mimd wrote:
       | He's begging for regulatory capture. "I can destroy the world but
        | I won't. My competitors will, so regulate them." A shrewd plan
       | considering he's not offering something beyond what another
       | company with a large nvidia cluster could offer.
        
         | [deleted]
        
         | dmix wrote:
          | The odds that this is the end game of the AI ethics movement
          | are pretty high: A mega-monopoly AI firm with a wall of gov
         | policy that will cripple any upstart who can't jump through
         | "safety" hoops written for and by the parent company. So any
         | talented dev who wants to do great AI work either has to work
         | for parent company or build a startup designed to get acquired
         | by them (aka don't rattle the cage).
        
           | sebzim4500 wrote:
           | I think that's the goal but there is a reasonable chance that
           | they completely fail and no serious regulation ever gets
           | passed.
           | 
           | That's the thing with technology, people get used to it and
           | then trying to ban/control it makes you look ridiculous. It's
           | like how now that Tesla has made it normal to have driving
           | assistance (and calling it FSD) there is little appetite
           | outside of contrarian circles for serious regulation. If,
           | however, regulation was proposed before Tesla shipped then it
           | might have passed.
        
         | sebzim4500 wrote:
         | >A shrewd plan considering he's not offering something beyond
         | what another company with a large nvidia cluster could offer.
         | 
         | It's been 4 months, no one has released anything nearly as good
         | as the initial release of ChatGPT. Meanwhile OpenAI has
         | released GPT-4 and is trialing plugins and 32k context.
         | 
         | Either their competition is incompetent or OpenAI is doing
         | something right.
        
         | zmmmmm wrote:
         | > A shrewd plan considering he's not offering something beyond
         | what another company with a large nvidia cluster could offer
         | 
         | So why does Bard seem inferior to GPT-4?
        
           | jamaliki wrote:
           | It'll get better fast.
        
         | skybrian wrote:
         | All that may be true, but it doesn't help us decide whether
         | more AI regulation is a good idea or not.
         | 
         | As with most things, it probably depends on how it's done.
        
           | mimd wrote:
           | That's a difficult question. I'm just pointing out that Sam's
           | "contributions" are unhelpful to solving that question.
           | 
            | You're limited by a prisoner's dilemma between separate,
            | sometimes antagonistic, countries. For example, we have little
           | agreement on nuclear weapons, at best we've gotten
           | concessions on testing, a few types of missiles, and so on.
           | Same with current climate legislation. So getting global
           | agreement is hard outside of the bare minimum. Most of the
            | in-country approaches seem to be either panic, political
           | distraction, or like Sam, regulatory capture, as they are
           | ignoring that it means nothing if another country pursues it.
           | 
           | So I'd focus on what simple agreements we could get
           | worldwide.
        
         | jasonhansel wrote:
         | An excellent case of doing the right thing for the wrong
         | reasons.
        
       | breck wrote:
       | I love what OpenAI has built and it's awesome to see them succeed
       | on the AI front (also I'm forever amazed to see SamA live up to
        | the LeBron James-level early hype about his entrepreneurial
       | skills).
       | 
       | This article sheds more light on how the non-profit front failed.
       | I find that to be a very hard and interesting problem. IMO, it
       | points to a larger problem with our current laws, where trying to
       | do good and compete fairly is made much harder (near impossible?)
       | when you compete against companies that exploit unfair
       | monopolistic laws.
        
       | mellosouls wrote:
       | There seems a lot of negative perception of the guy here, and
        | OpenAI definitely deserve criticism for some stuff (and as the
       | CEO, so does he), but - even if it was built on the work of
       | others, and with the obvious caution about what may come next -
       | he and they deserve immense respect and credit for bringing in
       | this new AI age.
       | 
       | They did it. Nobody else.
        
         | [deleted]
        
         | BarryMilo wrote:
         | The expression "on the shoulders of giants" has never been so
         | relevant.
        
           | mellosouls wrote:
           | Ha! To be fair though, Isaac Newton is not a bad person to be
           | implicitly compared to. :)
        
         | anileated wrote:
         | I for one have nothing against Sam as a person (not knowing him
         | well enough), but I question the sentiment that he and the
         | company deserve respect for what they're doing--much less _by
         | default_ , for some self-evident reason that doesn't even
         | require explanation.
         | 
         | Do people mean it in a sarcastic sense--and if not, why does
         | OpenAI deserve respect again?
         | 
         | -- Because it is non-trivial (in the same way, say, even Lenin
         | deserves respect by default--even if the outcome has been
          | disastrous, the person sure had some determination and did
         | humongous work)?
         | 
         | -- Because this particular tech is somehow inherently a good
         | thing? (Why?)
         | 
         | -- Because they rolled it out in a very ethical way with utmost
         | consideration for the original authors (at least those still
         | living), respecting their authorship rights and giving them the
         | ability to opt in or at least opt out?
         | 
          | -- Because they are the ones who happen to have $10 billion of
         | Microsoft money to play with?
         | 
         | -- Because they don't try to monetize a brave new world in
         | which humans are verified based on inalienable personal traits
         | like iris scans, which they themselves are bringing about[0]?
         | 
         | This is me stating why they shouldn't have respect _by default_
          | and counting on getting a constructive counter-argument in
          | return.
         | 
         | [0] https://news.ycombinator.com/item?id=35398829
        
           | cvalka wrote:
            | > Because this tech is somehow inherently a good thing
            | 
            | Without technology humans are just unworthy bugs.
        
             | anileated wrote:
             | It is generally accepted that some applications of
             | technology are good and some are not, or at least not self-
             | evidently so (weapons of mass destruction, environmentally
             | disastrous things like PFAS, packaging every single product
             | into barely-recyclable-once plastics, gene editing humans,
             | addictive social media/FB/TikTok, etc.)
             | 
             | Is _this_ particular application of technology good, and
             | even self-evidently so?
        
         | nice_byte wrote:
         | > he and they deserve immense respect and credit for bringing
         | in this new AI age.
         | 
         | Why? Who asked for it? I think that if openAI's breakthroughs
         | never happened, we would not be any worse off (actually, we'd
         | probably be better off).
        
       | nadermx wrote:
        | Meh, the contradiction seems to be that creating a source of
        | power, be it via physical or virtual means, is different. It is
        | not. A tool is, and always will be, a tool.
        
       ___________________________________________________________________
       (page generated 2023-04-01 23:00 UTC)