[HN Gopher] OpenAI is building a social network?
___________________________________________________________________
OpenAI is building a social network?
Author : noleary
Score : 295 points
Date : 2025-04-15 16:08 UTC (1 day ago)
(HTM) web link (www.theverge.com)
(TXT) w3m dump (www.theverge.com)
| jsnider3 wrote:
| With all the other social networks trying to keep their data
| private because they all want to try their own AIs, it makes
| sense that OpenAI would want to have its own social network that
| wouldn't charge them for the data. I still doubt they actually
| launch it.
| bhouston wrote:
| I've always thought that the social networks like X and BlueSky
| are sort of like the distributed consciousness of society. It is
| what society, as a whole / in aggregate, is currently thinking
| about and knowing its ebbs and flows and what it responds to are
| important if you want to have up to date AI.
|
| So yeah, AI integrated with a popular social network is valuable.
| ahartmetz wrote:
| Social networks tend to reflect the character of their
| founders. Do you _really_ want to see what Sam Altman can do?
| bhouston wrote:
| > Social networks tend to reflect the character of their
| founders.
|
| I would say "owners" rather than "founders", but I agree with
| you. I think Sam Altman's couldn't be worse than Elon Musk's
| X, no?
| daqhris wrote:
| Both are founders of a so-called non-profit and are suing
| each other. Their legal arguments are public at this point.
| By reading them, one may understand that it's hard to
| choose between 'yes' and 'no' as an answer. Maybe, we could
| request and take into account the opinion of what they
| 'created' that might outlast them and their conflict,
| namely AI.
| ahartmetz wrote:
| I don't use X either. Looks like it won't be around for
| much longer anyway, except as American Pravda (even though
| "Truth" Social already exists).
| newaccountlol wrote:
| Make sure to hide your little sisters from it.
| beloch wrote:
| >One idea behind the OpenAI social prototype, we've heard, is to
| have AI help people share better content. "The Grok integration
| with X has made everyone jealous," says someone working at
| another big AI lab. "Especially how people create viral tweets by
| getting it to say something stupid."
|
| This would be a decent PR stunt, but would such a platform offer
| anything of value?
|
| It might be more valuable to set AI to the task of making the
| most _human_ social platform out there. Right now, Facebook,
| TikTok, Reddit, etc. are all rife with bots, spam, and generative
| AI junk. Finding good content in this sea of noise is becoming
| increasingly difficult. A social media platform that uses AI to
| filter out spam, bots, and other AI with the goal of making human
| content easy to access might really catch on. Set a thief to
| catch thieves.
|
| Who are we kidding. It's going to be Will Smith eating spaghetti
| all the way down.
| add-sub-mul-div wrote:
| No, nothing of value. If you ever want to lose faith in the
| future of humanity search "@grok" on Twitter and look at all
| the interactions people have with it. Just total infantilism,
| people needing tl;drs spoon-fed to them, needing summarization
| and one-word answers because they don't want to read, arguing
| with it or whining to Musk if they don't get the answer they
| want to confirm what they already believe.
| Centigonal wrote:
| the worst is like a dozen people in the replies to a post
| asking Grok the exact same obvious follow-up question.
| Somehow, having access to an LLM has completely annihilated
| these commenters' ability to scroll down 50 pixels.
| rudedogg wrote:
| I bookmarked this example where it is confidently incorrect
| about a movie frame/screenshot:
|
| https://x.com/Pee159604/status/1909445730697462080
| exodust wrote:
| Your example doesn't appear to contain a reply from grok,
| only a question.
| input_sh wrote:
| It does, you just can't see it without logging in because
| Twitter is shit now.
|
| https://xcancel.com/Pee159604/status/1909445730697462080
| golergka wrote:
| > people needing tl;drs spoon-fed to them, needing
| summarization and one-word answers because they don't want to
| read
|
| It's bad that this need exists. However, introducing this
| feature did not create the need. And if this need exists,
| fulfilling it is still better, because otherwise these kinds
| of people wouldn't get this information at all.
| _Algernon_ wrote:
| This is worse because the AI slop is full of hallucinations
| which they will now confidently parrot. No way in hell does
| this type of person verify or even think critically about
| what the LLMs tell them. No information is better than bad
| information. Less information while practicing the ability
| to critically use it is better than bad information in
| excess.
| a_bonobo wrote:
| All decent people I know have deleted their Twitter accounts
| - the kind of people you now see on twitter in the mentions
| are... not good people.
| exodust wrote:
| > _needing summarization_
|
| Before we get too excited with disparaging those seeking
| summaries, it's common for people of all levels to want
| summary information. It doesn't mean they want everything
| summarized or are bad people.
|
| I'm not particularly interested in "tariffs, what are they
| good for, what's the history and examples good or bad"... so
| I asked for a summary from grok. It gave me a decent summary.
| Concise and structured. I asked a few follow-ups, then went
| on with my life knowing a little more than nothing about
| tariffs. A win for summarized information.
| huvarda wrote:
| "@gork explain this tweet"
| ein0p wrote:
| You can also get Grok to fact-check bullshit by tagging @grok
| and asking it a question about a post. Unfortunately this is
| not realtime as it can sometimes take up to an hour to respond,
| but I've found it to be pretty level headed in its responses. I
| use this feature often.
| exodust wrote:
| True. I see that too. It's a good addition to community
| notes. It can correctly evaluate "partially true" posts and
| those lacking details, so it's great at spotting cherry-
| picked information.
| dom96 wrote:
| Why would AI be any better at filtering out spam than
| developers have so far been with ML?
|
| The only way to avoid spam is to actually make a social network
| for humans, and the only way to do so is to verify each account
| belongs to a single human. The only way I've found that this
| can be done is by using passports[0].
|
| 0 - https://onlyhumanhub.com
| omneity wrote:
| How do you handle binationals who might not have the same
| details (or even name) on each of their passports?
| sampullman wrote:
| You can always get around identification requirements, for
| example by purchasing a fake passport in this case. The
| idea is to increase the cost/friction of doing so as much
| as possible.
|
| A fake ID is a lot harder to get your hands on than a new
| email, burner phone, etc.
| dom96 wrote:
| One passport = one human
|
| Yes, this does mean that dual nationals can have two
| separate human accounts. But it's still better than an
| infinite number of accounts, which is the case for social
| networks right now.
| dayvigo wrote:
| So you have to just trust them to permanently delete the data
| after verifying you?
| edaemon wrote:
| That's interesting. Is there a social network where you can
| only connect with people you meet in real life?
| kikoreis wrote:
| (Stretching the definition of a social network.)
|
| Not strictly, but Debian comes close: member inclusion is done
| through an in-person chain-of-trust process, so you have
| clusters of people who know each other offline as a basis.
|
| Also, most WhatsApp contacts have been exchanged IRL, I
| presume.
| _heimdall wrote:
| I've never been comfortable with this idea that people should
| use their real identity online. Sure they can if they choose
| to, but IMO it absolutely shouldn't be required or expected.
|
| The idea that I would give a copy of my passport to a social
| media company just to sign up, and that the social media
| company has access to verify the validity of the passport
| with the issuing government, just feels very wrong to me.
| dom96 wrote:
| I agree. That's why onlyhumanhub doesn't expect you to
| share your name. The passport verification is there to
| ensure you are a unique human, but the name of that human
| is not stored.
|
| I'm perfectly happy talking to someone without knowing
| their real name. I just want to be more confident that
| they're a unique human, and not just another sock puppet
| account run by some Russian agent (or evil corporation)
| trying to change people's beliefs at scale.
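|
| For illustration only, a minimal sketch of that uniqueness check
| (an assumption about the general approach, not how onlyhumanhub
| actually implements it): keep a salted hash of the passport's
| document number so the same passport can't register twice, while
| never storing the holder's name.
|
|     import hashlib
|     import hmac
|
|     # Server-side secret so leaked fingerprints can't be brute-forced
|     # from known passport-number formats (assumed design choice).
|     SERVER_SECRET = b"replace-with-a-long-random-secret"
|
|     seen: set[str] = set()  # stand-in for a database table
|
|     def fingerprint(issuing_country: str, document_number: str) -> str:
|         """Stable, non-reversible fingerprint of one passport."""
|         material = f"{issuing_country}:{document_number}".encode()
|         return hmac.new(SERVER_SECRET, material, hashlib.sha256).hexdigest()
|
|     def register(issuing_country: str, document_number: str) -> bool:
|         """True if this passport hasn't been used for an account yet."""
|         fp = fingerprint(issuing_country, document_number)
|         if fp in seen:
|             return False      # one passport = one account
|         seen.add(fp)          # only the fingerprint is kept, never the name
|         return True
|
|     assert register("NL", "X1234567") is True   # first sign-up passes
|     assert register("NL", "X1234567") is False  # same passport is rejected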
| TheOtherHobbes wrote:
| An interesting use for AI right now would be using it as a
| gatekeeping filter, selecting social media for quality based on
| customisable definitions of quality.
|
| Using it as a filter instead of a generator would provide
| information about which content has real social value, which
| content doesn't, and what the many dimensions of "value" are.
|
| The current maximalist "Use AI to generate as much as possible"
| trend is the opposite of social intelligence.
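|
| A rough sketch of what such a filter could look like, assuming the
| OpenAI Python client; the prompt, model name, and threshold are
| illustrative guesses, not a real product design:
|
|     from openai import OpenAI  # assumes the official OpenAI client
|
|     client = OpenAI()  # reads OPENAI_API_KEY from the environment
|
|     def quality_score(post: str, rubric: str) -> int:
|         """Ask the model to rate a post 0-10 against a user-chosen rubric."""
|         resp = client.chat.completions.create(
|             model="gpt-4o-mini",  # assumed model name
|             messages=[
|                 {"role": "system",
|                  "content": "You score social media posts. "
|                             f"Rubric: {rubric} "
|                             "Reply with a single integer from 0 to 10."},
|                 {"role": "user", "content": post},
|             ],
|         )
|         # Sketch assumes the model complies and returns a bare integer.
|         return int(resp.choices[0].message.content.strip())
|
|     def filter_feed(posts: list[str], rubric: str, threshold: int = 6) -> list[str]:
|         """Keep only posts that clear the reader's personal quality bar."""
|         return [p for p in posts if quality_score(p, rubric) >= threshold]
|
| The point being that each reader supplies their own rubric, rather
| than one global ranking deciding what "quality" means.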
| falcor84 wrote:
| It's a nice idea in principle, but would probably immediately
| become a way for the admins to promote some views and
| discourage others with the excuse of some opinions being of
| lower quality.
| _Algernon_ wrote:
| That's what moderation is and is perfectly fine. Dang does
| that here on HN and for good reason.
| numpad0 wrote:
| It's not moderation, for one thing it never will be used
| with moderation.
| petesergeant wrote:
| I think that's right. Twitter without ads, showing you
| content you _do_ want to see using some embeddings magic,
| with decent blocking mechanisms, and not being run as a
| personal mouthpiece by the world's most unpopular man ...
| certainly not the worst idea.
| timeon wrote:
| > This would be a decent PR stunt, but would such a platform
| offer anything of value?
|
| Like all those start-ups that are on the 'mission' to save the
| world with an app. Not sure if it is PR for users or VCs.
| ceroxylon wrote:
| Sam's last social media project included users verifying their
| humanity, so there is hope that something like that slips into
| the new platform.
| numpad0 wrote:
| A 4chan where images can be prompt-generated? Makes sense.
| Everything's going back to the early 2000s, it seems.
| kittikitti wrote:
| I would try to make a platform like Deviantart or Tumblr except
| OpenAI pays you to make good content that the AI is trained on.
| malux85 wrote:
| Nice in theory but don't know how practical it is to actually
| do.
|
| How do you define "good"? There are obvious examples at the
| extremes but a chasm of ambiguity between them.
|
| How do you compute value? If an AI takes 200 million images to
| train, wait let me write that out to get a better sense of the
| number:
|
| 200,000,000
|
| Then what is the value of 1 image to it? Is it worth the 3
| hours of human labour time put into creating it? Is it worth 1
| hour of human labour time? Even at minimum wage? No, right?
| paxys wrote:
| You really think an OpenAI-sponsored social network is going to
| attract people who create and share original content?
| pjc50 wrote:
| How do you stop people gaming this by feeding it the output of
| other AIs?
|
| (not to mention defining "good")
| blitzar wrote:
| This is just part of the ongoing feud between Sama and Musk.
| andrewstuart wrote:
| They should use their resources to make OpenAI good at coding.
| siva7 wrote:
| Sam got a jawline lift, anyone noticed?
| dlivingston wrote:
| Did he? Flipping back and forth between old vs. new photos of
| him, his facial structure seems roughly the same.
| beeflet wrote:
| Yes, I've been cataloging the mewing and lookmaxxing progress
| of hundreds of public figures
| labrador wrote:
| It'd be cool to see Google+ resurrected with OpenAI branding.
| Google+ was actually a pretty well designed social network
| WJW wrote:
| Not well designed enough to live, though.
| AlienRobot wrote:
| Not well designed to live under Google*
|
| Tumblr is still alive. LiveJournal is still alive. Newgrounds
| is still alive and Flash doesn't even exist anymore.
| int_19h wrote:
| It doesn't matter how well-designed it is if people aren't
| there. Social graph lock-in is the single biggest issue with
| any contender.
| bluetux01 wrote:
| that would be cool, google+ was very unique and i was kinda sad
| google killed it off
| swyx wrote:
| what did you like about it?
| labrador wrote:
| I liked the UX. I liked Circles. There were other nice
| options that I can't remember but I thought Google+ was a big
| improvement over Facebook.
| piva00 wrote:
| I don't believe it was well designed; it felt clunky to use,
| and concepts weren't intuitive enough to understand after a few
| uses.
|
| I tried to use it for a few months after release, always got
| frustrated to the point I didn't feel like reaching out to
| friends to be part of it.
|
| The absurd annoyance of its marketing, pushing it into every
| nook and cranny of Google's products was the nail in the
| coffin. I'm starting to feel as annoyed by the push with
| Gemini, it just keeps popping up at annoying times when I want
| to do my work.
| chazeon wrote:
| I think a social network is not necessarily a timeline-based
| product, but an LLM-native/enabled group chat can probably be a
| very interesting product. Remember, ChatGPT itself is already a
| chat.
| sho_hn wrote:
| What's a "LLM-native/enabled group chat"?
| simple10 wrote:
| Telegram and slack bots are probably the best example so far.
| Bot gets added to a chat and can respond when mentioned in
| the group chat.
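|
| The core pattern is small (a sketch only; the chat-platform
| plumbing is stubbed out and the model name is an assumption):
|
|     from openai import OpenAI
|
|     client = OpenAI()
|     BOT_HANDLE = "@assistant"  # hypothetical mention handle
|
|     def maybe_reply(history: list[dict], sender: str, text: str) -> str | None:
|         """Reply only when the bot is mentioned; otherwise stay silent."""
|         history.append({"role": "user", "content": f"{sender}: {text}"})
|         if BOT_HANDLE not in text:
|             return None  # bot lurks silently in the group chat
|         resp = client.chat.completions.create(
|             model="gpt-4o-mini",  # assumed model name
|             messages=[{"role": "system",
|                        "content": "You are a helpful bot in a group chat. "
|                                   "Messages are prefixed with the sender's "
|                                   "name."}] + history,
|         )
|         reply = resp.choices[0].message.content
|         history.append({"role": "assistant", "content": reply})
|         return reply
|
| The Slack/Telegram handler would call maybe_reply() for every
| incoming group message and post whatever it returns.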
| sho_hn wrote:
| Gotcha, the NLP-enabled version of the good old IRC
| weatherbot.
|
| For a moment I had a funnier mental image of a chat app
| with an input field that treats every input as a prompt,
| and everyone's chatting through the veil of an LLM
| verbosity filter.
|
| There might be something chat RPG-like there worth trying
| though ...
| simple10 wrote:
| Yes, this. That's my bet if OpenAI follows through with social
| features.
|
| Extend ChatGPT to allow multiple people / friends to interact
| with the bot and each other. Would be an interesting UX challenge
| if they're able to pull it off. I frequently share chats from
| other platforms, but typically those platforms don't allow
| actual collaboration and instead clone the chat for the people
| I shared it with.
| thomasfromcdnjs wrote:
| I am building this with a team currently and we are launching
| in a couple days.
|
| Would love an alpha tester or two if anyone wants to test it.
|
| My email/twitter is in my profile, shoot me a message and I
| will be in touch.
| sdwr wrote:
| Yeah, the dream is the AI facilitating "organic" human
| connection
| candiddevmike wrote:
| What else are they going to spend billions on to turn a profit?
| grg0 wrote:
| I don't know, but a weight bench goes under $200 and Sam needs
| some chest gains fast.
| swyx wrote:
| https://web.archive.org/web/20250415160251/https://www.theve...
| outside1234 wrote:
| Aren't they unprofitable enough already?
| pedalpete wrote:
| https://archive.is/qU4am
| pontus wrote:
| Is this just a data play? Need more data. Start a social network.
| Own said data.
| sva_ wrote:
| I think it's more likely that they're desperate to find a
| profitable business model.
| 000ooo000 wrote:
| Seems telling that an org has arguably the leading AI, as the
| planet knows it at least, and still can't exist without
| putting ads in front of eyes. So much for the hype.
| guywithahat wrote:
| Honestly I wonder if it's because Altman loves X and is
| threatened by Grok
| prvc wrote:
| Is making yet another twitter clone really the way to build a
| path towards super-intelligence? A worthy use of the
| organization's talent?
| arcatech wrote:
| Collecting millions of people's thoughts and interactions with
| each other IS probably on the path to better LLMs at least.
| sho_hn wrote:
| I'd love for my agents to be created in the image of
| humanity's best side, its interactions on social media.
|
| Perhaps then we can all let LLMs take care of tweeting
| outrage for us, and go outside to find each other rolling
| around on the grass.
| blitzar wrote:
| Another Twitter clone will help the decline of human
| intelligence; the dumber humans are, the smarter the AI appears.
| philipov wrote:
| Imagine that, a social network where _all_ of the participants
| are bots.
| tiffanyh wrote:
| My guess ... it's probably less of a "social network" and more of
| a "they are trying to build a destination (portal) where users go
| to daily".
|
| E.g. old days of Yahoo (portal)
| sho_hn wrote:
| They just want the next wave of Ghibli meme clicks to go to
| them, really.
|
| This will be built on the existing thread+share infra ChatGPT
| already has, and just allow profiles to cross-post into
| conversations, with UI and features more geared toward remixing
| each other's images.
| herpdyderp wrote:
| That was my thought: a meme-sharing platform.
| beepbopboopp wrote:
| The answer seems more obvious to me. They don't even care if it's
| competitive or scales too much. xAI has a crazy data advantage
| firehosing Twitter, Llama has FB/IG, and ChatGPT just has, well,
| the internet.
|
| I'd hope they have some clever scheme to acquire users, but
| ultimately they want the data.
| latency-guy2 wrote:
| I actually would love this. I hate having to go to another
| website to share some thoughts I had using tools in a platform.
|
| I miss the days when experiences would actually choose to
| integrate other platforms into their experiences, yes I was
| sort of a fan of the FB/Google share button and Twitter side
| feed (not the tracking bits though).
|
| I wasn't a fan of LLMs and the whole chat experience a few years
| ago. I'm a very mild convert now with the latest models and I'm
| getting some nominal benefit, so I would love to have some kind
| of shared chat session to brainstorm, e.g. on a platform
| better than Figma.
|
| The one integration of AI that I think is actually neat is
| Teams + AI note-taking. It's still hit or miss a lot of the
| time, but it at least saves and notes something important 30%
| of the time.
|
| Collaboration enhancements would be a wonderful outcome in
| place of AGI.
| mushufasa wrote:
| Sounds like they are thinking about instagram, which originated
| as a phone app to apply filters to a camera and share with
| friends (like texting or emailing them or sending them a link to
| a hosted page), and evolved into a social network. Their new
| image generation feature has enough people organically sharing
| content that they probably are thinking about hosting that
| content on pages, then adding permissions + follow features to
| all of their existing users' accounts.
|
| honestly it's not a terrible idea. it may be a distraction from
| their core purpose, but it's probably something they can test and
| learn from within a ~90 day cycle.
| CharlieDigital wrote:
| Sounds like some crossover with Civit.ai
| clonedhuman wrote:
| AI bots already make up a significant percentage of users on most
| social networks. Might as well just take the mask off completely
| --soon, we'll all be having conversations (arguments, most
| likely) with 'users' with no real human anywhere near them.
| api wrote:
| I've been saying for a while that the next innovation beyond
| TikTok, Instagram, and YouTube is to get rid of human creators
| entirely. Just have a 100% AI-generated slop-feed tailor-made
| for the user.
|
| There's already a ton of AI slop on those platforms, so we're
| like halfway there, but what I mean is eliminating the entire
| idea of humans submitting content. Just never-ending hypnotic
| slop guided by engagement maximizing algorithms.
| janalsncm wrote:
| An idea which sounds horrifying but would probably be pretty
| popular: a Facebook-like feed where all of your "friends" are
| bots and give you instant gratification, praise, and support no
| matter what you post. Solves the network effect because it scales
| from zero.
| samcgraw wrote:
| I'm sorry to say this exists: https://socialai.co
| einrealist wrote:
| This is Altman increasing the mass of the investment black hole
| that OpenAI is.
| paulvnickerson wrote:
| Sam Altman is retaliating against Musk for Grok and Musk's
| lawsuit against OpenAI, trying to ride the wave of anti-Musk
| political heat, and figuring out a way to pull in more training
| data due to copyright troubles.
|
| If they launch, expect a big splash with many claiming it is the
| X-killer (i.e. the same people that claimed the same of Mastodon,
| Threads, and Bluesky), especially around here at HN, and then
| nobody will talk about it anymore after a few months.
| AlienRobot wrote:
| Here's how to kill Twitter and Bluesky AND Mastodon:
|
| 1: use an LLM to extract the text from memes and relatable
| comics.
|
| 2: use an LLM to extract the transcriptions of videos.
|
| 3: use an LLM to censor all political speech.
|
| OpenAI, I believe in you. You can do it. Save the Internet.
|
| If you can clean my FYP of current events I'll join your social
| media before you can ask a GPT how to get more users.
| paulvnickerson wrote:
| ^-- This comment proves my point.
| terminatornet wrote:
| computer: show me jingling keys
| zombiwoof wrote:
| Just use an LLM to verify only humans are on the social
| network and no bots and you win
| mcmcmc wrote:
| > 3: use an LLM to censor all political speech.
|
| And who gets to decide what is political? Are human rights
| political? Is a trans person merely existing political? Is
| calling for genocide political?
| AlienRobot wrote:
| The LLM decides it. That's what the AI is for.
|
| There is a lot of stuff on the Internet, so I think the AI
| can just censor 80% of it and we're still going to have
| enough left for a social media site.
| pjc50 wrote:
| Really saying "please, let the robot run humanity", huh.
| mcmcmc wrote:
| LLMs are guessing machines, they don't "decide" anything.
| It would be decided by the people programming it and
| putting in alignment guardrails.
| pfraze wrote:
| wonder if the LLM would censor this post
| AlienRobot wrote:
| If it works the way I want it absolutely should. Mentioning
| politics is political content.
| numpad0 wrote:
| Not-exactly-devil's-advocate: you're trying to sort content
| by quality. That's elitist. Also, those filtered contents are
| worth more. You can't have only premium content.
|
| Someone should do it anyway and make it dominant ASAP.
| frabona wrote:
| Feels like a natural next step, honestly. If they already have
| users generating tons of content via ChatGPT, hosting it natively
| and adding light social features might just be a way to keep
| people engaged and coming back. Not sure if it's meant to compete
| with Twitter/Instagram, or just quietly become another daily
| habit for users
| pclmulqdq wrote:
| This would be a natural step if it were 2010. In 2025, it
| sounds like a lack of imagination to me.
| 9rx wrote:
| Perhaps, but currently OpenAI is stuck sharecropping with
| existing social networks - producing the content but not
| deriving the value. It is hard to move grander visions
| forward when you don't own the land.
| rnotaro wrote:
| FYI there's already an (early) social feed in OpenAI Sora.
|
| https://sora.com/explore?type=videos
| Nijikokun wrote:
| ngl building a social network isn't hard, getting people to use a
| social network is the hard part
| abc-1 wrote:
| Hahaha they're cooked. GPT 4.5 was a massive flop. GPT 4.1 is
| barely an improvement after over a year. Now they're grasping at
| straws. Anyone actually in this field who wasn't a grifter knew
| improvements are sigmoidal.
|
| All the original talent has already left too.
| Apocryphon wrote:
| We've got gen AI now and no ZIRP, yet this is all they can think
| of. Web 2.0 will never die.
| basisword wrote:
| I can't think of anything less appealing or interesting. AI
| content as a destination has zero appeal.
| randomor wrote:
| Controversial opinion: it's not about the generator of the
| content, human or not, but about the originality of the content
| itself. Humans with the help of AI will generate more good-quality
| content as a result.
|
| Humans are just as good as bots at generating rubbish content, if
| not more so.
|
| Twitter reduced content production cost significantly, AI can
| take it another step down.
|
| At minimum, a social network where people share good prompt
| engineering techniques will be valuable to people who are on the
| hunt for prompts. Just like the Midjourney website, except
| creating a high-quality image is no longer a trip to the beach
| but a thought experiment. This will also significantly cut down
| the cold-start friction, and in combination with some free
| credits, people may have more reasons to stay, as the current
| chat-based business model may reach its limit for revenue
| generation and retention, since it's just single-player mode.
| godelski wrote:
| > but about the originality of the content itself
|
| Your metric is too ill-defined. Here, have some highly unique
| content gZbDrttzP6mQC5PoKXY2JNd9VIIxBUsV
| ClRF73KITgz5DVnSO0YUxMB6o7P9gh8I
| 1ttcQiNdQuIs4axdAJvjaFXXkxq0EvGq
| Pd0qwVWgSvaPw8volLA0SWltnqcCNJiy
|
| If we need unique valid human language outputs I'll still
| disagree. Most human output is garbage. Good luck on your two
| tasks: 1) searching for high quality content 2) de-duplicating.
| Both are still open problems and we're pretty bad at both. De-
| duping images is still a tough task, before we even begin to
| address the problem of semantic de-duplication.
| visarga wrote:
| The idea is to let humans be humans, make a mess, debate,
| have their opinions, and AI comes after that and removes the
| herp derp from the useful parts.
|
| As a proof of concept, copy-paste this whole page, put it in an
| LLM and ask for an article. It will come out without junk,
| but will reflect a greater diversity of opinion, more
| arguments, will do debunking, and generally have better
| grounding in our positions than the original content.
|
| So it's careful synthesis over human chats that is the end
| value. Humans provide that novelty and lived experience LLMs
| lack, LLMs provide consistent formatting and synthesis. The
| companies that understand that users are the source of
| entropy and novelty will stop trying to own the model and
| start trying to host the question.
|
| Wondering why reddit doesn't generate thousands of articles
| per day from comment pages. It would crush traditional media
| in both diversity and quality. It would follow the
| interesting topics naturally.
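|
| The experiment above is a few lines if you script it (sketch only;
| model name assumed):
|
|     from openai import OpenAI
|
|     client = OpenAI()
|
|     def thread_to_article(comment_page: str) -> str:
|         """Synthesize one article out of a messy comment thread."""
|         resp = client.chat.completions.create(
|             model="gpt-4o-mini",  # assumed
|             messages=[
|                 {"role": "system",
|                  "content": "Turn this comment thread into a balanced "
|                             "article. Keep the strongest arguments from "
|                             "every side and drop the noise."},
|                 {"role": "user", "content": comment_page},
|             ],
|         )
|         return resp.choices[0].message.content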
| plaidfuji wrote:
| This is why LLMs are so helpful in writing research
| proposals. If I'm writing a proposal, I basically need to
| vibe on a concept without worrying about it sounding
| entirely professional / buttoned-up and concise - otherwise
| I get bogged down in word smithing. So I tell it about the
| concept I'm trying to get across, and it converts that into
| a more concise and punchy paragraph, while also providing
| critical technical feedback (if I ask) and making sure I'm
| using terms of art correctly. As you put so eloquently, it
| removes the herp derp.
|
| And then, kind of like how LLMs are good for boilerplate
| code but get less helpful as you get to more specific
| problems, this approach works for the first few paragraphs
| but deteriorates the deeper in you get.
| gorgoiler wrote:
| The analogy is with Iain Banks' _The Culture_.
|
| Anyone can be anything and do anything they want in an abundant,
| machine assisted world. The connections, cliques, friends and
| network you cultivate are more important than ever before if you
| want to be heard above the noise. Sheer talent has long fallen by
| the wayside as a differentiator.
|
| ...or alternatively it's not _The Culture_ at all. Is live
| performance the new, ahem, rock star career? In fifty years time
| all the lawyers and engineers and bankers will be working two
| jobs for minimum wage. The real high earners will be the ones who
| can deliver live, unassisted art that showcases their skills with
| instruments and their voice.
|
| Those who are truly passionate about the law will only be able to
| pursue it as a barely-living-wage hobby while being advised to
| "not give up the night job" -- their main, stable source of
| income -- as a cabaret singer. They might be a journalist or a
| programmer in their twenties for fun before economics forces them
| to settle down and get a real, stable job: starting a rock band.
| comrade1234 wrote:
| Naah... in the Culture you could change your sex at will,
| something soon to be illegal.
| retransmitfrom wrote:
| The Culture is about a post-capitalist utopia. You're
| describing yet another cyberpunk-esque world where people
| still have to do wage labor to not starve.
| gorgoiler wrote:
| You're right so I made a slight edit to separate my two
| ideas. Thanks for even reading them at all! I try to
| contribute positively to this site when I can, and riffing on
| the overlap between fiction and real-life -- _a la Doctorow_
| -- seems like a good way to be curious.
| idiotsecant wrote:
| The culture presents such a tempting world view for the type of
| people who populate HN.
|
| I've transitioned from strongly actually believing that such a
| thing was possible to strongly believing that we will destroy
| ourselves with AI long before we get there.
|
| I don't even think it'll be from terminators and nuclear wars
| and that sort of thing. I think it will come wrapped in a
| hyper-specific personalized emotional intelligence, tuned to
| find the chinks in our memetic firewalls _just_ so. It'll sell
| us supplements and personalized media and politicians and we'll
| feel enormously emotionally satisfied the whole time.
| t0lo wrote:
| That's why it's so important to reduce all of your personal
| data points online. Imagine what they can reconstruct based
| on their modeling and comparing you to similar users. I have
| 60 years of involuntary data collection ahead of me. This is
| not going to be fun.
| mrandish wrote:
| Brings to mind the title of a Roger Waters album:
|
| "Amused to Death"
|
| Great title and an even better album.
| https://en.wikipedia.org/wiki/Amused_to_Death
| latexr wrote:
| First a book.
|
| https://en.wikipedia.org/wiki/Amusing_Ourselves_to_Death
| overfeed wrote:
| > It'll sell us supplements and personalized media and
| politicians and we'll feel enormously emotionally satisfied
| the whole time.
|
| Which is why we'll need to acquire the drug gland technology
| before AGI - no mind can sell me anything if I can feel
| content on demand.
| wongarsu wrote:
| Alternatively they can sell you anything if they can make
| you feel content or euphoric on their command. Get your new
| drug gland today, free of charge, sponsored by Blackrock
| lucianbr wrote:
| A brave new world of AI, one might say.
| baq wrote:
| > I've transitioned from strongly actually believing that
| such a thing was possible to strongly believing that we will
| destroy ourselves with AI long before we get there.
|
| I think we'll just die out. Everyone will be too busy having
| fun to have kids. It's already started in the West.
| Nursie wrote:
| While the west has gone this way, there also seems to be a
| strong undercurrent (at least here in Australia) of "we
| can't afford to have kids (yet)".
|
| As housing has moved further out of reach of young people,
| some don't seem to feel their lives are stable enough to
| make the leap. The trend was down anyway, but the housing
| crisis seems to be an aggravating factor.
| Urahandystar wrote:
| Yep, people blame technology but it's really basic
| economics and would be fixed by building more affordable
| homes.
| munksbeer wrote:
| This subject has been investigated a lot. In many
| countries governments make it much easier and more
| affordable to have children, and it doesn't seem to make
| any difference.
| baq wrote:
| I agree, using housing as a source of wealth has broken a
| whole generation. When the boomers of the world start to
| massively die out (any year now), housing will deflate,
| but not spectacularly without a crisis (people don't want
| to settle where the cheap houses are in a bull market).
| bravetraveler wrote:
| I wouldn't call my kid-skipping activities fun, but go off.
|
| Spending a life on the treadmill doesn't encourage more
| walks. It encourages burning it down. All I've known is
| work. Pass on more, thanks. I hear you/others now:
| But past generations managed...
|
| That's exactly my point. Despite all of our proclaimed
| progress, we're still _"managing"_. Maintaining this
| circus/baby-crushing machine is a tough sell.
|
| To get where I could afford to have kids, I became both
| unprepared and uninterested.
|
| What's more: I'm one of the lucky ones. I was given a fancy
| title and great-but-not-domicile-ownership-great pay for my
| sacrifice. Plenty do more work for _even less_ reward.
|
| There's a sucker born every minute, right?
| Lio wrote:
| That'll be great for the world's natural outsiders. Those
| that hate pop music and dislike even tailored ads because of
| the creepy feeling of influence. Or who don't follow any
| politicians because they're all out to hoodwink you.
|
| Oh, a subset will be at risk of being artificially satisfied
| but your hardcore grouch will always have a special "yeah,
| yeah, fuck off bot" attitude.
| immibis wrote:
| Hasn't that already happened?
| munksbeer wrote:
| > I don't even think it'll be from terminators and nuclear
| wars and that sort of thing
|
| I do. And I don't even think the issue is a hostile AI. There
| are 8 billion people in the world. Millions of those people
| have severe mental issues and would destroy the world if they
| could. It seems highly likely to me that AI will eventually
| give at least one of those people the means.
| Tryk wrote:
| This is explored in the Vulnerable World Hypothesis argument
| by Nick Bostrom [1][2]
|
| [1]
| https://en.wikipedia.org/wiki/Vulnerable_world_hypothesis
|
| [2] https://doi.org/10.1111%2F1758-5899.12718
| dsign wrote:
| There is a bias there in action: we are assuming that the
| entire world is like this thing we just happen to be thinking
| about.
|
| It is not.
|
| Even if it were just a minority, there are plenty of people
| outside "this thing" that will profit from the ((putative)
| majority's) anesthesia. Or which at least will try to set the
| world on fire (anybody remember the elections in USA a few
| months ago? That was really dumb. But sometimes a dumb feat
| shows that one is alive, which is better than doing nothing
| and being taken for dead. Or it is at least good-enough
| peacocking to attract mates and pass on the genes, which is
| just an extravagant theory of mine that I'm almost certainly
| sure is false. And do not take this as an endorsement of
| DJT). I'm not being an optimist here; I've seen firsthand the
| result of revolutions, but it may be the least-bad outcome.
| Nursie wrote:
| > The real high earners will be the ones who can deliver live,
| unassisted art that showcases their skills with instruments and
| their voice.
|
| We already have so many of those that it's very hard to make
| any sort of living at it. Very hard to see a world in which
| more people go into that market and can earn a living as
| anything other than a fantasy.
|
| Cynically - I think we'd probably end up with more influencers,
| people who are young, good looking and/or charismatic enough to
| hold the attention of other people for long enough to sell them
| something.
| ur-whale wrote:
| > Those who are truly passionate about the law will only be
| able to pursue it as a barely-living-wage hobby while being
| advised to "not give up the night job" -- their main, stable
| source of income -- as a cabaret singer. They might be a
| journalist or a programmer in their twenties for fun before
| economics forces them to settle down and get a real, stable
| job: starting a rock band.
|
| Controversial stance probably, but this very much sounds like a
| world I'd love to live in.
| tomrod wrote:
| Maybe, but Substack is building a much more engaging social
| network. I'm frankly amazed at how good it is.
| shaftoe444 wrote:
| Logical conclusion of AI is to generate slop for slop feeds so
| why not own your own slop feed.
| rglover wrote:
| I speculated a ways back [1] that this was why Elon Musk bought
| Twitter. Not to "control the discourse" but to get unfettered
| access to real, live human thought that you can train an AI
| against.
|
| My guess is OpenAI has hit limits with "produced" content (e.g.,
| books, blog posts, etc.) and thinks it can fill in the gaps in
| the LLM's ability to "think" by leveraging raw, unpolished social
| data (and the social graph).
|
| [1] https://news.ycombinator.com/item?id=31397703
| godelski wrote:
| But collecting more data by itself is just naive scaling. The reason scale
| works is because of the way we typically scale. By collecting
| more data, we also tend to collect a wider variety of data and
| are able to also collect more good quality data. But that has
| serious limits. You can only do this so much before you become
| equivalent to the naive scaling method. You can prove this
| yourself fairly easily. Try to train a model on image
| classification and take one of your images and permute one
| pixel at a time. You can get a huge amount of scale out of this
| but your network won't increase in performance. It is actually
| likely to decrease.
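|
| To make that thought experiment concrete, a tiny sketch (numpy,
| RGB images assumed) of this kind of "scaling": lots of samples,
| almost no new information.
|
|     import numpy as np
|
|     def one_pixel_variants(image: np.ndarray, n: int) -> np.ndarray:
|         """Blow one image up into n 'new' samples, one changed pixel each."""
|         rng = np.random.default_rng(0)
|         h, w = image.shape[:2]
|         variants = np.repeat(image[None, ...], n, axis=0)
|         for i in range(n):
|             y, x = rng.integers(0, h), rng.integers(0, w)
|             variants[i, y, x] = rng.integers(0, 256, size=image.shape[2:])
|         return variants
|
|     img = np.zeros((32, 32, 3), dtype=np.uint8)
|     fake_dataset = one_pixel_variants(img, 10_000)
|     print(fake_dataset.shape)  # (10000, 32, 32, 3) -- 10,000x the "data"
|
| A classifier trained on this enlarged set sees essentially the same
| image 10,000 times, which is why raw sample count alone doesn't buy
| performance.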
| chewbacha wrote:
| If that were the case he (Musk) wouldn't have turned it into a
| Nazi-filled, red-pilled echo chamber.
| danity wrote:
| Just what the world needs, another social network!
| Duanemclemore wrote:
| I haven't been happier online in the last 10 years than after I
| stopped checking social media. And in that miserable time it
| wasn't even a naked beg for training data like this.
|
| But I really don't see why anyone would even use an open ai
| "social network" in the first place.
|
| It does allow one thing for OpenAI, other than training data
| (which admittedly will probably be pretty low quality): it is a
| natural venue for ad sales.
| Duanemclemore wrote:
| Oh I get one thing - other than ads. So the idea of an LLM
| filter to algorithmically tailor your own consumption has some
| utility.
|
| The logical application would be an existing social network
| -using- ChatGPT to do this.
|
| But all the existing ones have their own models, so if they
| can't plug in to an existing one like goooooogle did to yahoo
| in the olden days, they have to start their own.
|
| That makes a certain amount of (backward) sense for them. I
| don't think it'll work. But there's some logic if you're
| looking from -their- worldview.
| 8n4vidtmkvmk wrote:
| Isn't the selling point behind Bluesky that you can
| customize your feed your way? I don't know the tech behind
| that but the feed is "open", isn't it? Can they plug into
| that?
| rcpt wrote:
| The selling point of Bluesky is that you don't get
| bombarded with a mob of blue checks every time you post
| something political
| SecretDreams wrote:
| Social media is a plague, including LinkedIn. Anything that
| lets you follow others and/or erodes your anonymity is just
| different degrees of cancer waiting to happen.
|
| The best I ever enjoyed the internet was the sweet spot between
| dial up and DSL where I was gaming in text based/turn based
| games, talking on forums, and chatting using IRC.
| Duanemclemore wrote:
| Agreed. I wasn't particularly hooked, didn't use it very much
| already. As an architect, designer, and professor I had IG,
| and for the last five years basically only for work. But the
| feeling of freedom in its absence these past few months has
| been palpable.
|
| Early fb reconnecting with people I hadn't seen since high
| school was okay. The blog / Google Reader era happening at
| the same time was the real golden age for me. And it's been
| all downhill since.
| saltysalt wrote:
| Strongly agree. It's fascinating to me how faster broadband
| and selfie cameras led to more slop content.
| SecretDreams wrote:
| We can think of fast internet and phones as like
| supercharging progress. Except, in this case, it just
| accelerated how quickly humans ruin it.
| notyourwork wrote:
| Reducing the effort to produce content enabled a larger
| audience to contribute. This degrades average content.
| pjc50 wrote:
| The term "eternal september" dates back to the 90s,
| referring to the phenomenon where new undergraduates would
| arrive and suddenly have access to USENET to make bad
| posts.
| imafish wrote:
| Agreed. This is where HN and reddit still smells a little bit
| like the good old times ;)
| SecretDreams wrote:
| I use both, and even Reddit has issues. Likewise, HN is
| topically more like a single, insufficiently modded,
| subreddit. Still enjoyable, but easily brigaded on certain
| topics. This is most readily seen for anything political.
| throwaway743 wrote:
| This brings me back to the days of Nukezone.
| bufferoverflow wrote:
| LOL, you're on a social network right now. HN is one. Yeah,
| it's semi-anonymous, but there are many users with known names
| here.
| gloosx wrote:
| Wrong, a social network is centered around the concept of "you"
| and your "friends", where the content itself is not as
| important.
|
| There is no concept of "friends" on a forum like HN, since
| people purely gather to discuss topics of interest here.
| rchaud wrote:
| > Wrong, social network is centered around the concept of
| "you" and your "friends", where the content itself is not
| as important.
|
| This post was brought to you by the year 2012.
| croes wrote:
| But I can't follow them. I don't get notifications when they
| post new links or comments, I can't send them specifically my
| links and comments. I have no groups or circles.
|
| HN is more of a discussion forum and not for connecting with
| others.
| rchaud wrote:
| Nobody is "connecting" on social media anymore, they are
| just liking and commenting on whatever random thing the
| algo throws at them.
|
| You and I are doing the same thing, commenting on posts
| made by strangers.
| whiplash451 wrote:
| HN skips many of the dark patterns that other social media
| platforms have:
|
| - no infinite scroll (you have to click on "More")
| - no personal recommendation
| - no feedback loop between your upvotes and the feed
| - no messaging or following between users
|
| HN looks a lot more like news groups from back in the days.
| rchaud wrote:
| The comment threads are basically "infinite scroll".
| They're paginated once they hit 500 or something. Scanning
| a SM post takes 2 seconds, HN comments much longer.
| graemep wrote:
| It's not the pagination that matters, it is the lack of a
| feed that you can keep scrolling. The front page is
| paginated at 30.
|
| Longer comments are a good thing.
| interludead wrote:
| Stepping away from social media can feel like getting your
| brain back
| stef25 wrote:
| There's no better life hack
| amelius wrote:
| We should start a campaign, and everybody who hates sm can
| use the campaign's logo as their profile photo. Maybe it will
| catch on.
| latexr wrote:
| Profile photo for what? Social media? Then you need to be
| there, which is self-defeating. If you leave your account
| dormant, no one will find it anyway but the owners of the
| platforms will still use you to count the number of users.
| amelius wrote:
| I mean 90% of users want to quit, but can't. I'm just
| proposing a way to fight back.
|
| The more people change their profile photos into the
| campaign logo, the less attractive sm becomes. Maybe at
| some point even the very addicted users will quit.
| latexr wrote:
| > I mean 90% of users want to quit
|
| 90% is implausibly high. If that many people wanted to
| quit social media, social media would no longer exist.
|
| > I'm just proposing a way to fight back.
|
| I propose an alternative is to just quit yourself and get
| on with your life. As the people around you understand
| you are happy without social media, they too can become
| interested and quit. Social media only works while there
| are people on it. The fewer of us there are, the less
| interesting it is.
|
| How many times have there been campaigns to delete social
| media accounts? They never last, and often even the
| organisers come back. I don't see a reason why this time
| it would work.
|
| All that said, far from me to discourage you. If you feel
| it's a worthy goal, I encourage you to do it. I genuinely
| hope I'm wrong and that you'll succeed where others have
| failed.
| amelius wrote:
| > 90% is implausibly high. If that many people wanted to
| quit social media, social media would no longer exist.
|
| This is how addiction works. Most smokers want to quit,
| but can't, etc., etc. Cigarettes still exist.
| coldpie wrote:
| Social media is this generation's cigarettes. It feels good
| to use it for a bit, and there's an enormous amount of
| advertising for it and social pressure to use it, but it's
| extremely addictive and the long-term personal and public
| health consequences are absolutely crushing.
|
| I hope one day we can strictly regulate social media & make
| pariahs of the people who built it, as we did with tobacco.
| Instead, we just did the equivalent of handing the entire
| federal government into Phillip Morris's control, so my hopes
| are not high.
| timeon wrote:
| > I haven't been happier online in the last 10 years than after
| I stopped checking social media. And in that miserable time it
| wasn't even a naked beg for training data like this.
|
| Meta/Twitter/etc. are drug dealers.
|
| > But I really don't see why anyone would even use an open ai
| "social network" in the first place.
|
| I really don't see why anyone would even use heroin, yet they
| do.
| latexr wrote:
| I bet we could draw several parallels.
|
| "It feels good."
|
| "I can quit whenever I want."
|
| "I was on it the whole night instead of sleeping. I felt
| awful in the morning."
|
| "I can't stop. All my friends are on it and I don't want to
| be alone."
| throw_m239339 wrote:
| What would be the point? Why would it even need real members?
| paxys wrote:
| Ads
| uptownfunk wrote:
| It's all whatever will maximize valuation. They can do it until
| antitrust comes for them.
| lukev wrote:
| This kind of news should be a death-knell for OpenAI.
|
| If you've built your value on promising imminent AGI then this
| sort of thing is purely a distraction, and you wouldn't even be
| considering it... unless you knew you weren't about to shortly
| offer AGI.
| pyfon wrote:
| It's another Threads. How is that doing?
| Nuzzerino wrote:
| > If you've built your value on promising imminent AGI then
| this sort of thing is purely a distraction, and you wouldn't
| even be considering it... unless you knew you weren't about to
| shortly offer AGI.
|
| I'm not a big fan of OpenAI but this seems a little unfair.
| They have (or at least had) a pretty kick ass product. Great
| brand value too.
|
| Death-knell? Maybe... but I wouldn't read into it. I'd be
| looking more at their key employees leaving. That's what kills
| companies.
| smt88 wrote:
| - Product is not kickass. Hallucinations and cost limit its
| usefulness, and it's incinerating money. Prices are too high
| and need to go much higher to turn a profit.
|
| - Their brand value is terrible. Many people loathe AI for
| what it's going to do for jobs, and the people who like it
| are just as happy to use CoPilot or Cursor or Gemini.
| Frontier models are mostly fungible to consumers. No one is
| brand-loyal to OpenAI.
|
| - Many key employees have already left or been forced out.
| stef25 wrote:
| > Product is not kickass
|
| It might not be the best but people of whom you'd have
| never thought that they would use it, are using it. Many
| non technical people in my circle are all over it and they
| have never even heard of Claude. They also wouldn't
| understand why people would loathe AI because they simply
| don't understand those reasons.
| DJHenk wrote:
| Still, it burns money like nothing has in history.
| tokioyoyo wrote:
| My dad uses ChatGPT for some excel macros. He's ~70, and
| not really into tech news. Same with my mom, but for more
| casual stuff. You're underestimating how prevalent the
| usage is across "normies" who really don't care about
| second order effects in terms of employment and etc.
| OccamsMirror wrote:
| I loathe AI for what it's doing to the job market.
|
| But I'd be stupid not to use it. It has made boilerplate
| work so much easier. It's even added an interesting
| element where I use it to brainstorm.
|
| Even most haters will end up using it.
|
| I think eventually people will realize it's not replacing
| anyone directly and is really just a productivity
| multiplier. People were worried that email would remove
| jobs from the economy too.
|
| I'm not convinced our general AIs are going to get much
| better than they are now. It feels like where we are at
| with mobile phones.
| frm88 wrote:
| > People were worried that email would remove jobs from
| the economy too.
|
| And it did. Together with other technology, but yes:
|
| https://archive.is/20200118212150/https://www.wsj.com/articl...
|
| https://www.wsj.com/articles/the-vanishing-executive-assista...
| spacebanana7 wrote:
| > No one is brand-loyal to OpenAI.
|
| Sam Altman is incredibly popular with young people on
| TikTok. He cured homework - mostly for free - and has a
| nice haircut. Videos of him driving his McLaren have
| comment sections in near total agreement that he deserves
| it.
| pjc50 wrote:
| > He cured homework - mostly for free
|
| People argue about the damage COVID lockdowns did to
| education, but surely we're staring down the barrel of a
| bigger problem where people can go through school without
| ever having to think or work for themselves.
| Duralias wrote:
| Not entirely sure what they meant, but ChatGPT and the
| likes have forced schools to stop relying on homework for
| grades and instead shift to assignments done more like a
| mini exam. That's more work for the school, but you can't
| substitute ChatGPT for your own knowledge in such cases;
| you actually need to know the material to succeed.
| dminik wrote:
| Well, that may be, but it's entirely possible that this
| outlook might change.
|
| In the worst case he's poisoned an entire generation. If
| ChatGPT doctors, architects and engineers are anything
| like ChatGPT "vibe-coders" we're fucked.
| jappgar wrote:
| Consumers may not be brand loyal, but companies are.
|
| If you're doing deep deals with MSFT you're going to be
| strongly discouraged from using Gemini.
| smt88 wrote:
| Microsoft isn't even loyal to OpenAI! You can use
| multiple models in Azure and CoPilot
| Topfi wrote:
| Counterpoint, ChatGPT as a brand has insane mindshare and
| buy-in. It is synonymous with LLMs/"AI" in the mind of many
| and has broken through like few brands before it. That
| ain't nothing.
|
| Counter-Counterpoint, I still feel investors priced in a
| bit more than that. Yahoo! had major buy-in as well, and AGI
| believers were selling investors not just on the next
| Unicorn, but rather the next industry: AGI not being merely
| a Google in the 90s, but rather all of the internet and
| what that would become over the decades to this day.
| Anything less than delivering that is not exactly what a
| large part of investors bought. But then again, any bubble
| has to burst someday.
| latexr wrote:
| > I'm not a big fan of OpenAI but this seems a little unfair.
| They have (or at least had) a pretty kick ass product. Great
| brand value too.
|
| Even if you believe all that to be true, it in no way
| contradicts what you quoted or makes it unfair. Having a kick
| ass product and good brand awareness in no way correlates to
| being close to AGI.
| robotresearcher wrote:
| AGI is a technology or a feature, not a product. ChatGPT is a
| product. They need some more products to pay for one of the
| most expensive technologies ever (one that has yet to be delivered).
| leptons wrote:
| It's a shame that $1 trillion is being poured into AI so
| quickly, but fusion research has only seen a fraction of that
| over many decades.
| andsoitis wrote:
| Isn't that a reflection, to some (major?) extent, of the
| general sense of likelihood of breakthrough?
| xmprt wrote:
| I used to think like this but after seeing the amount of
| money invested into crypto companies which most average
| people could have quickly dismissed as irrelevant, I'm
| not sure VCs are a good judge of value.
| j_maffe wrote:
| No it's the _perception_ of likelihood of breakthrough
| caseyy wrote:
| It's why we have warnings to investors saying past
| performance is not an indicator of future returns.
| plorg wrote:
| Maybe it means investors think it will happen, maybe it
| means investors simply think it will be profitable. It
| could even be investors seeking a market that has juice
| long after ZIRP dried up.
| rchaud wrote:
| Capital gains on AI investment doesn't need a
| breakthrough. I'm sure pharma companies and others are
| building their own custom models to assist R&D, but those
| aren't the ones sinking the truly huge sums into
| consumer-facing LLMs.
| leptons wrote:
| We already know fusion exists and happens and has been
| happening for billions of years - it's well understood.
| It may not be simple to reproduce _and sustain_ on earth,
| but there is a fairly clear path to get there.
|
| There has never been an AGI, and there's no clear path to
| get there. And no, LLMs are not bringing us any closer to
| a real AGI, no matter how much Sam Altman wants to try to
| redefine what "AGI" means so that he can claim he has it.
|
| The difference between AGI funding and fusion funding is
| the people hawking AGI are better liars and the people
| funding are far more gullible than the people trying to
| make fusion power a reality.
| Nevermark wrote:
| We already know fusion has been a grind. For decades.
|
| We already know that computation has grown exponentially
| in scale and cost, for decades. With breakthrough
| software tech showing up reliably as it harnesses greater
| computational power, even if unpredictably and sparsely.
|
| If we view AI across decades, it has reliably improved.
| Even through its "winters". It just took time to hit
| thresholds of usefulness making relatively smooth
| progress look very random from a consumer's point of view.
|
| As computational power grows, and current models become
| more efficient, the hardware overhang for the next major
| step is growing rapidly.
|
| That just hasn't been the case for fusion.
| imafish wrote:
| Unfortunately VC investments rely on FOMO, and there is no
| current huge breakthrough in fusion research that they are
| afraid of missing out on.
| Tenoke wrote:
| Much, much less was being poured into AI until it started
| to have some returns.
| leptons wrote:
| What returns? OpenAI is still not profitable.
| csharpminor wrote:
| If you believe the AGI thesis, this is a stepping stone to
| unlocking fusion and other scientific advances.
| immibis wrote:
| Once we unlock fusion, humanity starts down a millennia-long
| path to making the planet permanently uninhabitable (not
| just temporarily like with climate change) by turning all
| the water into bitcoins.
| orbifold wrote:
| I don't think that is true, there is only so much heavy
| water on the planet and doing hydrogen fusion is not all
| that economical.
| bookofjoe wrote:
| Gemini 10.0
|
| Wait for it
| pjc50 wrote:
| If its proponents are taken at face value, AGI is a _threat_.
| parhamn wrote:
| There could be too-many-cooks in the AI research part of their
| work.
|
| Also, I don't think Sama thinks like a typical large org
| manager. OpenAI has enough money to have all sorts of
| products/labs that are startup-like. No reason to stand by
| waiting for the research work.
| smt88 wrote:
| Altman doesn't think like a typical large org manager because
| he's never successfully built or run one. He failed upward
| into this role.
|
| OpenAI doesn't have enough money to even run ChatGPT in
| perpetuity, so building internal moonshots is an
| irresponsible waste of investor funds.
| make3 wrote:
| this might just be a way to generate data
| ChuckMcM wrote:
| The alternative is that OpenAI is being quickly locked out of
| sources of human interaction because of competition; one way
| to "fix" that is to build your own meadow for data cows.
|
| xAI isn't allowing people to use the Twitter feed to train AI
|
| Google is keeping its properties for Gemini
|
| Microsoft, who presumably _could_ let OpenAI use its data
| fields, appears (publicly at least) to be in a love/hate
| relationship with OpenAI these days.
|
| So you plant a meadow of tasty human interaction morsels to get
| humans to sit around and munch on them while you hook up your
| milking machine to their data teats and start sucking data.
| Springtime wrote:
| They also have a contract with Reddit to train on user data
| (a common go-to source for finding non-spam search results).
| Unsure how many other official agreements they have vs just
| scraping.
| safety1st wrote:
| Good heavens, I'd think that if anything could turn an AI
| model into a misanthrope, it would be this.
| Springtime wrote:
| One distinctive quality I've observed with OpenAI's
| models (at least with the cheapest tiers of 3, 4, and o3)
| is their human-like face-saving when confronted with
| things they've answered incorrectly.
|
| Rather than directly admit fault, they'll regularly
| respond in subtle (more so o3) to not-so-subtle roundabout
| ways that deflect blame, even when it's an inarguable
| factual error about conceptually non-heated things like
| API methods.
|
| It's an annoying behavior of their models and in complete
| contrast to, say, Anthropic's Claude, which in my experience
| will immediately and directly admit to things it had
| responded to incorrectly when the user points it out
| (perhaps _too_ eagerly).
|
| I have wondered if this is something it's learned from
| training on places like Reddit, or if OpenAI deliberately
| taught it (or instructed it via system prompts) to seem
| more infallible, or if models like Claude were deliberately
| tuned to reduce that tendency.
| wodenokoto wrote:
| > It's an annoying behavior of their models and in
| complete contrast to say Anthropic's Claude which ime
| will immediately and directly admit to things it had
| responded incorrectly about when the user mentions it
|
| I don't know what's better here. ChatGPT did have a
| tendency to reply with things like "Oh, I'm sorry, you
| are right that x is wrong because of y. Instead of x, you
| should do x"
| cameronh90 wrote:
| _> Rather than directly admit fault they 'll regularly
| respond in subtle (moreso o3) to not so subtle roundabout
| ways that deflect blame rather than admit direct fault_
|
| Human-level AI is closer than I'd realised... at this
| rate it'll have a seat in the senate by 2030.
| spencerflem wrote:
| They are already passing ChatGPT-written laws
|
| https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/
| bayindirh wrote:
| They also have a syndication agreement with The Guardian [0].
| I lost all my respect for The Guardian after seeing this.
|
| [0]: https://www.theguardian.com/gnm-press-
| office/2025/feb/14/gua...
| KoolKat23 wrote:
| Why? I can't see any issues with this at all, whatsoever.
| bayindirh wrote:
| Everybody is free to have their own opinions. I don't
| like how AI companies, and mostly OpenAI, "fair-use" the
| whole internet, so there's that.
| dylan604 wrote:
| Again, as others have contested with you, how is this
| The Guardian's fault to have an issue with? They convinced
| ClosedAI to give them money in a licensing deal to use
| their content as training data, rather than having it
| scraped for free.
|
| Your sense of injustice or whatevs you want to call it is
| aimed in the opposite direction.
| KoolKat23 wrote:
| This isn't even a fair-use defence; they're paying to use
| it expressly for this purpose.
| teruakohatu wrote:
| Why shouldn't they license their content? It's theirs and
| it's a non-profit that needs revenue.
| bayindirh wrote:
| It's not that they shouldn't license their content. I'm
| not a fan of OpenAI and their "fair use" of things, to be
| honest.
| freedomben wrote:
| But this is the opposite of fair use. They're licensing
| the content, which means they're paying for it in some
| fashion, not just scraping it and calling it fair use.
|
| If you don't like the fair use of open information, I
| would expect you to be cheering this rather than losing
| respect for those involved.
| silisili wrote:
| I cannot think of a worse future for AI than parroting
| Reddit comments.
| lucianbr wrote:
| The assumption that you can just build a successful social
| network as an aside because you need access to data seems
| wildly optimistic. Next will be Netflix announcing working on
| AGI because lately show writers have been not very
| imaginative, and they need fresh content to keep subscribers.
| pjc50 wrote:
| Netflix will almost certainly try to push a vizslop or at
| least AI-written show at some point to see how bad the
| backlash is.
| bn-l wrote:
| There's a guy who does these weird alien talking head /
| vox pop videos on YouTube. They're pretty funny. I can
| see the potential.
| littlestymaar wrote:
| Yes, but this misses the point made by the comment above.
| Using AI in your workflow ≠ attempting to build AGI.
| pjc50 wrote:
| If you have AGI, what do you do with it in your workflow?
| Or is the question what does it do with _you_?
| DJHenk wrote:
| 'Wildly optimistic' is a very fitting categorization for
| Altman/OpenAI.
| scyzoryk_xyz wrote:
| Mmmm nice try hooking me up.
|
| Instead, I'm just going to hang out here in this hacker
| meadow and on FOSS social networks where something like that
| would _never_ happen!
| dylan604 wrote:
| Why oh why did I take the red pill? Plug me back in.
| baq wrote:
| > Microsoft, who presumably could let OpenAI use its data
| fields appears (publicly at least) to be in a love/hate
| relationship with OpenAI these days.
|
| sama probably would like to take Satya's seat for what he no
| doubt sees as unblocking the path to utopia. The slight
| problem is he's becoming a bit lonely in that thinking.
| freehorse wrote:
| I have always wondered whether such sociopaths actually
| believe they are doing something for noble reasons, or
| whether they consciously use it as a trick in their quest
| for power.
| bravetraveler wrote:
| Must consider the illusory truth effect
| fullshark wrote:
| Sociopathy isn't enough, you have to mix in some
| megalomania and that's the output
| anshumankmr wrote:
| But don't they have ChatGPT, the fifth or whatever most
| popular website on the planet? And deals with Reddit. Sure,
| that can't touch the treasure trove Google is sitting on, xAI
| sure won't give them access, and GitHub could perhaps sell
| their data (but that's a maybe).
| m_fayer wrote:
| I would just like to appreciate your imagery and wordplay
| here, it's spot on and I think should be our standard for
| conceptualizing this corporate behavior.
| everythingisfin wrote:
| If this were their plan, they'd be discounting that some of
| their users would be controlled by their own AI.
|
| My guess is that they're trying other things to diversify
| themselves and/or to try to keep investors interested.
| Whether or not it works is irrelevant as long as they can
| convince others it will increase their usage.
| everdrive wrote:
| I came across a quote in a forum which was part of a
| discussion around why corporate messaging and pandering has
| gotten so crazy lately. One comment stuck out as especially
| interesting, and I'll quote it in full below:
|
| ---
|
| C-suites are usually made up of really out-of-touch and weird
| workaholics, because that is what it takes to make it to the
| C-suite of a large company.
| service / software) from vendors, usually marketing groups,
| that basically tell them what is in and what isn't. Many
| marketing companies are now getting that data from twitter
| and reddit, and portraying it as the broad social trend. This
| is a problem because twitter and reddit are both extremely
| curated and censored, and the tone of the conversation there
| is really artificial, and can lead to really bad conclusions.
|
| ---
|
| This is only somewhat related, but if OpenAI did actually
| succeed in building their own successful social media
| platform (doubtful) they would be basing a lot of their model
| on whatever subset of people wanted to be part of the OpenAI
| social media platform. The opportunity for both mundane and
| malicious bias in models there seems huge.
|
| Somewhat related, apparently a lot of English spellings were
| standardized by the invention of the printing press. This
| isn't surprising; it was one of the first technologies to
| really democratize written materials, and so it had a very
| outsized power to set standards. LLMs feel like they could be
| a bit like this, particularly if everyone continues with
| their current trends of intentionally building reliance on
| them into their products / companies / workflows. As a real
| life example, someone at work realized you could ask Copilot
| to rate the professionalism of your communication during a
| meeting. This seems quite chilling, since you're not really
| rating your professionalism, but measuring yourself against
| whatever weird bell curve exists in Copilot.
|
| I'm absolutely baffled that LLMs are seeing broad adoption,
| and absolutely baffled that people are intentionally adopting
| and integrating them into their lives. I'm in my early 40s
| now. I'm not sure if I can get out of the tech field at this
| point, but I'm seriously thinking about my options.
| Workaccount2 wrote:
| I lowkey believe that the twitterification of journalists
| is probably one of the worst things to happen to the
| country in the last 25 years.
|
| So much social harm has been inflicted by journalists
| thinking "Damn, I can just go on twitter to find out what is
| going on and how people feel about it!"
| jtwoodhouse wrote:
| This metaphor nails it. In this day and age, the way around
| walled gardens is to build your own walled garden apparently.
| saltysalt wrote:
| Indeed! Ultimately, all online business models end at ad click
| revenue.
| westoncb wrote:
| I think it might just be about distribution. Grok gets a lot
| of interesting opportunities for it through X; then throw in
| the way people reacted to the new 4o image-gen capabilities.
| ben_w wrote:
| OpenAI's idea of "shortly" offering AGI is "thousands" of
| days; 2,000 days is just under 5.5 years.
| kromem wrote:
| Don't underestimate the importance of multi-user human/AI
| interactions.
|
| Right now OAI's synthetic data pipeline is very heavily
| weighted to 1-on-1 conversations.
|
| But models are being deployed into multi-user spaces that OAI
| doesn't have access to.
|
| If you look at where their products are headed right now, this
| is very much the right move.
|
| Expect it to be TikTok style media formats.
| pjc50 wrote:
| Someone down below mentioned ads, and I think that might well
| be the route they're going to try: charging advertisers to
| influence the output of the AI.
|
| As for whether it will work, I don't know how they're possibly
| going to get the "seed community" which will encourage others
| to join up. Maybe they're hoping that all the people making
| slop posts on other social networks want to cut out the
| middleman and have communities of people who actually enjoy
| that. As always, the sfw/nsfw censorship line will be an
| important definer, and I can't imagine them choosing NSFW.
| rpastuszak wrote:
| Called it!
|
| https://tidings.potato.horse/about
| weatherlite wrote:
| > If you've built your value on promising imminent AGI then
| this sort of thing is purely a distraction, and you wouldn't
| even be considering it... unless you knew you weren't about to
| shortly offer AGI.
|
| Or even if you did come up with AGI, so would everyone else.
| Gemini is arguably better than ChatGPT now.
| klabb3 wrote:
| Bingo. The secret sauce is never a sustainable long-term
| moat. Things leak, competitors copy or make even better
| things, employees switch jobs. AI looks less vulnerable,
| since it's academically difficult and expertise is still
| limited. But time and time again, new secret sauce
| ingredients last for months at most before they are
| exceeded, oftentimes by hobbyists or other small actors.
|
| I remember at Google they said that if the source code
| leaks, nobody is actually worried about the tech being
| stolen; the vast majority of code is open to all employees,
| with some exceptions like spam and ranking. It's still
| protected, but not considered a moat.
|
| The moat comes from other things, such as datacenter & global
| networking infrastructure, marketing new products by pushing
| them through existing products (put Gemini in Search, add
| Chrome to Android, etc.). Most importantly, you can use data
| you already have to bootstrap new products, say Gmail and
| calendar integrated with personalized assistants.
|
| If you play your cards right, yes, there are some first-mover
| advantages, but they are more superficial than your average
| Twitter hype thread makes you think. It can give you the
| ability to set unofficial standards and APIs, like
| Kubernetes or S3 (maybe OpenAI APIs?). And you can set
| certain agendas and market your name for recognition and
| trust. But all that can slip through your fingers if a
| behemoth picks up where you left off. They have so many
| advantages, except for being the fastest.
| Topfi wrote:
| In fairness, the AGI definition predicted by the doomsday
| safety experts (who don't have time for such unimportant
| concerns as copyright or misinformation via this tech), by
| everyone who is most certainly not merely hyping to get
| investor cash, and by the utterly serious and scientifically
| grounded research happening at MIRI, is essentially that one
| company will achieve AGI, shortly thereafter that will lead
| to a singularity, and that's that. No one else could create a
| second, because that scary AGI is so powerful it would
| prevent anyone else from shutting it down, including other
| AGIs. And no, this is totally not a sci-fi plot, but rigorous
| research. Incidentally, I'm looking for someone to help me
| prevent OAI from killing my own grandfather, because that
| scenario is also incredibly likely and should be taken
| seriously.
|
| If it's not obvious already, I believe we are far away from
| that with LLMs, and that those working in model safety who
| give their attention to AGI over current-day concerns, like
| Meta just torrenting data for models, are not very serious
| people. But I have accepted that this isn't a popular opinion
| amongst industry professionals.
|
| Not least because letting laws prevent them from model
| training is bad according to them, either for the same weird
| logic Musk uses to justify testing FSD Betas on an unwilling
| public due to the potential to prevent future deaths, or
| because they genuinely took RB seriously. No idea what's
| worse for serious adults...
| sevensor wrote:
| Adding social media to your thing is so 2018. Is the next big
| thing really just a warmed over version of the last big thing?
| Is sama just completely out of ideas to save his money-burner?
| rchaud wrote:
| It's an easy thing to slap on to a service with lots of
| users. Back in the day this would be called a 'message
| board'. "Social media" requires the use of iframes that can
| be embedded on 3rd party sites. OpenAI is a login-only
| environment so I can see them going for a Discord-type of
| platform rather than something that spreads to the open web.
| jacobsenscott wrote:
| Everything devolves to ad sales. Do you know the minute
| details about their lives that people type into ChatGPT
| prompts? It's a
| gold mine for ads.
| NewUser76312 wrote:
| I think it was a strategic mistake for Sam et al to talk about
| "AGI".
|
| You don't need some mythical AI to be a great company. You need
| great products, which OpenAI has, and they keep improving them.
|
| Now they've hamstrung themselves into this AGI nonsense to try
| and entice investors further, I guess.
| diggan wrote:
| > Now they've hamstrung themselves into this AGI nonsense to
| try
|
| AFAIK, they've been on the AGI hype-train for a very long
| time, before they reached mainstream popularity for sure.
| From their own blog (2020 -
| https://openai.com/index/organizational-update/), here is a
| mention of their "mission":
|
| > We're proud of these and other research breakthroughs by
| our team, all made as part of our mission to achieve general-
| purpose AI that is safe and reliable, and which benefits all
| humanity.
|
| I'm not sure OpenAI trying to reach AGI is a "strategic
| mistake" as much as "the basis for the business" (which, to
| be fair, was a non-profit organization initially).
| jug wrote:
| AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed
| this sometime in autumn last year, when the would-be GPT-5 was
| unimpressive despite its huge size. I still think ChatGPT 4.5
| was GPT-5, just rebranded to set expectations.
|
| Google Gemini 2.5 Pro was remarkably good, and I'm not sure
| how they did it. It's like an elite athlete taking a leap
| forward despite harsh competition. They probably have
| excellent training methodology and data quality.
|
| DeepSeek made huge inroads in affordability...
|
| But even with those, intelligence itself is seeing diminishing
| returns while training costs are not.
|
| So OpenAI _needs_ to diversify - somehow. If they rely on
| intelligence alone, then they're toast. So they can't.
| Topfi wrote:
| I tentatively agree that LLMs have reached somewhat of a
| ceiling, and in any other industry diversifying would make
| sense at this stage. But as others pointed out, OAI and
| others have attached their valuation directly to their
| definition of achieving "AGI". Any pivot from that, if AGI
| were realistic in the coming years (my opinion: it isn't),
| would be foolhardy and go against investors; so in turn, this
| is clearly an admission that even sama doesn't see AGI as
| possible in the near term.
| 9rx wrote:
| On the other hand, if you knew AGI was on the near horizon,
| you'd know that AGI will want to have friends to remain happy.
| You can give AGI a physical form so it can walk down to the bar
| - or you can, much more simply, give it an online social
| network.
| beambot wrote:
| Makes me (further) believe that Reddit is heavily undervalued...
| alphazard wrote:
| Alright, I'll bite. What's a reasonable price for Reddit?
| Aren't most of their users bots?
| pyfon wrote:
| Doesn't matter. Subreddits create vast islands of value. A
| single sub overrun with bots is quarantined effectively.
|
| That is why Reddit is one of my favourite social sites. It is
| algorithmic but if you go to r/assholedesign you get asshole
| design. (and an anal mod who keeps it like that) Etc.
|
| Value $44bn ;)
| blitzar wrote:
| Discord is the real play.
| robbomacrae wrote:
| I came to the comments here wondering why OpenAI can
| seemingly fund massive new AI chip data centers but can't
| acquire Discord.
| blitzar wrote:
| Buy Discord (for $44bn), skin a version to be a
| Slack/Teams/Zoom/Google-thingy clone for corporates, and
| integrate ChatGPT into all of it.
|
| Discord will probably IPO when conditions allow - might be
| an easy buy for a stock / cash offer.
| Nevermark wrote:
| A social network that faithfully and intelligently curated posts
| according to my own continuously updated (explicit) direction
| would be most excellent.
|
| But it would also juice echo chamber depth and further amplify
| extremist "engagement".
|
| And the monetary incentives for OpenAI to generate most of the
| content, the "people", and the ads, including creative
| hallucinations and novel extremisms, so they directly match each
| of our curation directions, would enshittify the whole thing
| within a short minute.
|
| --
|
| The time has come to outlaw conflict of interest businesses that
| scale (the conflict).
|
| If a startup plan includes "sales" and "customers": Green light
| go.
|
| If it talks about ways to "monetize": Red trash can.
|
| If only.
| misonic wrote:
| This doesn't sound appealing to me. What extra value would it
| provide? Why do I need a new X with AI support, making friends
| with agents/bots?
| xpl wrote:
| One interesting benefit is that OpenAI would be able to detect
| bots using their APIs to generate content.
| aussieguy1234 wrote:
| LLM -> Social Media Platform -> Tiktok clone.
|
| That would be an interesting evolution.
| arizen wrote:
| Social media is becoming TikTok's clone army, with algorithms
| hooked on short-form videos for max engagement.
|
| Text, images, and long-form content are getting crushed,
| forcing creators into bite-sized video to be favored by the
| almighty algorithm.
|
| It's like letting a kid pick their meals - nothing but sugar
| and candy all day.
| tayo42 wrote:
| You're probably not wrong.
|
| But personally I don't get it. I hate almost all videos. The
| exception is when something is best shown as a video, like a
| how-to, but I search those out.
|
| Maybe I'm just broken at this point; who has the patience for
| videos? Clearly a lot of people, but how lol
| aussieguy1234 wrote:
| So far, I've refused to watch TikTok.
|
| Something about mindless garbage doesn't appeal to me.
| mrandish wrote:
| Okay, thinking charitably here... maybe a play at getting
| training data they don't have to steal? (although it does seem
| like rotating the ladder instead of the lightbulb...)
| gerash wrote:
| I believe the play here is:
|
| 1. Look "Studio Ghibli" went viral, let's capitalize
|
| 2. Switching cost for LLMs are low. If we can't be the best let's
| find other ways to lock our users in and make our product super
| sticky
| BrenBarn wrote:
| So Facebook is trying to get into AI (e.g. its chatbot-"user"
| debacle) and OpenAI wants to form its own social network. Our
| world is becoming the recycled shit-food of this technological
| ouroboros.
| tossandthrow wrote:
| The American playbook: make some innovation, then do a
| bait-and-switch and focus all energy on value extraction.
| karel-3d wrote:
| AI generated posts and images and nonstop posting about AI? That
| sounds like LinkedIn in 2025.
| Iolaum wrote:
| Maybe before building a social network, you should be able to
| share the result of an answer with another user, even if they
| have not paid/subscribed.
|
| Tried to share an answer with a colleague (who didn't have
| the paid version) and he couldn't see it ...
| pluto_modadic wrote:
| They know AI can be addictive (people will prompt it far too
| often), so mixing it with social media can captivate users even
| more effectively.
| hybrid_study wrote:
| and they can own all the data
| beaugunderson wrote:
| > While the project is still in early stages, we're told there's
| an internal prototype focused on ChatGPT's image generation that
| has a social feed.
|
| isn't it public already? they basically made tumblr but
| everything is AI:
|
| https://sora.com/explore
| interludead wrote:
| Curious to see if this leans more "creative community" or
| "algorithmic content zoo."
| paride5745 wrote:
| It makes no sense to build a social network nowadays.
|
| With Mastodon and Bluesky around, users have free options. Plus X
| and Threads, and you can see how the market is more than
| saturated.
|
| IMHO they should look into a close collaboration with, or a
| minority stake in, Bluesky or Reddit instead. You have a huge
| pool of users already, without the need to build it up from
| scratch.
|
| Heck, OpenAI probably has enough money to just buy Reddit if they
| want.
| b1n wrote:
| Also, what is their USP? "Join our social network so we can
| train our models on your data!"
| Traubenfuchs wrote:
| ...free AI slop right from the tap? Like the latest Ghibli
| fad.
| seafoamteal wrote:
| I don't know about Reddit, but Bluesky would never in a million
| years partner themselves publicly with OpenAI. I can't comment
| on the opinions of the team themselves because I just don't
| know, but the users would revolt. Loudly.
| GeorgeCurtis wrote:
| The whole value proposition of a social media platform is that
| (almost) everyone you know is on it. That's why young people
| don't use Facebook. They'd be better off buying one.
| svara wrote:
| I guess where this is all going in the long run is something with
| an interface similar to TikTok, where the user gives rapid
| feedback to train an algorithm to generate content that they
| "love", er, that maximally tickles their reward circuitry.
| buyucu wrote:
| OpenAI is getting crushed by competitors who are offering more
| cost-effective alternatives,
|
| so Sam Altman is pressing all the buttons to keep the hype
| train going.
| nottorp wrote:
| ... for bots. The "AI" bots are lonely and this will let them
| talk to each other.
| camkego wrote:
| I suspect they are searching for network effects, otherwise they
| know the switching costs are too low for their users
| robert-whiteley wrote:
| Counter opinion - What tech people don't seem to realise about AI
| is that putting it into a user-friendly format for the average
| person is what has led to this current revolution, i.e. ChatGPT
|
| This is just the next step of that. This also doesn't stop them
| working on 'AGI'; you need the data as well as the models, as
| most people familiar with the field will know. I will be happy
| to be proven wrong on this prediction.
| INTPenis wrote:
| The aversion to robots in the Fediverse suddenly makes a lot of
| sense.
|
| I don't want to live on an internet that is crammed with AI
| content.
| gtirloni wrote:
| Social networks are the TODO list app of rich people, apparently.
| trilbyglens wrote:
| What a dumbfuck article. It's essentially a gossip-rag level
| story based on a few random and meaningless tweets from a couple
| of billionaire douchebags.
| thatgerhard wrote:
| This is starting to feel a lot like ~15 years ago, when
| everyone wanted a social network.
| sharathnarayan wrote:
| Maybe they need the social media data to improve their models?
| X and Meta have an edge here.
| rvnx wrote:
| Data quality on social networks like Twitter/Meta is very low
| compared to what you see on Wikipedia or Reddit.
| antirez wrote:
| Isn't Gemini 2.5 proof that you don't need social-network-like
| data for training?
| dktp wrote:
| Google has a deal with Reddit to scrape its content for
| training AI. It also has YouTube.
| seafoamteal wrote:
| So... one stop shop to generate slop and send it out into the
| ether to be recycled into more slop.
| nbzso wrote:
| Scam Altman is running out of "options". Serial failing is
| SV's current business model.
| arjunaaqa wrote:
| If they simply take ChatGPT profiles, add a button to share
| with others, and add a follow button for every Q&A pair:
|
| - They have a quick social network with a feed of Q&A with AI
| (like Stack Overflow) immediately.
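A minimal sketch of what that could look like as a data model; the
class names, fields, and feed logic below are purely hypothetical,
not anything OpenAI has described:

    from dataclasses import dataclass, field

    @dataclass
    class QAPair:
        author: str      # handle of the profile that shared the exchange
        question: str
        answer: str
        upvotes: int = 0

    @dataclass
    class Profile:
        handle: str
        follows: set = field(default_factory=set)    # handles this profile follows
        shared: list = field(default_factory=list)   # QAPair objects made public

    def feed(viewer, profiles):
        """Naive feed: Q&A pairs shared by the profiles the viewer
        follows, ranked by upvotes."""
        items = [qa for h in viewer.follows for qa in profiles[h].shared]
        return sorted(items, key=lambda qa: qa.upvotes, reverse=True)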
| anentropic wrote:
| > "The Grok integration with X has made everyone jealous," says
| someone working at another big AI lab. "Especially how people
| create viral tweets by getting it to say something stupid."
|
| It's awesome to see the amazing value for society being created
| by big tech these days.
| sph wrote:
| To think that even a year ago the idea of Instagram-style
| social media where all posts are openly AI-generated sounded
| very dystopian; now I can clearly see it is something people
| would pay for and HN people would gladly build. I wasn't always
| a Luddite, but damn they made me one.
| yapyap wrote:
| Although that is true for Instagram-style media, it was always
| a risk for text-based media.
|
| As a PoC, I'd say look at the subreddit SubSimulatorGPT2
|
| www.reddit.com/r/SubSimulatorGPT2
| fullshark wrote:
| "Social" media isn't really social anymore, it just means any
| idiot with a computer can generate the media you're
| consuming.
| flir wrote:
| Write-only media.
| dylan604 wrote:
| wouldn't it be WORM media though as the point is to have
| as many others read it as possible?
| coldpie wrote:
| > I wasn't always a Luddite, but damn they made me one.
|
| If this industry didn't pay so well, I would've been gone
| years ago. I'm lucky to work in a job that I think is ethical
| and improving the world, but it's so goddamn embarrassing to
| even be in the same room as the AI and blockchain types and
| the ad hucksters.
| sph wrote:
| It's not gonna pay well for long, unless you find yourself
| a very tight and lucrative niche. My goal on the other hand
| is to buy a house in the woods by the end of the year and
| become a woodworker before I have to be assimilated by the
| AI wave, maybe doing some coding by hand just for the fun
| of it on the side. Can't wait for "vibe coding" and "2
| years of Copilot experience" to figure among the average
| list of job requirements.
|
| To quote the young'uns, we are so cooked.
| coldpie wrote:
| > and become a woodworker
|
| Hey we can't ALL have the same plan, man! The tech
| oligarchs that own all of society's wealth will only need
| so much bespoke furniture!
|
| More seriously, my plan had been to build a decent
| savings and go work for the USPS or the local public
| transit company, supplemented with savings interest and
| maybe some woodworking income. But then the 2024 election
| happened so who fucking knows anymore.
| dylan604 wrote:
| Nah, I plan on owning a chunk of acres and start my own
| homestead. Raise a few animals like some sheep, goats,
| rabbits, and of course chickens. Build a nice green house
| and large garden. Just need to find the right plot of
| land before making it a serious go. Oh, and can't forget
| the first step. Gotta win the lottery to pay for it. Not
| all of us have those cushy FAANG salaries.
| magicstefanos wrote:
| There are rural land loan companies. Or buy with friends
| or family. I know people who've done the above and don't
| tell anybody but it's working out fine.
| dylan604 wrote:
| >There are rural land loan companies.
|
| The point is to not owe people money so that you are self
| sufficient and not _need_ to make money.
|
| > Or buy with friends or family.
|
| yeah, but i don't like people. /s
| rob wrote:
| I actually want to do everything you're outlining as
| well, but here in the northeast they're trying to sell ~1
| acre for > $100,000 in a lot of desirable "rural" towns.
| Definitely makes it harder to get into unless you want to
| commit to far far north.
| dylan604 wrote:
| I've been watching a lot of TV shows that have revealed a
| lot of pros/cons about different areas of the country.
| The further north means a much shorter growing season
| which makes a greenhouse even more important. It also
| means a lot more infrastructure is required to keep any
| livestock alive during the longer winters. Places like
| Texas go the other direction, where the heat during the
| long summers is brutal, but it means early spring and
| late fall crops. I really wouldn't want to be any further
| north than 40deg.
|
| Got to find that perfect plot of land that has water,
| some trees for wood, some land that can be farmed and
| hopefully a clear sky to the south for solar. Oh, and
| it's gonna need to be far enough from a city so my
| telescope can finally be used the way it was meant.
| jermaustin1 wrote:
| I bought my wife's grandma's house in rural central
| Louisiana so that her grandpa could retire and not have
| to worry about medical bills or anything like that. It
| already has a "woodshop"[1] on it, and when the
| contractor finishes (or at least starts) repairing the
| years of neglect, I will also be a woodworker in the
| woods.
|
| I'm currently a "woodworker" in suburban Houston, but
| usually only average $500-ish per month (heavily loaded
| toward a couple of times of year - Mid Spring, Back to
| School, Christmas).
|
| 1: https://www.icloud.com/photos/#/icloudlinks/08c7DUB5tx
| bSRiCs...
| bcraven wrote:
| I'm not sure posting a photo with EXIF location data
| under a full name is a good idea.
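A minimal sketch of how one might check for, and strip, that GPS
metadata before posting, using the Pillow library (the 0x8825 tag
is the standard EXIF GPSInfo pointer; the function names here are
just illustrative):

    from PIL import Image  # Pillow

    GPS_IFD = 0x8825  # EXIF tag that points at the GPS block

    def has_gps(path):
        """Return True if the image's EXIF metadata includes GPS data."""
        with Image.open(path) as img:
            return bool(img.getexif().get_ifd(GPS_IFD))

    def strip_metadata(src, dst):
        """Re-save only the pixel data, dropping EXIF (including location)."""
        with Image.open(src) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst)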
| 9rx wrote:
| The evolution of the internet has been fun to watch:
|
| 1994-1998 - Don't give out any personal information
| online.
|
| 1998-2004 - Well, maybe just don't give out your real
| name.
|
| 2004-2010 - Never mind. Your full name is fine. Just stay
| away from sharing racy or compromising photos.
|
| 2010-2016 - Actually, the photos are okay as long as your
| "bathing suit area" remains modest.
|
| 2016-2020 - On second thought, bare all if you wish. But,
| please, don't spread misinformation.
|
| 2020-2025 - Ugh, fine. But whatever you do, do not share
| your exact GPS coordinates!
| MisterTea wrote:
| > If this industry didn't pay so well, I would've been gone
| years ago.
|
| That's the root of the problem. Intelligent and want to
| live nicely? Then have your mind exploited for profit! Have
| morals? Compromise on them and get paid.
|
| I know someone who works for a large gaming company who,
| upon hearing that a friend of mine was entertaining a
| position at a Musk company, told me "your friend lacks a
| moral compass to work for that man." I reminded them that
| they work for a company that was fined half a billion
| dollars for willfully exploiting children. They did not take
| that well.
| babaceca wrote:
| I've worked at both a casino company and a large mobile
| game studio, and the latter was way shadier in almost all
| aspects.
|
| There's a list of 20-ish large companies we universally
| agree are evil and openly bash all the time, but the
| reality is 95% of the industry is at best doing
| absolutely meaningless work, and at worst work that's
| deeply negative for the world.
| sizzle wrote:
| Roblox?
| ghfhghg wrote:
| That would be my guess. In parts of the gaming industry,
| working there can make you a pariah, though. Not a huge
| portion, but I've seen people get rejected based on
| working there and at a couple of other choice companies.
| deadbabe wrote:
| What you gotta understand is this world is going straight to
| hell, humanity might not be around much longer. Might as well
| embrace the chaos and enjoy the ride down. No point in being
| a Luddite now, the time for that was decades ago.
| malka1986 wrote:
| > humanity might not be around much longer.
|
| Good riddance. Humanity is hell.
| immibis wrote:
| There was /r/SubSimulatorGPT2, but you're telling me people
| would PAY for that? Maybe if you tricked them - which is
| arguably what Reddit is doing.
|
| Social media's always been about giving people whatever makes
| them come back to the site, no matter how unethical. If an
| army of fake fans makes me think I have an army of real fans
| and keep posting for attention from my fans, they will
| totally do that. Unless it's illegal.
| HPsquared wrote:
| Are you not entertained?
| xyst wrote:
| And at the expense of consuming massive amounts of energy and
| depleting our resources, like water, at an alarming rate.
| gbasin wrote:
| Pardon my ignorance, in what way does it "consume water"?
| Water cooling is a closed loop.
| 05 wrote:
| Fabs use water in a way that's not closed loop; see e.g. [0]
|
| [0] https://www.weforum.org/stories/2024/07/the-water-
| challenge-...
| gbasin wrote:
| I see, thanks. Seems just a matter of cost to reclaim it.
| Maybe regulation is needed
| wegfawefgawefg wrote:
| The planet is a closed loop for water; the vapor condenses as
| rain somewhere.
| llm_nerd wrote:
| Data centers do use a lot of water on average. Warmer
| climate data centers often use evaporative cooling and can
| run through millions of gallons of water to offload heat.
| Google's data centers combined chewed through some 6
| billion gallons of water last year.
|
| Closed systems are much more water-efficient, and are more
| common in cooler climates.
| danielbln wrote:
| Not to say AI doesn't require a lot of power, but I don't
| know if we need to be alarmist about it:
|
| https://engineeringprompts.substack.com/p/does-chatgpt-
| use-1...
|
| https://simonwillison.net/2025/Jan/12/generative-ai-the-
| powe...
| zombot wrote:
| But "have AI help people share better content" is so
| indispensable! How could humanity ever survive without that?
|
| Even better, soon none of us will have to use social media at
| all, our AI bots will do it for us. Then we will finally find
| peace.
| kccqzy wrote:
| In George Orwell's 1984, there is a machine called the
| versificator that generates music and literature without any
| human intervention, presumably for the "entertainment" of the
| proletarians.
| kookamamie wrote:
| It's also very dangerous, I think. Grok is used on X to
| arbitrate ground truth for topics it has no chance of
| assessing.
| thih9 wrote:
| I don't use X/Twitter - does anyone have an example of a viral
| tweet like this?
| moogly wrote:
| I guess YTMND.com would've blown their mind if they had been
| alive and conscious 20 years ago.
| tempodox wrote:
| Each time I think I've seen dystopia and the pinnacle of
| stupidity someone finds a new way to top it. Either that's an
| amazing superpower, or I'm infected with incurable optimism.
| fhd2 wrote:
| I wonder to what degree this move would be genuine company
| strategy, and to what degree it'd just be what the sub headline
| says: "Is Sam Altman ready to up his rivalry with Elon Musk and
| Mark Zuckerberg?"
| paradox242 wrote:
| They have to feed the beast with a never ending supply of input.
| dtj1123 wrote:
| Sounds like OpenAI want to further leverage their platform into a
| tool for directing public opinion. Or maybe this is just a middle
| finger directed at Elon Musk by Sam Altman, who knows.
| chocolateteeth wrote:
| This sounds more like a project intended to be a component of
| his (incredibly) still larger project, World. Good times to
| come, everyone.
| kilianinbox wrote:
| If a front figure of big tech happens to go plant-based, that
| simply makes for a good role model for AI in this time (and
| for humans..). I'm for a big YES. At first I wasn't positive,
| due to a sense of bloating the service and risking forms of
| leaking, but considering that most social platforms need more
| ethical thinking integrated, and that social media can be done
| in a new, smart way, it's much easier to make a meaningful
| social 'platform' than to make an AI-driven search engine
| (which takes extreme compute to be as good as people likely
| expect without creating a negative opinion of ChatGPT as such,
| since high-quality tailored answers will take some compute..).
| Interaction with AI can be done in a much cooler way than
| today, in co-exploration :) ..
| andrewstuart wrote:
| If it's true then it flags giant problems at OpenAI and makes it
| clear they don't know what they're doing or where they are going.
| rnotaro wrote:
| FYI there's already an (early) social feed [1] in OpenAI Sora.
|
| [1] https://sora.com/explore?type=videos
| eagerpace wrote:
| I thought they were building a new search engine. Now it's a
| social network. Tomorrow it will be robots. It's all a
| distraction from ClosedAI.
| bsima wrote:
| Also rumored to be building a phone at one point? They are
| playing the media
| empath75 wrote:
| It already is a search engine and has been for a while.
|
| I think you don't recognize it as such because it's
| incorporated into the chat box, but I use chatgpt as my search
| engine 90% of the time and almost never use google any more.
|
| I think the social stuff will also just be incorporated into
| the chat interface in the form of 'share this image', etc, and
| isn't going to be like twitter with a bunch of bots posting.
| vagab0nd wrote:
| Can we just have a public ledger that everyone posts to, and then
| we each have an AI to selectively serve us the content we are
| interested in?
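A toy sketch of that arrangement, assuming the "ledger" is just an
append-only public feed and the "AI" is whatever scoring function
each reader runs locally; all names here are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    LEDGER = []  # the shared, append-only public feed

    def publish(post):
        LEDGER.append(post)

    def personal_feed(score, top_n=20):
        """Each reader ranks the same public ledger with their own
        model; 'score' stands in for whatever AI the user runs."""
        return sorted(LEDGER, key=score, reverse=True)[:top_n]

    # e.g. a trivial keyword scorer standing in for a personal model
    interests = {"fusion", "gardening", "telescopes"}
    my_feed = personal_feed(lambda p: sum(w in p.text.lower() for w in interests))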
| MagicMoonlight wrote:
| I think people would actually use an all-bot social network.
|
| Imagine tweeting and thousands of "people" reply to you and
| appreciate you. It would be addictive.
| squigz wrote:
| Some of the comments here are so detached from my reality that I
| have to wonder whether I'm the crazy one.
|
| Nobody I know would be interested in a social media platform that
| is just AI. Contrary to what some commenters here seem to think,
| most people are interested in things like upvotes and likes and
| whatnot... because they understand it's (mostly) humans on the
| other end. Without that, it would all be rather hollow, and
| nobody would want to participate in it. I think most people would
| find the insinuation that they're so shallow as to want that a
| bit insulting.
|
| But again, maybe I'm the crazy one.
| colonial wrote:
| A few months ago they were AGI vagueposting. Now they're working
| on SlopBook?
|
| This plus Ed Zitron's excellent investigative writing on their
| finances [0] makes me think that OpenAI is going to fold within a
| year, _maybe_ two.
|
| [0]: https://www.wheresyoured.at/openai-is-a-systemic-risk-to-
| the... (TL;DR - we have apparently learned nothing from 2008.)
| NiloCK wrote:
| Leveraging OpenAI for a Worldcoin proof-of-humanity toehold? Blue
| checks for WorldID holders, and a presumption of dead internet
| everywhere else.
| nilkn wrote:
| What this tells me is that xAI / X / Grok have together become a
| much bigger threat to OpenAI than they anticipated. And I believe
| it's true -- Grok's progress has been _much_ faster than
| ChatGPT's over the past year, even if we can nitpick evals
| over which is
| technically "better" at any moment. The fact that it's hard to
| tell is itself a huge statement considering the massive advantage
| OpenAI and ChatGPT had not too long ago. I mean, Grok was
| basically a joke when it first launched!
| mring33621 wrote:
| I said this 3 months ago:
|
| I think OpenAI should split into 2:
|
| 1) B2B: Reliable, enterprise level AI-AAS
|
| 2) B2C: New social network where people and AIs can have fun
| together
|
| OpenAI has a good brand name RN. Maybe pursuing further
| breakthroughs isn't in the cards, but they could still be huge
| with the start that they have.
| datadrivenangel wrote:
| If they nail the memory part, this could basically kill WorkOS
| and Jira and all those project management tools.
| sys32768 wrote:
| I hope they don't apply the same censorship filters to their
| social media as their image generation.
| giancarlostoro wrote:
| I'm genuinely curious how they will achieve this, will they go
| the open source pre-built solution route, or build their own from
| scratch?
___________________________________________________________________
(page generated 2025-04-16 17:01 UTC)