[HN Gopher] Meta AI: "The Future of AI Is Open Source and Decentralized"
       ___________________________________________________________________
        
       Meta AI: "The Future of AI Is Open Source and Decentralized"
        
       Author : alexandercheema
       Score  : 151 points
       Date   : 2024-09-18 17:40 UTC (5 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | jsheard wrote:
       | Decentralized inferencing perhaps, but the training is very much
        | centralized around Meta's continued willingness to burn obscene
       | amounts of money. The open source community simply can't afford
       | to pick up the torch if Meta stops releasing free models.
        
         | leetharris wrote:
         | There's plenty of open source AI out there that isn't Meta.
         | It's just not as good.
         | 
         | The #1 problem is not compute, but data and the manpower
         | required to clean that data up.
         | 
         | The main thing you can do is support companies and groups who
         | are releasing open source models. They are usually using their
         | own data.
        
           | jsheard wrote:
           | > There's plenty of open source AI out there that isn't Meta.
           | It's just not as good.
           | 
           | To my knowledge all of the notable open source models are
           | subsidised by corporations in one way or another, whether by
           | being the side project of a mega-corp which can absorb the
           | loss (Meta) or coasting on investor hype (Mistral,
            | Stability). Neither of those gives me much confidence that
            | they will continue forever, especially the latter category,
            | which will just run out of money eventually.
           | 
           | For open source AI to actually be sustainable it needs to
           | stand on its own, which will likely require orders of
           | magnitude more efficient training, and even then the data
           | cleaning and RLHF are a huge money sink.
        
             | exe34 wrote:
              | If you can do 100x more efficient training with open
              | source, closeAI can simply take that and train a model
              | that's 100x bigger, trained longer, on more tokens.
        
               | bugglebeetle wrote:
               | AKA why Unsloth is now YC backed for their even better
               | (but closed source) fine-tuning.
        
           | moffkalast wrote:
           | https://huggingface.co/datasets/HuggingFaceFW/fineweb
           | 
           | The #1 problem is absolutely compute. People barely get
           | funding for fine tunes, and even if you physically buy the
           | GPUs it'll cost you in power consumption.
           | 
            | That said, good data is definitely the #2 problem. But
            | nowadays you can get good synthetic datasets by calling
            | closed-model APIs, or by using existing local LLMs to sift
            | through the trash. That'll cost you too.
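            | 
            | A minimal sketch of that filtering step (assuming a local
            | llama.cpp or vLLM server exposing an OpenAI-compatible API
            | on localhost; the model name and prompt are illustrative):
            | 
            |     from openai import OpenAI
            | 
            |     # Point the client at a local OpenAI-compatible server
            |     # instead of a paid API.
            |     client = OpenAI(base_url="http://localhost:8000/v1",
            |                     api_key="unused")
            | 
            |     def looks_useful(doc: str) -> bool:
            |         """Ask a local LLM whether a scraped document is
            |         worth keeping as training data."""
            |         resp = client.chat.completions.create(
            |             model="local-model",  # whatever the server serves
            |             messages=[
            |                 {"role": "system",
            |                  "content": "Answer KEEP or DROP: is this "
            |                             "coherent, informative prose "
            |                             "suitable for training data?"},
            |                 {"role": "user", "content": doc[:4000]},
            |             ],
            |             max_tokens=4,
            |             temperature=0,
            |         )
            |         return "KEEP" in resp.choices[0].message.content.upper()
            | 
            |     raw_documents = ["...scraped web text..."]  # your crawl
            |     kept = [d for d in raw_documents if looks_useful(d)]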
        
           | citboin wrote:
           | >The main thing you can do is support companies and groups
           | who are releasing open source models. They are usually using
           | their own data.
           | 
            | Alternatively, we could create standardized open source
            | training data from Wikipedia and Wikimedia, as well as
            | public domain literature and open courseware. I'm sure
            | there are many other such free and legal sources of data.
        
             | KaiserPro wrote:
              | But the training data _is_ one of the key bits that makes
              | or breaks your model's performance.
             | 
             | There is a reason why datasets are private and the model
             | weights aren't.
        
           | Der_Einzige wrote:
           | Compute is for sure the number one problem. Look at how long
           | it's taking for anything better than Pony Diffusion to come
           | out for NSFW image gen despite the insane amount of demand
           | for it.
           | 
            | Look at how much compute open source AI actually has. It's
            | basically nothing.
        
         | cynicalpeace wrote:
          | One area that's interesting, but easy to dismiss because it
          | sits at the ultimate intersection of hype (AI and crypto), is
          | Bittensor.
          | 
          | AFAICT it decentralizes the training of these models by
          | giving you an incentive: you mine the crypto if you're
          | improving the models.
         | 
          | I learned about it years ago, mined some crypto, lost the
          | keys, and now I'm kicking myself because I would've made a
          | pretty penny.
        
           | jsheard wrote:
           | Does it actually work? AIUI the current consensus is that you
           | need massive interconnect bandwidth to train big models
           | efficiently, and the internet is nowhere near that. I'm sure
           | the Nvidia DGX boxes have 10x400Gb NICs for a reason.
        
             | cynicalpeace wrote:
             | I have no idea. The idea is certainly interesting but I've
             | never actually understood how to run inference on these
              | models... the people who run it seem unable to explain it
              | simply.
        
               | CaptainFever wrote:
                | I've seen Bittensor before. I think it makes sense as a
                | way to incentivise people to rent out their GPUs without
                | relying on a central platform. But I've always felt it
                | was kind of a scam, because it was so hard to find any
                | guides on how to use it.
               | 
                | Also, this doesn't seem to actually solve the issue of
                | fine-tuners needing funding to rent those GPUs. One
                | alternative is something like AI Horde, which pays GPU
                | providers with "labour vouchers" that give them priority
                | the next time they want GPU time. It requires a central
                | platform to track vouchers and ban those who exchange
                | them. Basically a real-life comparison of mutualism (AI
                | Horde) vs capitalism (Bittensor).
        
             | bloatedGoat wrote:
              | There are methods that make it feasible to train models
              | over the internet. DiLoCo is one [1], and NousResearch
              | has found a way to improve on it with a method they call
              | DisTrO [2].
             | 
             | 1. https://arxiv.org/abs/2311.08105
             | 
              | 2. https://github.com/NousResearch/DisTrO?tab=readme-ov-file
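              | 
              | Roughly, DiLoCo's trick is that workers take many cheap
              | local steps and only a parameter delta crosses the slow
              | network every H steps. A loose PyTorch sketch of the idea
              | in [1] (not the authors' code; H, the optimizers, and the
              | data iterator are illustrative):
              | 
              |     import torch.distributed as dist
              | 
              |     H = 500  # inner steps between synchronizations
              | 
              |     def diloco_round(model, inner_opt, outer_opt,
              |                      data_iter, loss_fn):
              |         # Snapshot the globally agreed-on weights.
              |         snapshot = [p.detach().clone()
              |                     for p in model.parameters()]
              | 
              |         # H local steps (e.g. AdamW) on this worker's
              |         # shard of the data; no communication here.
              |         for _ in range(H):
              |             x, y = next(data_iter)
              |             inner_opt.zero_grad()
              |             loss_fn(model(x), y).backward()
              |             inner_opt.step()
              | 
              |         # Pseudo-gradient = drift from the snapshot,
              |         # averaged across workers (the only network
              |         # traffic in the round).
              |         for p, s in zip(model.parameters(), snapshot):
              |             delta = s - p.detach()
              |             dist.all_reduce(delta, op=dist.ReduceOp.AVG)
              |             p.data.copy_(s)   # reset to the snapshot...
              |             p.grad = delta    # ...and step on the delta
              |         outer_opt.step()  # e.g. SGD w/ Nesterov momentum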
        
         | numpad0 wrote:
         | Centralized production, decentralized consumption.
        
       | pjkundert wrote:
       | The future of _everything_ you depend on is open source and
       | decentralized.
       | 
       | Because all indications are that the powers over you cannot abide
       | your freedoms of association, communication and commerce.
       | 
        | So, if it's something your family needs to survive, it had
        | better be distributed and cryptographically secured against
        | interference.
       | 
       | This includes interference in the training dataset of whatever
       | AIs you use; this has become a potent influence on the formation
       | of beliefs, and thus extremely valuable.
        
         | caeril wrote:
         | It's not the training dataset.
         | 
         | All of these models, including the "open" ones, have been
         | RLHF'ed by teams of politically-motivated people to be "safe"
         | after initial foundation training.
        
           | pjkundert wrote:
           | And I'm not even _remotely_ interested in the "corrections"
           | supplied by some group of right-thinking meddlers!
           | 
           | This corruption _must_ be disclosed as assiduously as the
           | base dataset, if not more so.
        
             | pjkundert wrote:
              | Or, at _least_ package them up as "personas" and give
              | them an appropriate name, e.g. "Church Lady", "Jr. Marxist
              | Barista", "Undergrad Philosophy Major", ...
             | 
             | Actually, those seem like an apt composite description of
             | the PoV of the typical mass-market AI... 8/
        
           | Der_Einzige wrote:
            | Not Mistral's. Mistral Large is willing to tell me how to
            | commit genocide against minorities, or produce NSFW
            | content, without any kind of orthogonalization or fine-
            | tuning. Please actually try models instead of pontificating
            | without evidence.
        
             | pjkundert wrote:
              | I wasn't aware that there was any publicly accessible
              | interface to Mistral's (or any other) models without
              | training wheels!
        
       | Qshdg wrote:
        | Great, who gives me $500,000,000, Nvidia connections to
        | actually get graphics cards, and a legal team to protect
        | against copyright lawsuits from the entities whose IP was
        | stolen for training?
       | 
       | Then I can go ahead and train my open source model.
        
         | riku_iki wrote:
          | You can pick an existing pretrained foundation model from a
          | corp (Google, MS, Meta) and then fine-tune it (much cheaper)
          | with your innovative ideas.
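          | 
          | A minimal sketch with the Hugging Face peft library (model
          | name and hyperparameters are illustrative, not a
          | recommendation):
          | 
          |     from transformers import AutoModelForCausalLM
          |     from peft import LoraConfig, get_peft_model
          | 
          |     base = AutoModelForCausalLM.from_pretrained(
          |         "meta-llama/Llama-3.1-8B")
          | 
          |     # LoRA trains small low-rank adapters instead of all
          |     # ~8B weights, so one consumer GPU can be enough.
          |     cfg = LoraConfig(r=16, lora_alpha=32,
          |                      target_modules=["q_proj", "v_proj"],
          |                      task_type="CAUSAL_LM")
          |     model = get_peft_model(base, cfg)
          |     model.print_trainable_parameters()  # typically <1%
          |     # ...then train on your own data as usual.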
        
       | monkeydust wrote:
        | Curious, but is there a path where LLM training or inference
        | could be distributed across the BOINC network?
       | https://en.m.wikipedia.org/wiki/Berkeley_Open_Infrastructure...
        
       | alecco wrote:
       | * pre-trained models
       | 
       | * does not apply to training data
        
       | exabrial wrote:
       | only when it's financially convenient for them...
        
         | dkga wrote:
         | Well, yes. They are a company, with shareholders and all. So
         | while not breaching any law, they should indeed pursue
         | strategies that they think would be profitable.
         | 
         | And for all the negativity seen in many of the comments here I
         | think it's actually quite remarkable that they make model
         | checkpoints available freely. It's an externality, but a
          | positive one. It's not quite the ideal yet - which would be
          | genuinely open source - and "open source" is surely an abuse
          | of language here, which I also note. But overall, it's the
          | best that is achievable now, I think.
         | 
          | The true question we should be tackling is: is there an
          | incentive-compatible way to develop foundation models in a
          | truly open source way? And how do we promote the conditions
          | for it, if they exist?
        
       | nis0s wrote:
       | I like the idea of this! But is there any reason to be concerned
       | about walled gardens in this case, like how Apple does with its
        | iOS ecosystem? For example, what if access to model weights
        | could be revoked?
       | 
       | There is a lot of interest in regulating open source AI, but many
       | sources of criticism miss the point that open source AI helps
       | democratize access to technologies. It worries me that Meta is
       | proposing an open source and decentralized future because how
       | does that serve their company? Or is there some hope of creating
       | a captive audience? I hate to be a pessimist or cynic, but just
       | wondering out loud, haha. I am happy to be proven wrong.
        
       | candiddevmike wrote:
        | Stop releasing your models under a non-FOSS license.
        
       | hyuuu wrote:
        | The view in the comments here seems to be quite negative
        | toward what Meta is doing. Honest question: should they go the
        | route of OpenAI instead, closed source + paid access? OpenAI
        | and Claude seem to garner more positive views than the open-
        | sourced Llama.
        
         | naming_the_user wrote:
          | The models are not open source; you're getting the
          | equivalent of a precompiled binary. They are free to use.
        
           | RealStickman_ wrote:
            | Free to use with restrictions, so maybe you get 1.5 of the
            | 4 FOSS freedoms.
        
         | meiraleal wrote:
          | Not much would change if they did. Meta's intentions and
          | OpenAI's intentions are the same: reach a monopoly and take
          | all the investment back with a 100x return. Whoever achieves
          | it will be as evil as the other.
         | 
         | > OpenAI or Claude seem to garner more positive views than
         | llama open sourced.
         | 
          | That's more about Meta than the others, although OpenAI
          | isn't that far from Meta already.
        
         | troupo wrote:
         | They use "open source" to whitewash their image.
         | 
         | Now ask yourself a question: where does Meta's data come from?
         | Perhaps from their users' data? And they opted everyone in by
         | default. And made the opt-out process as cumbersome as
         | possible:
          | https://threadreaderapp.com/thread/1794863603964891567.html
          | And they now complain that the EU is preventing them from
          | "collecting rich cultural context" or something:
          | https://x.com/nickclegg/status/1834594456689066225
        
           | KaiserPro wrote:
           | > Perhaps from their users' data?
           | 
            | Nope, not yet.
           | 
            | FAIR, the people that do the bigboi training, for a lot of
            | their stuff can't even see user data, because the place
            | they do the training can't support the access.
            | 
            | It's not like OpenAI, where the lawyers don't even know
            | what's going on, because they've not yet been properly
            | taken to court.
            | 
            | At Meta, the lawyers are _everywhere_, and if you do
            | naughty shit to user data, you are going to be absolutely
            | fucked.
        
         | VeejayRampay wrote:
          | People are just way too invested in the OpenAI hype, and
          | they don't want anyone threatening it in any way.
        
       | rkou wrote:
       | And what about the future of social media?
       | 
       | This is such devious, but increasingly obvious, narrative
       | crafting by a commercial entity that has proven itself
       | adversarial to an open and decentralized internet / ideas and
       | knowledge economy.
       | 
       | The argument goes as follows:
       | 
       | - The future of AI is open source and decentralized
       | 
       | - We want to win the future of AI instead, become a central
       | leader and player in the collective open-source community (a
       | corporate entity with personhood for which Mark is the human
       | mask/spokesperson)
       | 
        | - So let's call our open-weight models open-source, benefit
        | from the image, require all Llama developers to transfer any
        | goodwill to us, and decentralize responsibility and liability
        | for when our 20-million-dollar-plus "AI jet engine" Waifu
        | emulator causes harm.
       | 
        | Read the terms of use / contract for Meta AI products. If you
        | deploy it, and some producer finds the model spits out
        | copyrighted content and knocks on Meta's door, Meta will point
        | to you for the rest of the court case. If that's the future
        | for AI, then it doesn't really matter whether China wins.
        
         | foobar_______ wrote:
          | It has been clear from the beginning that Meta's supposed
          | desire for open source AI is just a coping mechanism for the
          | fact that they got beaten out of the gate. This is an
          | attempt to commoditize AI and reduce OpenAI/Google/Whoever's
          | advantage. It is effective, no doubt, but all this wankery
          | about how noble they are for creating an open-source AI
          | future is just bullshit.
        
           | CaptainFever wrote:
           | I feel the same way. I'm grateful to Meta for releasing libre
           | models, but I also understand that this is simply because
           | they're second in the AI race. The winner always plays dirty,
           | the underdog always plays nice.
        
             | KaiserPro wrote:
              | But they've _always_ released their stuff. That's part
              | of the reason why the industry uses PyTorch, that and
              | because it's better than TensorFlow.
              | 
              | In the same way, Detectron and Segment Anything are
              | industry standards.
              | 
              | Sure, for LLMs OpenAI released a product first, but it's
              | not unusual for Meta to release useful models.
        
           | ipsum2 wrote:
            | You're wrong here. Meta released state-of-the-art open
            | source ML models prior to ChatGPT. I know a few successful
            | startups (now valued at >$1b) that were built on top of
            | Detectron2, a best-in-class image segmentation model.
        
         | Calvin02 wrote:
          | Don't Threads and the Fediverse indicate that they are
          | headed that way for social as well?
        
           | redleader55 wrote:
            | The last time we had a corporate romance with an open
            | source protocol/project, "XMPP + Gtalk/Facebook = <3",
            | XMPP was crappy and moving too slowly for the mobile age.
            | Gtalk/Messenger gave up on XMPP, evolved their own
            | protocols, and stopped federating with the "legacy" one.
            | 
            | I think the success of "Threads + Fediverse = <3" relies
            | on the Fediverse not throwing in the towel and leaving
            | Threads as the biggest player in the space. That would
            | mean fixing a lot of the problems that people have with
            | ActivityPub today.
            | 
            | I don't want to say big tech is awesome and without fault,
            | but at the end of the day big tech will be big tech. Let's
            | keep the Fediverse relevant and Meta will continue to
            | support it; otherwise it will be swallowed by the bigger
            | fish.
        
             | bee_rider wrote:
             | For some reason, this has made me wonder if we just need
             | more non-classical-social-media fediverse stuff. Like of
             | course people will glom on to Threads, it means they can
             | interact with the network while still being inside
             | Facebook's walled garden...
             | 
             | I wonder if video game engines could use it as an
             | alternative to Steam or Discord integration.
        
             | LtWorf wrote:
             | The problem was not that it was not evolving. The problem
             | was that they decided they had trapped all the users of
             | other networks they could trap.
             | 
              | Slack did the same, killing its XMPP and IRC bridges. I
              | don't see them making a Matrix bridge.
        
         | bee_rider wrote:
          | > Read the terms of use / contract for Meta AI products. If
          | you deploy it, and some producer finds the model spits out
          | copyrighted content and knocks on Meta's door, Meta will
          | point to you for the rest of the court case. If that's the
          | future for AI, then it doesn't really matter whether China
          | wins.
         | 
         | As much as I hate Facebook, I think that seems pretty...
         | reasonable? These AI tools are just tools. If somebody uses a
          | crayon to violate copyright, the crayon is not to blame, and
          | certainly not the crayon company; the person using it is.
         | 
          | The fact that Facebook won't voluntarily take liability for
          | anything their users' users might do with their software
          | means that software might not be usable in some cases. It is
          | a reason to avoid that software if you have one of those use
          | cases.
         | 
          | But I think if you find some company that says "yes, we'll
          | be responsible for anything your users do with our product,"
          | I mean... that seems like a hard promise to take seriously,
          | right?
        
           | rkou wrote:
            | AI safety is expensive, or even impossible, when you
            | release your models for local inference (not behind an
            | API). Meta AI shifts the responsibility for highly-general,
            | highly-capable AI models to smaller developers, putting
            | the ethics, safety, legal, and guard-rails responsibility
            | on innovators who want to innovate with AI (without having
            | the knowledge or resources to do so themselves) as an
            | "open-source" hacking project.
           | 
            | While Mark claims his Open Source AI is safer, because it
            | is fully transparent and many eyes make all bugs shallow,
            | the latest technical report mentions an internal, secret
            | benchmark that had to be developed because available
            | benchmarks did not suffice at that level of capability.
            | For child abuse generation, it only mentions that this was
            | investigated, not any results of these tests or the
            | conditions under which it possibly failed. They shove all
            | this liability onto the developer, while claiming any
            | positive goodwill generated.
           | 
            | They have no motivation left to care about AI safety and
            | ethics if fines punish not them, but those who used the
            | library to build.
           | 
           | Reasonable for Meta? Yes. Reasonable for us to nod along when
           | they misuse open source to accomplish this? No.
        
             | bee_rider wrote:
             | I think this could be a somewhat reasonable argument for
             | the position that open AI just shouldn't exist (there are
             | counter arguments, but I'm not interested enough to do a
             | back and forth on that). If Facebook can't produce
             | something safe, maybe they shouldn't release anything at
             | all.
             | 
             | But, I think in that case the failing is not in not taking
             | the liability for what other people do with their tool. It
             | is in producing the tool in the first place.
        
               | rkou wrote:
                | Perhaps open AI simply can't exist (too hard and
                | expensive to coordinate/crowd-source compute and
                | hardware). If it can, then, to me, it should and would.
               | 
                | OpenAI produced GPT-2 but did not release it, as it
                | couldn't be made safe under those conditions, when not
                | monitored or patchable. So they put it behind an API
                | and owned the responsibility.
               | 
               | I didn't take issue with Meta's business methods and can
               | respect its cunning moves. I take issue with things like
               | them arguing "Open Source AI improves safety", so we
               | can't focus on the legit cost-benefits of releasing
               | advanced, ever-so-slightly risky, AI into the hands of
               | novices and bad actors. It would be a failure on my part
               | if I let myself get rigamaroled.
               | 
                | When arguing for releasing your model anyway, one
                | should ideally own that hypothetical 3% failure rate
                | to deny CSAM requests. Heck, ignore it for all I care,
                | but they damn well do know how much this goes up when
                | the model is jailbroken. But claiming instead that
                | your open model release will make the world a better
                | place for children's safety, so there is not even a
                | need to have this difficult discussion?
        
         | eli_gottlieb wrote:
          | If it were really open source, you'd be able to just train
          | one yourself.
        
           | nicce wrote:
            | Only if you were a billionaire. These models are
            | increasingly out of reach for single researchers or even
            | traditional academic research groups.
        
           | echelon wrote:
           | This sort of puts the whole notion of "open source" at risk.
           | 
           | Code is a single input and is cheap to compile, modify, and
           | distribute. It's cheap to run.
           | 
           | Models are many things: data sets, data set processing code,
           | training code, inference code, weights, etc. But it doesn't
           | even matter if all of these inputs are "open source". Models
           | take millions of dollars to train, and the inference costs
           | aren't cheap either.
           | 
           | edit:
           | 
            | Remember when platforms ate the open web? We might be
            | looking at a time when giants eat small software due to
            | cost and scale barriers.
        
         | bschmidt1 wrote:
          | It's especially rich coming from Facebook, which was all for
          | regulating everyone else in social media after it had
          | already captured the market.
         | 
         | Everyone tries this. Apple tried it with lawsuits and patents,
         | Facebook did it under the guise of privacy, OpenAI will do it
         | under the guise of public safety.
         | 
          | There's almost no case where a private company is going to
          | be able to successfully argue "they shouldn't be allowed,
          | but we should." I wonder why so many companies these days
          | try. Just hire better people and win outright.
        
         | doe_eyes wrote:
         | No major tech corporation is interested in openness for the
         | sake of it, period. It's useful when it undermines your
          | competition. It's why Google poured money into open source
          | to hurt Microsoft while showing no particular interest in
          | open-sourcing its own flagship services. And it explains
          | the bulk of Facebook-Google-OpenAI dynamics right now, where
          | one group
         | of actors is desperately trying to secure a moat through
         | exclusive content deals and regulation... and the other side is
         | trying to ruin their plans for the lolz.
         | 
         | Facebook is pretty universally hated by techies, but as it
          | happens, their incentives align with those of the non-
          | commercial internet. Why make them pass purity tests? It's
          | OK to root for what they're doing; it doesn't mean we have
          | to marry the company.
        
       | abetusk wrote:
        | This is the modern form of embrace, extend, and extinguish:
        | "embrace" open source, "extend" the definition to make it
        | non-open/libre, and finally "extinguish" the competition by
        | pulling up the drawbridge over the moat they've just built.
        
       | troupo wrote:
        | I had this as a reply to a comment, but I'll repost it at the
        | top level:
       | 
       | They use "open source" to whitewash their image.
       | 
       | Now ask yourself a question: where does Meta's data come from?
       | Perhaps from their users' data? And they opted everyone in by
       | default. And made the opt-out process as cumbersome as possible:
        | https://threadreaderapp.com/thread/1794863603964891567.html
        | And they now complain that the EU is preventing them from
        | "collecting rich cultural context" or something:
        | https://x.com/nickclegg/status/1834594456689066225
        
         | rkou wrote:
         | Also known as https://en.wikipedia.org/wiki/Openwashing
         | 
         | > In 2012, Red Hat Inc. accused VMWare Inc. and Microsoft Corp.
         | of openwashing in relation to their cloud products.[6] Red Hat
         | claimed that VMWare and Microsoft were marketing their cloud
         | products as open source, despite charging fees per machine
         | using the cloud products.
         | 
          | Other companies are way more careful about using "open
          | source" in relation to their AI models. Meta now practically
          | owns the term "Open Source AI" for whatever they take it to
          | mean; might as well call it Meta AI and be done with it:
         | https://opensource.org/blog/metas-llama-2-license-is-not-ope...
        
       | dzonga wrote:
       | the reason - i'm a little bearish on AI is due to its cost. small
       | companies won't innovate on models if they don't have billions to
       | burn to train the models.
       | 
       | yet when you look back at history, things that were
       | revolutionary, it was due to low cost of production. web,
       | bicycles, cars, steam engine cars etc.
        
         | rafaelmn wrote:
         | > yet when you look back at history, things that were
         | revolutionary, it was due to low cost of production.
         | 
         | Nuclear everything, rockets/satellites, tons of revolutionary
         | things that are very expensive to produce and develop.
         | 
         | Also software scales differently.
        
         | zwijnsberg wrote:
          | Yet if the weights are made public, smaller companies can
          | leverage these pretrained models, can't they?
        
         | miguelaeh wrote:
          | The first cars, networks, and many other things were not
          | inexpensive. They became so with time and growing adoption.
          | 
          | The cost of compute will continue decreasing, and we will
          | reach the point where it is feasible to have AI everywhere.
          | I think with this particular technology we have already
          | passed the point of no return.
        
         | farco12 wrote:
         | I could see the cost of licensing data to train models
         | increasing significantly, but the cost of compute for training
         | models is only going to drop on a $/PFLOP basis.
        
       | CatWChainsaw wrote:
        | Facebook promised to connect the world in a happy circle of
        | friendship and instead causes election integrity
        | controversies, viral conspiracy theories about pandemics and
        | immigrants, and massive increases in teen suicide. Not sure
        | why anyone would trust them with their promises of
        | decentralized AI and roses.
        
         | atq2119 wrote:
         | Good. Now compare to OpenAI. Clearly what Meta is doing is
         | better than OpenAI from the perspective of freedom and
         | decentralization.
        
           | CatWChainsaw wrote:
           | Cool. What Meta is doing is better than cigarettes from the
           | perspective of addiction. If Meta is the best we have, then
           | we'd better create something better, or prepare for the
           | inevitable enshittification.
        
             | jazzyjackson wrote:
              | I love it. TOBACCO PRODUCTION MUST BE DECENTRALIZED.
        
       | menacingly wrote:
       | Decentralized on centralized hardware?
        
         | latchkey wrote:
          | Evidence is showing that AMD's MI300X is proving to be a
          | strong contender.
        
       | mrkramer wrote:
        | Yeah, I believe you, Zuck; it's not like Facebook is a
        | closed, centralized, privacy-breaking walled garden.
        
       | pie420 wrote:
       | IBM Social Media Head: "The Future of Social Media is OPEN SOURCE
       | and DECENTRALIZED"
       | 
       | This must be a sign that Meta is not confident in their AI
       | offerings.
        
       | jmyeet wrote:
       | Two things spring to mind:
       | 
        | 1. Open source is for losers. I'm not calling anyone involved
        | in open source a loser, to be clear. I have deep respect for
        | anyone who volunteers their time for this. I'm saying that
        | when companies push for open source, it's because they're
        | losing in the marketplace. Always. No company that is winning
        | ever open-sources more than a token amount for PR; and
       | 
       | 2. Joel Spolsky's now 20+ year old letter [1]:
       | 
       | > Smart companies try to commoditize their products' complements.
       | 
       | Meta is clearly behind the curve on AI here so they're trying to
       | commoditize it.
       | 
       | There is no moral high ground these companies are operating from.
       | They're not using their vast wisdom to predict the future.
        | They're trying to bring about the future that most helps
        | them. Not just Meta. Every company does this.
       | 
       | It's why you'll never see Meta saying the future of social media
       | is federation, open source and democratization.
       | 
       | [1]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
        
       | uptownfunk wrote:
        | It's marketing to get the best researchers. The researchers
        | want the Meta pay, and they want to hedge their careers by
        | continuing to publish. That's the real game; it's a war for
        | talent. Everything else is just secondary effects.
        
       | croes wrote:
       | Also Meta: The future is VR
        
       | stonethrowaway wrote:
       | I'll link to my comment here from approx. 52 days ago:
       | https://news.ycombinator.com/item?id=41090142
       | 
       | This is chess pieces being moved around the board at the moment.
        
       ___________________________________________________________________
       (page generated 2024-09-18 23:01 UTC)