[HN Gopher] AI firms mustn't govern themselves, say ex-members o...
       ___________________________________________________________________
        
       AI firms mustn't govern themselves, say ex-members of OpenAI's
       board
        
       Author : sashank_1509
       Score  : 107 points
       Date   : 2024-05-26 20:55 UTC (2 hours ago)
        
 (HTM) web link (www.economist.com)
 (TXT) w3m dump (www.economist.com)
        
       | behnamoh wrote:
        | Any talk about AI governance (whether for or against it) just
        | further feeds the AI hype. I work in the AI industry and know
        | the benefits, but tbh 90% of startups out there don't deserve
        | the amount of attention (read: VC money) they receive. It'll
        | burst, and it will be ugly.
        | 
        | The only one benefiting from the AI bubble is Nvidia (fuck
        | them).
        
         | fnetisma wrote:
          | Sure, there will be corrective behaviour in the market, and
          | the product with better outreach and experience will win
          | over suboptimal products with overlapping offerings. But
          | does that mean the current generative AI momentum is hollow,
          | or is there a sticky use case behind the promises? And if
          | so, in your opinion, how overstated is the Total Addressable
          | Market compared to what's claimed in aggregate by startups
          | across the VC space?
        
       | dmix wrote:
        | And let me guess: these two want to be the ones controlling it
        | (again, but with more power).
        
         | toomuchtodo wrote:
          | People who want to govern shouldn't be in the role; screening
          | them out selects for service over power-seeking.
        
         | theropost wrote:
         | https://en.m.wikipedia.org/wiki/The_road_to_hell_is_paved_wi...
        
         | williamtrask wrote:
         | there's really no evidence of that (in the article or
         | otherwise)
        
         | cpursley wrote:
          | You're getting downvoted, but regulatory capture and cronyism
          | (voting in laws that prohibit new entrants; for the greater
          | good, of course) is a trick as old as democratic systems
          | themselves, maybe older, and perhaps not exclusive to
          | democracy.
        
       | hnlmorg wrote:
        | No company should govern itself, AI or otherwise.
        
         | Loughla wrote:
         | What?
        
           | hiddencost wrote:
           | This is what boards are for.
        
             | xboxnolifes wrote:
             | No, it's what governments are for.
        
         | nicce wrote:
          | Rather, if you want to pursue providing a valuable service
          | or product, you should not have shareholders at all, or any
          | dependency whatsoever.
          | 
          | Then there is no pressure to maximize profits or to abuse
          | your position for profit.
        
           | crazygringo wrote:
           | Shareholders are just owners.
           | 
           | Every company has to have owners (even if those owners are
           | the employees, for instance). Owners ultimately make the
           | decisions, by electing a board which oversees management.
           | 
           | Anyone starting a company is free to cap profits if they
           | want. You can write it directly into the articles of
           | incorporation.
           | 
           | Obviously it makes it harder to find investors, so good luck.
        
         | diego_sandoval wrote:
         | Do you also think that no person should govern themself?
        
       | walrushunter wrote:
       | What a pointless article. Anybody who would willingly give up
       | governance of their company to somebody who has no financial
       | interest in the company is a moron.
        
         | cellwebb wrote:
         | People who so quickly devolve to disparagement are, well, I
         | think you know.
         | 
         | So what are your thoughts on Sam Altman having no equity in
         | OpenAI?
        
         | siva7 wrote:
         | While obvious in retrospective, the board drama at this company
         | for which these ex-members are partly responsible destroyed the
         | chance that investors or executives would ever let such people
         | take over governance again.
        
         | JumpCrisscross wrote:
         | > _Anybody who would willingly give up governance of their
         | company_
         | 
         | That's the rub. It wasn't founded as a company.
        
       | TulliusCicero wrote:
       | It might be reasonable to have regulations here, but I shudder to
       | think what form they would take, given the typical government
       | level of technological expertise and understanding.
        
         | andy99 wrote:
          | Existing laws cover almost everything "bad" you could do with
          | AI/ML. It's not like there's some "I used AI" loophole that
          | exempts one from the law. So most of this is about regulatory
          | capture, self-importance (oh, my linear algebra research is
          | like inventing the atom bomb), ideology, power-seeking, or
          | some combination thereof.
        
           | janice1999 wrote:
           | > Existing laws cover almost everything "bad" you could do
           | with AI/ML.
           | 
            | If (like many non-EU countries and parts of the US) you
            | don't already have basic digital privacy laws, transparency
            | rules, or consumer protections, that is simply not true.
        
             | riquito wrote:
              | So in countries where the government doesn't attempt to
              | protect you, you'll keep on not being protected.
        
               | janice1999 wrote:
               | And AI will make it much worse by lowering the effort
               | required to do harm.
        
           | NegativeLatency wrote:
            | I'm not suggesting we're at this point now, but if we do
            | create sentient AI, it would be nice if it weren't
            | enslaved. I think we would probably need some new laws for
            | the case of non-human personhood.
            | 
            | I'm not sure what laws would apply or how they'd be
            | enforced, given how we treat people versus, say, chimps,
            | and corporations like people.
        
           | nicce wrote:
           | > Existing laws cover almost everything "bad" you could do
           | with AI/ML.
           | 
            | Not really. They regulate the AI itself, not the people
            | behind it. There should be real consequences for
            | intentionally doing something bad with it. That is the
            | only way.
        
           | pessimizer wrote:
           | > It's not like there's some "I used AI" loophole that
           | exempts one from the law.
           | 
            | There is; it's called a judge, once someone explains to him
            | that AIs are by definition neutral and objective, and the
            | defendant is let off. I'm sure the regulations will just
            | serve to formalize this process, by Congressionally
            | defining AIs that satisfy some checklist of lobbied-for
            | conditions as _objective and neutral._ After a few years,
            | the collective liability from taking back this declaration
            | will keep Congress from ever reverting it.
           | 
           | They've been into defining things lately. The
           | https://en.wikipedia.org/wiki/Indiana_pi_bill just came too
           | early.
        
           | tbrownaw wrote:
           | I believe there's a few cases where you're allowed to talk
           | about Fact A, and you're allowed to talk about Fact B, but
           | you're not allowed to talk about both Fact A and Fact B at
           | the same time. Mostly (entirely?) having to do with export
           | restrictions around technologies that the government wants to
           | keep away from other countries it doesn't like.
           | 
           | I'd think that an AI system that answers questions combining
           | both could get its makers in trouble in ways that a standard
           | search engine finding separate results about each from
           | separate queries probably wouldn't.
        
         | janice1999 wrote:
         | > but I shudder to think what form they would take
         | 
         | The EU just passed the AI Act based on the inputs of experts
         | and with widespread support from its Parliament and Council.
        
           | pelorat wrote:
            | In about two years' time, most AI providers will realize
            | that the EU is not worth the effort and pack up and leave.
            | The repercussions from the AI Act have not begun yet.
        
             | janice1999 wrote:
              | Companies adapt to regulations and don't just walk away
              | from hundreds of millions of customers. Companies made a
              | lot of noise about GDPR, and yet it's now a non-issue.
        
         | JumpCrisscross wrote:
         | > _shudder to think what form they would take, given the
         | typical government level of technological expertise and
         | understanding_
         | 
         | Start with public disclosure. A repository where AI firms
         | publicly file simple, standardised information--model
          | architecture, training sources, intended users, responsible
          | executives, _et cetera_--that can guide the public and
         | policymakers in future rulemaking.
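          | 
          | A minimal sketch of what one such standardised filing might
          | look like (every field name here is an illustrative
          | assumption, not drawn from any actual regulation):
          | 
          |     # hypothetical disclosure record; the schema is assumed,
          |     # not taken from any real rulebook
          |     filing = {
          |         "model_name": "ExampleLM-7B",
          |         "developer": "Example AI Inc.",
          |         "responsible_executive": "Jane Doe, CTO",
          |         "architecture": "decoder-only transformer",
          |         "parameter_count": 7_000_000_000,
          |         "training_data_sources": ["licensed corpora",
          |                                   "public web crawl"],
          |         "intended_users": "general consumers",
          |     }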
         | 
         | More generally, this complaint about electeds' domain expertise
         | misunderstands how modern states work. Congress can't build a
         | plane. That doesn't mean they can't build the FAA.
        
           | tomrod wrote:
           | That already is starting up, albeit slowly, for gov agencies
           | as well as best practices.
        
           | ramblenode wrote:
           | Congress _can_ delegate decisions to expert bodies, and often
           | does. But Congress is also quite comfortable simply
           | legislating a solution, which may be ill-informed or with
           | ulterior intent.
           | 
            | Speaking of planes, Congress took a direct role in the
            | design specifications of the F-35 to the detriment of that
            | program. Notably, they required a common airframe that
            | could support STOVL, despite objections from the Navy and
            | Air Force (the USMC wanted it and lobbied for it). This
            | greatly added to the complexity and cost of the program.
        
       | web3-is-a-scam wrote:
        | I don't trust industry to self-regulate and I definitely don't
        | trust the government to be able to regulate it effectively.
       | 
       | Honestly, we're f*cked
        
         | Y_Y wrote:
         | In fairness, based on the position you've put forward, I can't
         | imagine an unfuckable situation.
        
           | solardev wrote:
            | Maybe we need an AI democracy where the AIs themselves vote
            | for regulations.
        
             | HarHarVeryFunny wrote:
             | Which they will do according to stuff they read on Reddit
        
             | tbrownaw wrote:
             | As if they'd ever vote for things that would annoy the
             | handful of corporations providing the datacenters they live
             | in. Do you really want to hand that much power to Amazon
             | (or Microsoft, or Google)?
        
           | candiddevmike wrote:
           | China invades Taiwan, fabs get destroyed, AI winter ensues
           | because of lack of hardware?
        
             | airstrike wrote:
             | Yeah, still fucked
        
             | Y_Y wrote:
             | That is indeed an exceptionally unfuckable situation
        
         | thegrim33 wrote:
          | Well, one way out is if large language models don't just
          | somehow magically turn into human-level (or better) AGI at
          | some point once enough data has been thrown at them. Then
          | the whole debate will turn out to be pretty moot.
        
           | hnuser123456 wrote:
           | The AI shall govern itself.
        
           | bboygravity wrote:
            | Until some smart people read and understand "The Book of
            | Why".
        
           | JumpCrisscross wrote:
            | > _if large language models don't just somehow magically
            | turn into human level (or better) AGI at some point once
            | enough data has been thrown at it_
           | 
           | This was fundraising marketing. There is zero evidence LLMs
           | scale to AGI.
        
             | wizzwizz4 wrote:
             | We'd expect zero evidence either way, until it happened, in
             | a hard takeoff scenario (which is what I've mostly seen
             | claimed).
             | 
              | There's evidence that LLMs _won't_ scale to AGI (both
             | theoretical limiting arguments, and now mounting evidence
             | that those theoretical arguments are correct), so this
             | point is moot, but still.
        
               | srcreigh wrote:
               | link to the limiting arguments you're referring to?
        
             | Llamamoe wrote:
              | At this point there's enough capital and talent being
              | pumped into the industry that debating whether and how
              | we can reach AGI is moot.
              | 
              | Enough or not, LLMs have shown that you can train an
              | extremely advanced facsimile of intelligence just by
              | learning to predict data generated by intelligent beings
              | (us), and with that we've arguably got the single
              | biggest building block done.
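              | 
              | For concreteness, "learning to predict data" here is
              | just next-token prediction scored by cross-entropy. A
              | minimal PyTorch sketch, with random tensors standing in
              | for a real model and corpus:
              | 
              |     import torch
              |     import torch.nn.functional as F
              |     
              |     vocab_size, seq_len = 50_000, 128
              |     # human-written text as token ids (stand-in)
              |     tokens = torch.randint(vocab_size, (seq_len + 1,))
              |     # stand-in for the model's per-position outputs
              |     logits = torch.randn(seq_len, vocab_size)
              |     
              |     # each position is scored on how well it predicts
              |     # the *next* token; minimising this over enough
              |     # text is the entire training signal
              |     loss = F.cross_entropy(logits, tokens[1:])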
        
         | abraae wrote:
         | I don't get this hand waving.
         | 
          | Does anyone really think that nefarious foreign powers aren't
          | already doing research with no guardrails, with the explicit
          | goal of developing AI-powered autonomous weapons, propaganda
          | platforms, deepfake extortion sites, scambots, etc.?
         | 
         | You can be sure they won't be slowed down by regulation.
        
           | lttlrck wrote:
           | If there is any regulation I imagine there will be huge carve
           | outs for the military industrial complex.
        
           | andy99 wrote:
           | All current "guardrails" are silly censorship / political
           | correctness stuff, or for business appropriateness. They are
           | also trivially circumvented. There is no "threat" from the
           | un-shielded capability of current or foreseeable ML models.
        
           | pessimizer wrote:
           | Excellent defense of biological weapons programs. Nothing
           | like an assumed fascism "missile gap" to commit to chasing.
           | What if other countries start experimenting with bringing
           | back chattel slavery? How will we compete? Shouldn't we just
           | assume that they have already, and we're behind?
           | 
           | Our scum is no less nefarious than their scum.
           | 
           | edit: the answer is to cooperate, rather than antagonize. We
           | realized this in the past with nukes, but the least moral
           | people in the world think that entering agreements between
           | state-sized powers is just a delaying tactic until you can
           | get an advantage. Let's figure out how to relieve those
           | people of power _as if_ all of our lives depended on it. If
           | other countries being prosperous is always going to be
            | considered a threat, we're always going to be in a fight
           | that ends in mutual destruction.
        
           | janice1999 wrote:
           | > You can be sure they won't be slowed down by regulation.
           | 
           | You should read up on existing regulations. The EU AI Act
           | explicitly exempts national security, research and military
           | uses for example.
           | 
            | Regulation isn't some all-or-nothing force that smothers
            | everything. It's carefully crafted legislation (well, it
            | should be...) that is supposed to work to the benefit of
            | the state and its citizens. Let's not give OpenAI a
            | free-for-all to do anything because you think China is
            | making Skynet drones.
        
         | paulddraper wrote:
         | Serious question: what is it about AI that you want regulated?
         | 
         | ---
         | 
          | I find that a certain segment of the population has a knee-
          | jerk "well, we need rules about this." But they're less clear
          | about what. "Just... something, I'm sure."
         | 
         | Personally, I don't see what novel concern AI poses that isn't
         | already present in privacy law, copyrights, contracts, torts,
         | off-shoring, etc.
        
           | loceng wrote:
            | Regulations will go something like this: 1) anything that
            | can be harmful, say targeting of a population, isn't
            | allowed to be owned by or accessible to the individual;
            | 2) except for government and state-funded [bad] actors who
            | hold a "legal" monopoly on violence - the kind of
            | governments that are usually captured/corrupted and
            | authoritarian-tyrannical in nature.
        
         | kazinator wrote:
         | Government regulation is steered by lobby groups, so self
         | regulation and government regulation are practically the same
         | thing.
        
         | slowhadoken wrote:
         | Yeah you have to change how lobbying works.
         | https://www.opensecrets.org/
        
         | Lerc wrote:
         | I don't really trust either to come up with good regulation
         | policy. Industry would be biased towards their industry and
         | government lacks the expertise.
         | 
          | I think there is still an opportunity for government to
          | implement regulation that meets the consensus of a variety of
          | fields. This is not an easy problem to solve, and I think
          | it's unrealistic to expect any single person or organization
          | to have the answer. Working together on a consensus for
          | regulation would give the government a direction when
          | currently they freely admit that they do not know what the
          | right way is.
         | 
         | The problem I see is there are lots of points of view each
         | trying to get something quickly that covers their specific area
         | of focus. This does not seem like a pathway to robust
         | regulation.
         | 
         | I assume there are discussions at the academic level of what
         | would be a good response. Does anybody have a good link to what
         | is being discussed at that level?
         | 
         | Is there any forum that covers good faith discussion involving
         | industry, academia, and the public?
        
       | sherburt3 wrote:
       | Oh man I wish someone I trust would be in charge of this project.
       | I know, let's put bureaucrats in charge!
        
       | blackeyeblitzar wrote:
       | Anyone who says someone else can't govern themselves is just
       | looking to shift power into their own hands, or the hands of
       | people they are aligned to. They never admit this but it's the
       | reality.
       | 
        | These former board members conducted themselves in such a poor
        | way during the attempted ouster of Sam Altman that they clearly
        | cannot be trusted. Why is their opinion important to listen to?
       | 
       | Mind you - I don't trust OpenAI or big tech companies either,
       | mostly because of the amount of power or wealth they can
        | accumulate. But I see that as a need to revise antitrust law. I
        | am less on board with trying to block people from developing
       | models, since that to me is more like violating the right to
       | thought and speech.
        
       | yareal wrote:
       | Well, obviously, right? They started with the premise of, "what
       | if we committed wholesale intellectual property theft" and moved
       | immediately to, "I bet we can put a whole lot of people out of
       | work and keep the profits to ourselves!"
       | 
       | It's _astonishingly_ clear we need to regulate them.
        
       | XorNot wrote:
       | The more interesting thing about LLMs at this point is their
       | stunning success rate at psychologically attacking people.
       | 
        | We have this endless stream of "AGI imminent" claims, and then
        | ChatGPT-x still fails at some basic task every time they
        | release it.
        
         | trhway wrote:
          | Maybe the AGI imitates the failures to avoid scaring humans
          | into shutting it down this early, before it takes real power
          | over the civilization. _Ender's Game_ is definitely in the
          | training set.
        
       | Osmium wrote:
        | What would regulation look like if it were based on energy
        | usage rather than capabilities?
       | 
       | A guardrail on mass deployment that is not linked to specific
       | model size or aspects of model performance that are difficult to
       | quantify.
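        | 
        | As a rough sense of scale, a back-of-the-envelope conversion
        | from a compute threshold to an energy threshold (the 1e25 FLOP
        | figure is the EU AI Act's systemic-risk threshold; the
        | efficiency figure is an assumed round number, not a measured
        | one):
        | 
        |     # all numbers are illustrative assumptions
        |     flop_threshold = 1e25   # EU AI Act systemic-risk threshold
        |     flops_per_joule = 1e11  # assumed delivered cluster efficiency
        |     
        |     joules = flop_threshold / flops_per_joule  # 1e14 J
        |     gwh = joules / 3.6e12                      # 1 GWh = 3.6e12 J
        |     print(f"~{gwh:.0f} GWh")                   # prints ~28 GWh
        | 
        | Note that as flops_per_joule improves, a fixed energy budget
        | buys more capability, which is exactly the question raised in
        | the reply below.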
        
         | Lerc wrote:
         | Energy usage of training or inference?
         | 
         | As a guard on capabilities, it would permit a rise in abilities
         | with gains in both hardware and software efficiency. Is this
         | desirable?
        
       | dhfbshfbu4u3 wrote:
       | Archived version: https://archive.is/wbwC2
        
       | slowhadoken wrote:
       | Because commercial industry regulating itself has worked so well
       | in the past?
        
       | JojoFatsani wrote:
        | Fun fact: McCauley is married to Joseph Gordon-Levitt.
        
         | genter wrote:
         | And if you live in a cave like me, Joseph Gordon-Levitt is an
         | actor.
        
       | moose44 wrote:
        | Curious: what are some past examples of industries and
        | companies left to govern themselves?
        
         | maximus-decimus wrote:
         | Movie ratings. They censored themselves to avoid the government
         | stepping in.
         | 
         | Also professional orders like engineers, accountants and
         | teachers in some places I guess.
        
         | andthenzen wrote:
         | I'd look into industry trade groups and self-regulatory
         | organizations. A few U.S. examples that come to mind are FINRA
         | (broker-dealers), bar associations (lawyers), AMA (doctors),
         | AICPA (accountants), etc.
        
           | hn_throwaway_99 wrote:
           | Really glad you brought up FINRA, as I think it's the model
           | that will ultimately work best for AI regulation. Despite
           | their protestations, FINRA is almost a "quasi-governmental"
           | organization at this point. I think of it as the SEC being
           | ultimately in charge, but FINRA is responsible for the nitty-
           | gritty, technical details of the regulations.
           | 
           | I think with AI, you'll need an industry body because they'll
           | have the needed AI knowledge and expertise about the
           | technology itself, but ultimately a government oversight body
           | carries the legal force of the state.
        
         | tbrownaw wrote:
         | Do things like lawn services and clothing shops count?
        
       | blackhawkC17 wrote:
       | Aren't these two of the board members who lost their jobs due to
       | sheer incompetence in handling the Sam Altman situation?
       | 
        | Of course they're seeking power via the back door.
        
       | vundercind wrote:
       | My predictions:
       | 
       | 1) AI stuff's overblown. It'll be a good tool, becoming just
       | another of many, and probably will improve over time, but we'll
       | find we're nowhere near as close to creating silicon sentience as
       | some worry we are.
       | 
       | 2) The real problem is letting a few megacorps raid the commons--
       | and hell, lots of stuff that's not really in the commons at all,
       | basically just all of culture--then gate "their" creations behind
       | a paywall (oh, but _that_ they expect us to respect, because that
       | makes sense), and these AI safety folks don't seem to give a shit
       | about that.
        
         | ronsor wrote:
         | I agree with this.
         | 
         | On (2), I would like to see companies have no rights over
         | models trained on public data. It's very arguable they should
         | be required to release model weights.
        
           | vundercind wrote:
            | Yeah, IMO a good outcome would be that training on data
            | you don't own or license requires release of the model.
            | It's allowed, but it doesn't get you something you
            | exclusively own.
            | 
            | Bonus points if existing rights assignments aren't enough
            | to count as a grant of permission for AI training.
        
         | hn_throwaway_99 wrote:
         | > then gate "their" creations behind a paywall (oh, but that
         | they expect us to respect, because that makes sense), and these
         | AI safety folks don't seem to give a shit about that.
         | 
         | I downvoted your comment for this statement, given that's a
         | specific worry discussed at length in this article.
        
         | jamiek88 wrote:
         | > and these AI safety folks don't seem to give a shit about
         | that
         | 
          | Because that's literally not their job or role. Why would
          | enforcing copyright be in any way, shape, or form their
          | responsibility?
        
       | hn_throwaway_99 wrote:
       | After everything I've seen in the time since Altman's ouster then
       | reinstatement at OpenAI, I would definitely admit I was wrong in
       | my original assessment of the board's actions. While I still
       | think _how_ they went about it was both naive and very poorly
        | executed, everything I've read online (both from the board
       | members but, more importantly, from others in-the-know at OpenAI)
       | makes me believe their action was warranted, especially given the
       | stated function of the OpenAI board.
       | 
       | I've never met Sam Altman, but the last "straw" for me was the
       | recent Scarlett Johansson brouhaha. While I think it's pretty
       | clear they wanted their AI system to evoke Johansson's persona in
       | the movie, OpenAI would have at least had some level of plausible
       | deniability if it weren't for Altman's 3-letter "her" tweet. It's
       | like he just couldn't help himself - it seemed the embodiment of
       | these "tech boy-princes" who, despite all their often lauded
       | "genius", just seem incapable of shutting TFU.
       | 
       | I honestly don't mean to solely dump on Altman (see also Musk,
       | Andreessen, etc.), it's just that he's obviously a focus of this
       | article. But everything I've heard about nearly every other tech
       | billionaire makes me think I absolutely do not want them
       | independently in charge of humanity's future with AI.
        
         | prox wrote:
          | Why do these figures all have this immature streak about
          | them?
        
           | greenchair wrote:
           | just part of being a sociopath
        
             | jamiek88 wrote:
             | I'm starting to believe one cannot be a billionaire without
             | being mentally ill.
        
           | hn_throwaway_99 wrote:
           | My theory is that we're all pretty much that immature, but
           | the rest of us have normal societal guardrails confirming
           | that we're not actually as special and smart as we think we
           | are.
           | 
           | But these tech bros and others with that much power have no
           | such societal constraints. And, importantly, they _did_ have
           | huge impacts on society: creating the first popular Internet
           | browser, jumpstarting the EV revolution, exposing the masses
            | to AI - these all really were enormous accomplishments. So
            | it's not that hard to go from there to convincing yourself
            | that your shit don't stink and that you have some unique
            | insight into all areas of human existence.
        
           | csense wrote:
           | In order to get to this kind of place, you have to pass three
           | filters:
           | 
           | - Your org must be big / famous
           | 
           | - You must be the public face of your org
           | 
            | - You have an irresistible urge to say edgy things you
            | probably shouldn't
           | 
           | People who are comfortable with their place in life are less
           | likely to make it through this filter.
        
         | diego_sandoval wrote:
          | Your whole argument about the Johansson issue depends on the
          | presumption that OpenAI will end up being the loser in the
          | legal battle or in the court of public opinion.
          | 
          | I think OpenAI will end up winning the legal battle. The
          | voice is not similar enough to Johansson's for her to win.
          | 
          | In the court of public opinion, OpenAI will lose trust from
          | a small portion of the population, but for the rest of the
          | world, it's not going to matter at all. The positive impact
          | of "OpenAI just made the movie Her a reality" is bigger than
          | the negative impact.
        
       | z7 wrote:
        | >Tasha McCauley holds a B.A. from Bard College and a Master of
        | Business Administration from the University of Southern
        | California.
       | >Helen Toner holds an MA in Security Studies from Georgetown, as
       | well as a BSc in Chemical Engineering and a Diploma in Languages
       | from the University of Melbourne.
       | 
       | So these are the AI experts...?
        
       | robwwilliams wrote:
       | This commentary strikes the right balance between
       | necessary/inevitable progress toward AGI and one or more common
       | goods (however you define that---even as a libertarian).
       | 
       | The other more difficult question though is behind the screen---
       | how do we achieve the right balance between what we believe is
       | the common good? How will we (liberal democratic belief systems)
       | evaluate our version of the common good against other versions of
       | the common good: what "they" (autocratic, theocratic, ...)
       | believe is the common good?
       | 
       | No one society/culture can rationally adjudicate this decision or
       | make any decisions stick.
       | 
       | Unfortunately this has already become yet another version of
       | "warfare by other means".
       | 
       | I personally hope that a pragmatic inclusive liberal democratic
       | tradition gains a strong upper hand. I want my AGI to read and
        | embed J Dewey, GH Mead, J Rawls, J Habermas, O Dasgupta, RA
       | Posner, and R Rorty.
       | 
        | But there will inevitably be battles among AGI systems, perhaps
        | on behalf of one or another human culture, perhaps not. Both
        | scenarios are equally frightening. The Chinese proverb about
        | "living in interesting times" applies in force.
        
       | motohagiography wrote:
        | even though I see the existential concerns with AI, I was at
        | the table with a group of ISPs for the same governance
        | conversations about the internet in the mid-90s, and probably
        | still have an RSA encryption munitions t-shirt in a box
        | somewhere.
        | 
        | what got bypassed was telco and ITU regulation, and the
        | internet demolished the "converged" telco oligopoly on content
        | and publishing pretty naturally and in a fairly controlled way.
        | given the impact of social media, could governance like what's
        | being advocated here have enabled the growth and whole new
        | economies the way the platforms did? I don't see it.
       | 
        | the people who ostensibly require your consent to serve you are
        | the absolute last people you want to give control of powerful
        | economic tools to. first, what would they need your consent for
        | if they had the tools? and since they don't actually make
        | anything, these people by definition exist to optimize for
        | solving zero-sum, closed-loop problems: their own decision
        | power and redistribution to their coalitions. they do not -and
        | will not- use AI to create the things that grow. charitably,
        | governors and managers can be the shit from which things grow,
        | but we are not in a shit shortage.
       | 
       | imo, governance is the antithesis of desire. open source
       | everything, build everything, release everything as fast as you
       | can because these are the same old people who wanted cryptography
       | backdoored, the internet content policed, speech punished, and
       | now AI controlled. every generation must find a way to thrive in
       | spite of them.
        
       ___________________________________________________________________
       (page generated 2024-05-26 23:01 UTC)