[HN Gopher] "Accelerationism" is an overdue corrective to years ...
       ___________________________________________________________________
        
       "Accelerationism" is an overdue corrective to years of gloom
        
       Author : jseliger
       Score  : 49 points
       Date   : 2024-02-12 15:59 UTC (7 hours ago)
        
 (HTM) web link (www.thenewatlantis.com)
 (TXT) w3m dump (www.thenewatlantis.com)
        
       | quantified wrote:
       | The investor class makes the most money when there is a new
        | growth area. Ergo, getting more new growth areas is in their
        | interest, and it makes sense to keep putting new echoes into the
        | chamber; maybe some will escape to the wider world. Perhaps AI
        | will be as impactful as splitting the atom; who can tell?
        
       | JohnFen wrote:
       | > [...] but to widespread public concerns about the risks posed
       | by the tech industry at large. Effective accelerationists worry
       | that these concerns have become so entrenched that they threaten
       | to extinguish the light of tech itself.
       | 
       | Those "years of gloom" (which aren't very many years -- has
       | everyone forgotten when the tech industry was widely seen in
       | optimistic terms?) have been brought on by the behavior of the
       | tech industry itself, in large part because of the misapplication
       | of the idea "move fast and break things" (which is, unless I'm
       | misunderstanding, the very essence of e/acc that this article
       | discusses).
       | 
       | Our industry has been breaking a lot of things that people don't
       | want broken, then tends to shame people for being upset about
       | that. The problem isn't some inherent fear of tech itself, it's a
       | (supportable) fear of the tech industry and what other things it
       | may damage as time goes on.
       | 
       | If the industry wants to assuage these fears, the solution isn't
        | to move _even faster_ and break _even more_ things, it's to
       | start demonstrably acting in a way that doesn't threaten people
       | and the things they hold dear.
        
         | fallingknife wrote:
          | There is no possible path for advancement that doesn't
          | threaten people and the things they hold dear. It has never
          | worked that way in the past, and it won't now.
        
           | toomuchtodo wrote:
           | Tech is like a fission reactor: powerful, elegant, delivering
           | value through leverage, but requires strong controls and
           | protections (moderators, containment) for humans so it
           | doesn't ruin us all.
           | 
           | People worry about AI paperclip maximizing, but Tech is
           | already that in some ways (find or build moats, blitz
           | scaling, no concerns for the harm incurred). It's just fuzzy
           | cohorts of tech workers and management doing the paperclip
           | maximizing, for comp and shareholder value respectively. Not
           | much different than AI reward functions.
        
           | JohnFen wrote:
            | Yes, but there's a critical difference: the tech industry
            | now breaks many things at an unprecedented pace, and
           | largely doesn't offer a reasonable replacement for the things
           | that have been broken.
           | 
           | People can only handle a limited amount of loss within a
           | given period of time before they start pushing back hard
           | against further loss and consider those causing them harm to
           | be forces of evil.
           | 
           | There's also another factor that the tech industry is largely
           | blind to: tech people tend to think that "we know best" and
           | that pushing our ideas on the general public against their
           | will is a Good Thing. But it's not a Good Thing, it's a Bad
           | Thing.
           | 
           | Another thing we need to be doing is allying with the general
           | public rather than dictating to them.
        
             | fallingknife wrote:
             | Who is pushing anything on the public? The tech industry
             | wouldn't exist in the form that it does now except that it
             | gives people something they want, not the other way around.
             | 
             | Disruption from tech advancement is caused by tech changes
             | displacing existing industries and it hurts the people
             | currently making money from those industries. But to be
             | against that disruption you would have to believe that
             | those people have some sort of right to make that money and
             | continue doing the things that make them those profits when
             | the public wants the more efficient tech. So really it's
             | the anti tech people who are pushing things on the public.
             | 
             | E.g. people often complain about Amazon displacing small
             | retailers, but really it's just that given the choice, most
             | people choose Amazon.
        
               | JohnFen wrote:
               | > except that it gives people something they want, not
               | the other way around.
               | 
               | That used to be true. Now, though, a very common thing
               | I've noticed with people is that they use tech not
               | because they want to or because it solves a problem for
               | them, but because they are disadvantaged if they don't.
               | 
               | It's an important difference. If people willingly choose
               | to use a thing, then they'll be inclined to think about
               | it positively. If they use a thing because they feel they
               | have no choice, then that thing is more likely to be
               | viewed as adversarial, because it is.
               | 
                | I think that's largely where the tech industry has
                | arrived. Further, the tech industry shows little to no
               | empathy to those whose lives are worse because of what it
               | does.
        
               | fallingknife wrote:
               | People may feel that way, and I'm sure in some cases they
               | really mean it. But the reason they always give for why
                | they have to use it is some form of "because everyone
                | else does." And it had to get to that point because
               | people wanted it in the first place. Otherwise it just
               | wouldn't have sold in the market when it came out.
        
           | hooverd wrote:
           | Leaded gasoline is a good example of an advancement where the
           | naysayers were right.
        
           | itishappy wrote:
           | Kinda the point, no? If history shows progress is disruptive,
           | then accelerationism seems likely to accelerate disruptions.
           | Many people can connect these dots, and not everyone sees
           | this as positive.
        
           | jwond wrote:
           | "Advancement" implies improvement. Just because things are
           | changing does not mean they are improving.
        
             | the_snooze wrote:
             | Yeah, it's a misguided and naive way of thinking. Deciding
             | whether a technological development is good (and for whom,
             | and to what extent, and with what trade-offs, and on what
             | time horizons) is a really difficult task. So some folks
             | will replace it with a much easier question: "Is this new?"
             | 
             | https://en.wikipedia.org/wiki/Attribute_substitution
        
         | nonrandomstring wrote:
         | I agree mostly, though I think the "break things" bit got
         | twisted and misunderstood.
         | 
          | We were supposed to break: limits, barriers, status quos,
          | ossified ideas... Instead we broke: treasured social norms,
          | privacy, mutual respect, and dignity. There's a difference
         | between benevolent innovation and reckless iconoclasm. I think
         | it started the day Peter Thiel gave money to Mark Zuckerberg.
        
           | trgn wrote:
           | I understood it much smaller fwiw. As long as you can add
           | useful features really quickly, it's fine if your website
           | crashes every once in a while.
        
             | Kye wrote:
             | Yep. It came from Facebook, and it was changed to favor
             | stability while moving fast almost a decade ago.
             | 
             | https://en.wikipedia.org/wiki/Meta_Platforms#History
             | 
             | >> '"On May 2, 2014, Zuckerberg announced that the company
             | would be changing its internal motto from "Move fast and
             | break things" to "Move fast with stable
             | infrastructure".[40][41] The earlier motto had been
             | described as Zuckerberg's "prime directive to his
             | developers and team" in a 2009 interview in Business
             | Insider, in which he also said, "Unless you are breaking
              | stuff, you are not moving fast enough."[42]"
        
             | __s wrote:
              | Last night I changed some solid-js UI code to replace
              | mutating the game object held in UI state with updating
              | the UI state with mutated clones _(cloning is efficient &
              | shares most data; optimizations made for AI efficiency
              | long ago)_
             | 
             | ofc, with these stale game references around, I soon got
             | reports of broken things: targeting was broken, pvp was
             | broken, fade out animations were broken
             | 
             | A few hours later these issues were resolved. The players
             | are used to these things happening sometimes. It's fine
             | since the stakes are low. It's just a game after all. &
             | being free, the active playerbase understands that they're
             | QA
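A minimal sketch of the clone-instead-of-mutate pattern the comment above describes. The state shape and names here are hypothetical, not from the actual game's code; the point is that a shallow clone shares untouched fields cheaply, while anything still holding the old reference goes stale:

```javascript
// Instead of mutating the game object held in UI state, produce a
// shallow clone with the changes applied and hand that to the UI.
function withMove(game, move) {
  // Spread copies top-level fields; nested structures like `board`
  // are shared by reference, which keeps cloning cheap.
  return { ...game, turn: game.turn + 1, lastMove: move };
}

const game = { turn: 1, lastMove: null, board: ["a", "b"] };
const updated = withMove(game, "x");

console.log(game.turn);                    // 1 -- old reference is stale
console.log(updated.turn);                 // 2
console.log(game.board === updated.board); // true -- structure shared
```

Any code that kept the old `game` reference (targeting, PvP, animations) sees stale data until it is handed `updated`, which is exactly the class of breakage described.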
        
             | chmod600 wrote:
             | And, crucially, you'd generally be around to help fix the
             | website.
        
           | vlovich123 wrote:
            | No? You're projecting what you want it to mean. "Break
            | things" means: don't be afraid to break
            | functionality/features/infrastructure in the process of
            | improving it (new features, new scaling improvements, etc.).
            | That's why it was renamed "Move fast with stable
            | infrastructure".
           | 
           | > The earlier motto had been described as Zuckerberg's "prime
           | directive to his developers and team" in a 2009 interview in
           | Business Insider, in which he also said, "Unless you are
           | breaking stuff, you are not moving fast enough."
           | 
           | It's about growth at all costs and then once Facebook got big
           | enough they had to balance growth against other factors (+
           | the things people were doing that were causing breakages
           | weren't actually helping to grow).
           | 
           | https://en.wikipedia.org/wiki/Meta_Platforms#History
        
             | iakov wrote:
              | Mottos like that live their own life. Take Google's "don't
              | be evil": people remember that, and when they see all the
              | evil shit Google does now, of course they're going to
              | recall the motto and laugh at the irony. Whatever Sergey
              | meant when he coined the phrase is irrelevant imo.
        
             | nonrandomstring wrote:
             | > You're projecting what you want it to mean
             | 
             | Maybe true. But then if it's just about development it's a
             | rather mundane old chestnut about reckless engineering
             | versus good software engineering etc. Granted, that's a
             | different discussion and we can see the tide turning now in
             | terms of regulation and mandated software quality.
             | 
              | Sure, the Post Office/Fujitsu scandal, Boeing, etc., show
              | how bad software actually ruins lives, but for the most
              | part the externality imposed by the reckless software
              | engineer is measured in "hours of minor inconvenience".
             | 
             | That said.. I wonder if you did a ballpark calculation of
             | how much harm lies behind the Google Graveyard [0], whether
             | the cost of what is broken outweighs the benefits of it
             | ever having been made?
             | 
             | [0] https://killedbygoogle.com/
        
               | vlovich123 wrote:
                | Engineering was literally taught to me at a well-
                | respected engineering university as making an appropriate
                | cost/reward trade-off and being careful in taking that
                | risk. But the economics of the business were important
                | too, as they were part of the competition to drive more
                | efficiency into a system. In classical engineering the
                | stakes are higher because you're dealing with people's
                | lives, so you have to be more careful and add extra
                | margins of error even if it's more expensive.
               | 
               | One person's recklessness is another person's calculated
               | risk. The consequences of FB engineering mistakes are
               | minimal in both impact to customers and FB's business. As
               | FB scaled, the impact to individual people is still
               | largely minimal (perhaps even beneficial) but the impact
               | to their own business is larger and same for their
               | customers if their ads aren't getting eyeballs. So they
               | shifted as big companies do. It's kind of the best case
               | of thoughtful risk taking - we're rolling out a new
               | system and we don't know what could go wrong at scale and
                | we put in the monitoring we think we need. If there are
                | problems we'll catch them with our monitoring/alerting
                | and roll back or fix. You see the outages, but not the
                | 99% of changes that go in without anything going wrong,
                | which lets the business resolve issues quickly and
                | cheaply.
               | 
                | As for Boeing and Fujitsu, I'd say those are very
                | different situations that aren't engineering problems,
                | nor do they indicate a move fast and break things
               | mentality. As with many things like that, the engineering
               | mistakes are a small detail within the overall larger
               | picture of corruption. Boeing wanted to escape being
               | classified as a new aircraft and met a perfect storm of
               | skimping on hardware and corrupting the FAA through
               | regulatory capture. I don't fully understand Boeing's
               | role with the recent failures as a subcontractor is
               | involved, but my hunch is that they're nominally
                | responsible for that subcontractor anyway. Same goes for
                | Fujitsu: bad SW combined with an overly aggressive
                | prosecution mandate, and then cover-ups of mistakes made
                | on the assumption that the SW was correct, rather than
                | the assumption that new SW which hadn't run anywhere
                | before might contain bugs. (I'm not really sure whether
                | Fujitsu hid the bugs or politicians did, but certainly
                | the Post Office officials hid the auditors' reports that
                | found bugs in the SW and continued with prosecutions
                | anyway.)
               | 
               | Btw in engineering classes, all the large scale failures
               | we were taught about involved some level of corruption or
               | chain of mistakes. A contractor not conforming to the
               | engineering specs to save on costs (valid optimization
               | but should be done extra carefully), overlooking some
               | kind of physical modeling that wasn't considered industry
               | standard yet, kickbacks, etc.
        
               | nonrandomstring wrote:
               | We probably had similar rigorous educations at that
               | level. In SE we studied things like the '87 Wall St.
               | crash versus Therac-25. The questions I remember were
               | always around what "could or should" have been known, and
               | crucially... when. Sometimes there's just no basis for
               | making a "calculated risk" within a window.
               | 
               | The difference then, morally, is whether the harms are
               | sudden and catastrophic or accumulating, ongoing,
               | repairable and so on. And what action is taken.
               | 
               | There's a lot about FB you say that I cannot agree with.
               | I think Zuckerberg as a person was and remains naive. To
               | be fair I don't think he ever could have
               | foreseen/calculated the societal impact of social media.
               | But as a company I think FB understood exactly what was
               | happening and had hired minds politically and
               | sociologically smart enough to see the unfolding
               | "catastrophe" (Roger McNamee's words) - but they chose to
               | cover it up and steer the course anyway.
               | 
               | That's the kind of recklessness I am talking about.
                | That's not like Y2K or Mariner 1 or any of those very
                | costly outcomes that could have been prevented by a more
                | thoughtful singular decision early in development.
        
               | vlovich123 wrote:
               | I'm talking strictly about the day to day engineering of
               | pushing code and accidentally breaking something which is
               | what "move fast and break things" is about and how it was
               | understood by engineers within Facebook.
               | 
               | You now have raised a totally separate issue about the
               | overall strategy and business development of the company
               | which you'd be right about - if it were required to have
               | a PE license to run an engineering company, Zuckerberg
               | would have to have had his PE license revoked and any PEs
               | complicit in what they did with tuning for addictiveness
               | should similarly be punished. But the lack of regulation
               | in any engineering projects that don't deal directly with
               | human safety and how businesses are allowed to run is a
               | political problem.
        
               | nonrandomstring wrote:
                | I see we agree, and as far as day-to-day engineering
                | goes, I'd probably care very little about whether a bug
                | in Facebook stopped someone seeing a friend's kitten
                | pics.
               | 
               | But on the issue I'm really concerned about, do you think
               | "tuning for addictiveness" on a scale of about 3 billion
               | users goes beyond mere recklessness, and what do we do
               | about this "political problem" that such enormous diffuse
               | harms are somehow not considered matters of "human
               | safety" in engineering circles?
               | 
               | Is it time we formalised some broader harms?
        
           | mquander wrote:
           | Picture of two little identical castles, towns, and armies,
           | caption:
           | 
           | Their barbarous "barriers", "status quo", "ossified ideas"
           | 
           | vs.
           | 
           | Our blessed "privacy", "treasured social norms", "dignity"
        
             | samatman wrote:
             | The alternative to describing the meme here is to call it
             | by name: a Russell conjugation.
        
           | waynesonfire wrote:
            | I always thought "move fast and break things" as used at FB
            | was meant to empower the ambitious, talented, fresh crop of
            | Ivy League grads with the confidence to move forward despite
            | the poor decisions that come from lack of experience.
        
         | reissbaker wrote:
         | IDK, I think a big part of the "years of gloom" was an official
         | (but secret) NYTimes policy of only publishing negative stories
         | about tech, as confirmed by Vox journalist Kelsey Piper. [1]
         | 
         | 1: https://twitter.com/KelseyTuoc/status/1588231892792328192
        
         | captainbland wrote:
         | Ultimately it's the purse string holders who want to move fast
         | and break things. Investors are the people who would rather try
         | to shove ten figures into undercutting taxi markets everywhere
         | to try to build a monopoly. Imagine if instead they'd put that
         | into cancer treatments and diagnostics or novel forms of energy
         | generation. Move fast and break things is shit compared to
         | building new things at the centre of human need and at the edge
         | of human understanding.
        
       | nonrandomstring wrote:
       | This other thread [0] "Easy to criticise, hard to create" and my
       | remark here [1] about the demise of market research have
       | something in common with this topic.
       | 
       | It's about the gravity of the mob. Risk taking in business (which
       | is now being called "acceleration" AFAICS) is about the courage
       | not to think about "what everybody wants". I observe it's really
       | hard for the SV tech mindset to escape that gravity. Downvote me
       | all you like for saying such uncomfortable things.
       | 
       | [0] https://news.ycombinator.com/item?id=39346374
       | 
       | [1] https://news.ycombinator.com/context?id=39344991
        
         | ToucanLoucan wrote:
         | "Finding what the market wants and providing it" only worked
         | when the market had wants, and wants are finite. Once those are
         | all taken care of, everything past that is engineered desire,
         | which is where consumer culture comes into play.
         | 
         | You don't need to explain to someone why they want food. Of
         | course they want food, they're hungry. You do need to explain
         | to them why they want a pizza covered in gold leaf. You don't
         | need to explain why they want a car: they want to get around in
          | the United States, and a car is more or less mandatory. You
          | need to explain why they want a $100,000 SUV that gets worse
         | fuel economy than a comparable van while holding less cargo and
         | has such terrible visibility there's a non-insignificant chance
         | they will run over and kill one of their own children with the
         | thing. You don't need to convince them they want a smart phone,
         | you need to convince them they want a new smart phone that's
         | 11% faster than the old one even though their current one works
         | fine.
         | 
         | Tons and tons of business, not even remotely isolated to the
         | tech sector, has nothing at all to do with meeting consumer
         | demand or fulfilling wants in the market; it has to do with
         | building slightly different versions of products that already
         | exist, and then spending millions if not billions of dollars so
         | you can scream at the market's ear as loud as possible until
         | they think that voice is coming from inside their own heads,
         | and they'll buy it to make it shut up. And then repeat.
        
       | mistrial9 wrote:
        | As a contrast to "accelerate", let's consider "addiction"
        | cycles. Business-driven tech seeks addiction cycles in consumers
        | because they deliver orders-of-magnitude larger results.
        
       | mathgradthrow wrote:
       | If you're writing a nostalgic tech "thinkpiece" please be
       | specific about what you're nostalgic for.
        
       | debacle wrote:
       | In political circles, "accelerationists" believe that the demise
       | of the US is inevitable, and that the faster it happens the
       | better the world will be.
       | 
       | I see similar parallels to the tech sector. Is the death of SV
       | inevitable? If so, is the world better off if it dies quickly or
       | slowly?
       | 
       | Kind of a tangent to the core article, but it seems to me that we
       | are in a large-scale transition of society in many ways, and the
       | redemocratization of technology is a critical aspect of a bright
        | future.
        
         | namlem wrote:
         | That's a different type of accelerationism.
        
       | lm28469 wrote:
       | Accelerationism is the best way to go at it since it ensures
       | either a quick death or a fix and not the ugly in-between we're
        | all starting to slowly catch a glimpse of.
        
         | JohnFen wrote:
         | So in that view, accelerationism is about placing a high-
         | stakes, binary bet that such actions will result in a utopia
         | rather than utter destruction.
         | 
         | But what gives those people the right to gamble with the lives
         | of the rest of us in that way? The entire line of thinking is,
         | in my view, not only horribly egotistic and authoritarian, but
         | antagonistically so.
        
       | pfdietz wrote:
       | My attitude about safety of AI is this: if AI is an existential
       | risk dangerous enough to justify draconian measures, we're
       | ultimately fucked, since those measures would have to be perfect
       | for all future times and places humans exist. Not a single lapse
       | could be allowed.
       | 
       | And it's just not plausible humanity could be that thorough. So,
       | we might as well assume AI is not going to be that dangerous and
       | move ahead.
        
         | downWidOutaFite wrote:
          | I guess, but it's similar to how diplomacy and international
          | agreements need to be perfect forever to prevent nuclear war.
          | So far it has worked, and it's worth it to keep trying.
        
         | Mistletoe wrote:
         | Your first paragraph is exactly how I feel about nuclear
         | weapons, to put it into context. I don't think the logical
         | conclusion from that viewpoint is that nuclear weapons aren't
         | that dangerous so we should just move ahead.
        
           | pfdietz wrote:
           | I don't think nuclear weapons are the kind of existential
           | risk that AI doomsters imagine for AI.
        
             | thrill wrote:
             | Other than those that have called for nuking AI
             | datacenters.
        
               | pfdietz wrote:
               | That presumably demonstrates they think nuclear war is
               | less dangerous than AI.
        
               | PoignardAzur wrote:
                | I feel obligated to point out that nobody has argued for
                | nuking datacenters; the most the radical AI existential-
                | safety advocates have argued for is "have a ban on
                | advanced training programs, enforced with escalating
                | measures from economic sanctions and embargoes to, yes,
                | war and bombing datacenters". Not that anybody is
                | optimistic about that idea working.
        
           | Vegenoid wrote:
            | I think it has been empirically demonstrated that lapses
            | in the control and use of nuclear weapons can occur
            | without the destruction of humanity.
           | 
           | (I am not an AI doomer, nor do I feel that nuclear weapons
           | are not dangerous/should be less controlled)
        
         | PoignardAzur wrote:
         | I think that's the same kind of attitude that makes a lot of
         | people not take global warming seriously.
         | 
         | It's a way to process ideas you don't want to be true, sure,
         | but it's not a sensible or cost-effective way to deal with
         | potential threats.
         | 
         | (And yeah, you can argue AI x-risk isn't a potential threat
         | because it's not real or whatever. That's entirely orthogonal
         | to the "if it's true we're fucked so don't bother" line of
         | argument.)
        
       | bparsons wrote:
       | The author lacks any context for what accelerationism is.
       | 
       | The original idea comes from Marxists advocating for the adoption
       | of free market capitalism as a means of bringing about the
       | alienation of the working class, which is a necessary
       | precondition for socialist revolution.
       | 
       | The idea of effective accelerationism (which was a joke making
       | fun of people like Musk and SBF) is the rapid, uncontrolled
       | promotion of AI as a means of destroying the entire tech industry
       | and all that surrounds it.
        
       | dghlsakjg wrote:
       | Am I the only one that associates the term "accelerationism"
       | mostly with right-wing extremism and terrorism?
       | https://en.wikipedia.org/wiki/Accelerationism#Far-right_acce...
       | 
       | If you are a techno-optimist, you should ask your AI to come up
       | with something less associated with race war and neo-nazi
          | ideology.
        
         | UtopiaPunk wrote:
         | I mostly associate the term with the original left-wing version
         | of accelerationism (which is a little further up in the
         | Wikipedia article you linked to). In both versions, the view is
         | essentially that the society we have now is so bad as to be
         | practically unredeemable, so the next best course of action is
         | to accelerate this society's downfall so the new "good" society
         | can be built instead. Obviously, the vision of the "good" thing
         | that comes next is, uh, wildly different.
         | 
         | In either case, seems very different than what proponents of
         | e/acc are about, but in a cynical sense, not totally unrelated.
          | The Wikipedia article does include, "It has been regarded as an
         | ideological spectrum divided into mutually contradictory left-
         | wing and right-wing variants, both of which support the
         | indefinite intensification of capitalism and its structures as
         | well as the conditions for a technological singularity, a
         | hypothetical point in time where technological growth becomes
         | uncontrollable and irreversible."
        
           | devmor wrote:
           | e/acc seems very similar in premise to the original left wing
           | accelerationism, but with the very important caveat that its
           | proponents trust implicitly that whatever the current capital
           | market decides is worthy of funding will somehow be the
           | appropriate technology to accelerate.
        
       | keiferski wrote:
       | It's a bit funny, a bit annoying, and a bit scary how these e/acc
       | people don't seem to understand the slightest thing about Nick
       | Land's philosophy, which is ultimately where the term
       | _accelerationism_ came from. Probably because it's extremely
       | dense, filled with "Deleuzoguattarian schizoanalysis", and
       | because Land himself is hard to follow or understand. He gets
       | described as a neoreactionary, which is pretty accurate to my
       | reading, but...it definitely has little to do with, quoting from
       | the article: "Do something hard. Do it for everyone who comes
       | next. That's it. Existence will take care of the rest."
       | 
       | I am by no means an expert or huge fan of Land, but he's
       | definitely more along the lines of, "the machines are going to
       | eat everything and there's basically nothing you can do about
       | it."
       | 
       | So, in some sense, this is just another round of "economic elites
       | co-opting someone else's culture."
       | 
       | Edit: I did some searching on Twitter and found this great quote,
       | which really does sum it up:
       | 
       | > _E/ACC is a fitting end to Accelerationism. After having been
       | passed around by dissident intellectuals and online deviants it
       | can finally settle into retirement as another kitschy pastel MS
       | Powerpoint Californian grindset aesthetic stripped of all its
       | substantial insights._
       | 
       | https://twitter.com/augureust/status/1691893969678913692
        
         | ultra_nick wrote:
         | Actually, the e/acc community has stated it rejects Nick Land's
         | more recent work.
         | 
         | If you read older posts, you'll find the earlier e/acc members
         | got started with Fanged Noumena.
        
           | keiferski wrote:
           | Interesting. I don't doubt that the earliest online people
           | were familiar with it, but I am extremely doubtful that the
           | big names dropped in the article have even heard of Land.
        
       | VoodooJuJu wrote:
       | I wish they would have explained what the original
       | Accelerationism actually means instead of just giving it a quick
       | nod:
       | 
       | >"Accelerationism is unfortunately now just a buzzword," sighed
       | political scientist Samo Burja, referring to a related concept
       | popularized around 2017.
       | 
       |  _Accelerationism_ is a political reaction that basically
       | works like this: when faced with any kind of dilemma, choose
       | the option that is the most progressively destructive. The idea
       | is that society and its institutions are so tainted that the
       | only way to fix them is to keep knocking down all those
       | boundaries and Chesterton's Fences in order to effect the
       | institutional collapse we're headed toward anyway.
       | 
       | Why slowly implement policies that spell our doom when we can
       | implement them fast - accelerate!
       | 
       | That's the idea. And they should have mentioned that in the
       | article, because the contrast with the Globo Tech notion of
       | "Effective Accelerationism" is just good irony.
        
       | adverbly wrote:
       | Can we just stop with these absolute positions already?
       | 
       | You can justify horrible atrocities if you strongly believe in
       | extreme outcomes.
       | 
       | Effective altruism, accelerationism, doomers... All of these
       | mindsets are toxic for the same reason that extreme religious
       | stances are toxic. Believing that the world is going to end or
       | that it's going to reach salvation or whatever extreme outcome
       | you feel like, and then using that to justify short-term
       | atrocities is not the way.
       | 
       | Stop putting infinity in your forecasts. It breaks everything. We
       | have been down this path before many times and it never ends
       | well.
        
       | sp527 wrote:
       | Beff is a late-stage capitalism incarnation of the useful idiot,
       | sucking off the landed technogentry in hopes of securing a life
       | raft in the economic apocalypse e/acc is attempting to usher in.
       | When 89% of US stock equity is owned by the top 10%, you know
       | that e/acc is a sclerotic and thinly veiled "fuck the poors"
       | ideology being perpetrated by precisely that well-insulated 10%,
       | who have little to nothing to fear from a world where labor value
       | goes asymptotically towards zero.
        
         | astrange wrote:
         | > When 89% of US stock equity is owned by the top 10%, you know
         | that e/acc is a sclerotic and thinly veiled "fuck the poors"
         | ideology
         | 
         | You forgot to control for "older people own more stocks because
         | they spent more time saving for retirement".
        
       | AlexandrB wrote:
       | There's a combination of exploitation and lack of humility that
       | marks recent tech. On the one hand, a lot of newer tech treats
       | you like some kind of tech peasant: no sideloading, endless
       | dialogs that have no option to say "no", arbitration clauses, no
       | ability to fix your own devices, little or no customer support,
       | etc. On the other hand you have proclamations about what the
       | future will look like that turn out to be terribly wrong:
       | blockchain, metaverse, VR, self-driving truck convoys, etc.
       | 
       | After such an atrocious track record of tech leaders being
       | completely wrong about what "the next big thing" is, why should
       | anyone trust these people and why are they so certain they're
       | "accelerating" in the right direction?
       | 
       | Edit: I miss the era when tech companies treated their users as
       | customers, not as marks to be manipulated for the purpose of
       | profit maximization.
        
       | tmaly wrote:
       | A 50-year cycle? All of the recent breakthroughs seem to be
       | happening on ever-shortening cycles.
        
       | Animats wrote:
       |  _"Or, perhaps, wanting to be regulated is a subconscious way for
       | tech to reassure itself about its central importance in the
       | world, which distracts from an otherwise uneasy lull in the
       | industry."_
       | 
       | There is that. There hasn't been a must-have consumer electronics
       | thing since the smartphone. 2019 was supposed to be the year of
       | VR. Fail. 2023 was supposed to be the year of the metaverse.
       | Fail. Internet of Things turned out to be a dud. Self-driving
       | cars are still struggling. All those things actually work, just
       | not well enough for wide deployment.
       | 
       | LLM-based AI has achieved automated blithering. It may be wrong,
       | but it sounds convincing. We are now forced to realize that much
       | human activity is no more than automated blithering. This is a
       | big shakeup for society, especially the chattering classes.
        
         | PoignardAzur wrote:
         | Holy crap, I'm realizing it's been 4 years since Half Life
         | Alyx. I really wish it had been the first of many.
        
       | browningstreet wrote:
       | e/acc is just the logical +1 follow-up to hodl.
       | 
       | a tech subculture: creates its own acronym, posts a lot of
       | self-referential coded messages, adds exclamation marks,
       | ignores those who decry it as annoying. no whining no
       | complaining no critique. helps to get a tech alpha to
       | engage/retweet you. tolerate a little discourse but mostly
       | exclaim. make sure everyone knows you think X rulez. pro$it.
        
       ___________________________________________________________________
       (page generated 2024-02-12 23:00 UTC)