[HN Gopher] Strategic Wealth Accumulation Under Transformative A...
___________________________________________________________________
Strategic Wealth Accumulation Under Transformative AI Expectations
Author : jandrewrogers
Score : 80 points
Date : 2025-02-22 05:48 UTC (17 hours ago)
(HTM) web link (arxiv.org)
(TXT) w3m dump (arxiv.org)
| yieldcrv wrote:
| Do you have a degree in theoretical economics?
|
| "I have a theoretical degree in economics"
|
| You're hired!
|
| real talk though, I wish I had just encountered an obscure paper
| that could lead me to refining a model for myself, but it seems
| like there would be so many competing papers that it's the same
| as having none
| WorkerBee28474 wrote:
| Not worth reading.
|
| > this paper focuses specifically on the zero-sum nature of AI
| labor automation... When AI automates a job - whether a truck
| driver, lawyer, or researcher - the wages previously earned by
| the human worker... flow to whoever controls the AI system
| performing that job.
|
| The paper examines a world where people will pay an AI lawyer $500 to
| write a document instead of paying a human lawyer $500 to write a
| document. That will never happen.
| smeeger wrote:
| foolish assumption on your part
| kev009 wrote:
| That's a bit too simplistic; would a business have paid IBM the
| same overheads to tabulate and send bills with a computer
| instead of a pool of billing staff? In business the only
| justification for machinery and development is that you are
| somehow reducing overheads. The tech industry gets a bit warped
| in the pseudo-religious zeal around the how and that's why the
| investments are so high right now.
|
| And to be transparent I'm very bearish on what is being
| marketed to us as "AI"; I see value in the techs flying underneath
| this banner and it will certainly change white collar jobs but
| there's endless childish and comical hubris in the space from
| the fans, engineers, and oligarchs jockeying to control the
| space and narratives.
| gopalv wrote:
| > The paper examines a world people will pay an AI lawyer $500
| to write a document instead of paying a human lawyer $500 to
| write a document
|
| Is your theory that the next week there will be an AI lawyer
| that charges only $400, and then it's a race to the bottom?
|
| There is a proven way to avoid a race to the bottom for wages,
| which is what a trade union does - a union, by acting as one,
| controls a large supply of labour to keep wages high.
|
| Replace that with a company and prices, it could very well be
| that a handful of companies could keep prices high by having a
| seller's market where everyone avoids a race to the bottom by
| incidentally making similar pricing calls (or flat out
| illegally doing it).
| WithinReason wrote:
| You would need to coordinate across thousands of companies
| across the entire planet
| rvense wrote:
| That seems unlikely - law is very much tied to a place.
| IncreasePosts wrote:
| Yes, but legal documents don't necessarily need to be
| drafted by lawyers accredited in that locale. It usually
| helps though because they are familiar with the local law
| and other processes.
| echelon wrote:
| > There is a proven way to avoid a race to the bottom for
| wages, which is what a trade union does
|
| US automotive, labor, and manufacturing unions couldn't
| remain competitive against developing economies, and the jobs
| moved overseas.
|
| In the last few years, after US film workers went on strike
| and renegotiated their contracts, film production companies
| had the genius idea to start moving productions overseas and
| hire local crews. Only talent gets flown in.
|
| What stops unions from ossifying, becoming too expensive, and
| getting replaced on the international labor market?
| js8 wrote:
| > What stops unions from ossifying, becoming too expensive,
| and getting replaced on the international labor market?
|
| Labor action, such as strikes.
| somenameforme wrote:
| That doesn't make any sense as a response to his
| question. Labor actions just further motivate employers
| to offshore stuff. And global labor unions probably can't
| function because of sharp disparities in what
| constitutes good compensation.
| amanaplanacanal wrote:
| Possibly protectionist tariffs.
| habinero wrote:
| There have been several startups that tried it, and they all
| immediately ran into hot water and failed.
|
| The core problem is lawyers already automate plenty of their
| work, and lawyers get involved when the normal rules have
| failed.
|
| You don't write a contract just to have a contract, you write
| one in case something goes wrong.
|
| Litigation is highly dependent on the specific situation and
| case law. They're dealing with novel facts and arguing for
| new interpretations, not milling out an average of other
| legal works.
|
| Also, you generally only get one bite at the apple; there are
| no do-overs if your AI screws up. You can hold a person
| accountable for malpractice.
| chii wrote:
| > The core problem is lawyers already automate plenty of
| their work, and lawyers get involved when the normal rules
| have failed.
|
| this is true - and the majority of lawyers' work is in
| knowing past information and synthesising possible futures
| from that information. In contracts, they write up clauses
| to protect you from past issues that have arisen (and may
| be potential future issues, depending on how good/creative
| said lawyer is).
|
| In civil suits, discovery is what used to take enormous
| amounts of time, but recent automation in discovery has
| helped tremendously, and vastly reduced the amount of grunt
| work required.
|
| I can see AI helping in both of these aspects. Now, whether
| the newer AIs can produce the type of creative work that
| lawyers need to do post information extraction is still up
| for debate. So far, it doesn't seem to have reached the
| level required for a client to trust a purely AI-generated
| contract imho.
|
| I suspect the day you'd trust an AI doctor to diagnose and
| treat you, would also be the day you'd trust an AI lawyer.
| riku_iki wrote:
| > people will pay an AI lawyer $500 to write a document instead
| of paying a human lawyer $500 to write a document.
|
| there will be a caste of high-tech lawyers very soon who will
| be able to handle many times the volume of work thanks to AI,
| and many other lawyers will lose their jobs.
| petesergeant wrote:
| Yes, that is obvious. The point you are replying to is that
| oversupply will mean the cost to the consumer will fall
| dramatically too, rather than the AI owner capturing all of
| the previous value.
| riku_iki wrote:
| It depends. If there are one or a few winners in the market,
| they will dictate prices once human labor has been
| out-competed on price or quality.
| jezzabeel wrote:
| If prices are determined by scarcity then the cost of
| services will more likely be tied to the price of
| energy.
| sgt101 wrote:
| I know one !
|
| She's got international experience and connections but moved
| to a small town. She was a magic circle partner years ago.
| Now she has a FTTP connection and has picked up a bunch of
| contracts that she can deliver on with AI. She underbid some
| big firms on these because their business model was
| traditional rates, and hers is her cost * x (she didn't say
| but >1.0 I think)
|
| Basically she uses AI for document processing (discovery) and
| drafting. Then treats it as the output of associates and puts
| the polish on herself. She does the client meetings too
| obviously.
|
| I don't think her model will last long - my guess is that
| there will be a transformation in the next 5 years across the
| big firms and then she will be out of luck (maybe not at the
| margin though). She won't care - she'll be on the beach
| before then.
| 6510 wrote:
| This is how it has always been. Automation reduces the
| traditionally required knowledge, makes the tasks less
| complicated, and increases productivity. This introduces new
| complexity that machines can't solve.
|
| The funny part is that people think we will run out of things
| to do. Most people never hire a lawyer because lawyers are
| much too expensive.
| cgcrob wrote:
| They also forget the economic model where you have to pay $5000
| for a real lawyer after the fact to undo the mess you got
| yourself into by trusting the output of the AI in the first
| place - the AI having made a nuanced mistake that the defending
| "meat" lawyer picked up in 30 seconds flat.
|
| The proponents of AI systems seem to mostly misunderstand what
| you're paying for really. It's not writing letters.
| jjmarr wrote:
| https://www.stimmel-
| law.com/en/articles/story-4-preprinted-f...
|
| Love this story so much I just posted it. Although it's from
| an era in which you'd buy CDs and books containing contracts,
| it's still relevant with "AI".
|
| > "No lawyer writes a clause who is not prepared to go to
| court and defend it. No lawyer writes words and lets others
| do the fighting for what they mean and how they must be
| interpreted. We find that forces the attorneys to be very,
| very, very careful in verbiage and drafting. It makes them
| very serious and very good. You cook it, you eat it. You
| draft it, you defend it."
| bberenberg wrote:
| This is not true in my experience. We had our generic
| contract attorney screw up and then our litigation attorney
| scolded me for accepting, and him for providing, advice
| on litigation matters where he wasn't an expert.
|
| Lawyers are humans. They make the same mistakes as other
| humans. Quality of work varies with skill, education, and
| whether or not they had a coffee that day.
| quotemstr wrote:
| > Not worth reading.
|
| I would appreciate a version of this paper that _is_ worth
| reading, FWIW. The paper asks an important question: shame it
| doesn't answer it.
| standfest wrote:
| i am currently working on a paper in this field, focusing on
| the capitalisation of expertise (analogous to marx) in the
| dynamics of cultural industry (adorno, horkheimer). it
| integrates the theories of piketty and luhmann. it is rather
| theoretical, with a focus on the european theories (instead
| of adorno you could theoretically also reference chomsky). is
| this something you would be interested in? i can share the
| link of course
| thrance wrote:
| Be careful, barely mentioning Marx, Chomsky or Piketty is
| a thoughtcrime in the new US. Many will shut themselves
| down to not have to engage with what you are saying.
| itsafarqueue wrote:
| Yes please
| addicted wrote:
| Your criticism is completely pointless.
|
| I'm not sure what your expectation is, but even your claim
| about the assumption the paper makes is incorrect.
|
| For one thing, the paper assumes that the amount that will be
| transferred from the human lawyer to the AI lawyer would be
| $500 + the productivity gains brought by AI, so more than 100%.
|
| But that is irrelevant to the actual paper. You can apply
| whatever multiplier you want as long as the assumption that
| human labor will be replaced by AI labor holds true.
|
| Because the actual nature of the future is irrelevant to the
| question the paper is answering.
|
| The question the paper is answering is what impact such
| expectations of the future would have on today's economy
| (limited to modeling the interest rate). Such a future need not
| arrive or even be possible as long as there is an expectation
| it may happen.
|
| And future papers can model different variations on those
| expectations (so, for example, some may model that 20% of labor
| in the future will still be human, etc).
|
| The important point as far as the paper is concerned is that
| the expectations of AI replacing human labor and some
| percentage of the wealth that was going to the human labor now
| accrues to the owner of the AI will lead to significant changes
| to current interest rates.
|
| This is extremely useful and valuable information to model.
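[Editor's note: the mechanism the comment above describes, where mere expectations of future AI-driven income growth move today's interest rates, can be illustrated with the standard consumption Euler relation, r = rho + gamma * g. This is a generic macro sketch with illustrative parameter values, not the paper's actual model.]

```python
# Sketch of the consumption Euler relation: r = rho + gamma * g.
# If markets come to expect faster consumption growth g (e.g. from
# anticipated transformative AI), the equilibrium real rate rises
# today, whether or not the AI ever actually arrives.
# Parameter values below are illustrative assumptions only.

def equilibrium_rate(rho: float, gamma: float, expected_growth: float) -> float:
    """Real interest rate implied by r = rho + gamma * g."""
    return rho + gamma * expected_growth

rho = 0.01    # pure rate of time preference
gamma = 2.0   # relative risk aversion (inverse elasticity of substitution)

baseline = equilibrium_rate(rho, gamma, expected_growth=0.02)
tai_hopes = equilibrium_rate(rho, gamma, expected_growth=0.10)

print(f"baseline r:              {baseline:.2%}")   # 5.00%
print(f"with TAI expectations r: {tai_hopes:.2%}")  # 21.00%
```

The point of the sketch is that only the expectation term changes; no AI output enters the equation, which is why the paper can study present-day rate effects of a future that "need not arrive or even be possible."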
| mechagodzilla wrote:
| The $500 going to the "AI Owner" instead of labor (i.e. the
| human lawyer) _is_ the productivity gain though, right? And
| if that was such a productivity gain (i.e. the marginal cost
| was basically 0 to the AI owner, instead of, say, $499 in
| electricity and hardware), the usual outcome is that the cost
| for such a product /service basically gets driven to 0, and
| the benefit from productivity actually gets distributed to
| the clients that would have paid the lawyer (who suddenly get
| much cheaper legal services), rather than the owner of the
| 'AI lawyer.'
|
| We seem pretty likely to be headed towards a future where AI-
| provided services have almost no value/pricing power, and
| just become super low margin businesses. Look at all of the
| nearly-identical 'frontier' LLMs right now, for a great
| example.
| larodi wrote:
| Indeed, there's a fair chance AI only amplifies certain
| sectors' wages, but 100% automated work will not get any magic
| margin - no more than, say, smart trading once too many
| people focus there.
| visarga wrote:
| > You can apply whatever multiplier you want as long as the
| assumption that human labor will be replaced by AI labor
| holds true.
|
| Do you think we will be doing in 5 or 10 years the same
| things we do today, but with AI? Every capability increase or
| cost reduction stimulates demand. AI is no different, it will
| stimulate both demand and competition. And since everyone has
| AI, and AIs are not much different from one another, the
| differentiating factor remains the humans. Even if we solve
| all our current problems with AI there is no reason to stop
| there, we could reduce poverty, pollution, fight global
| warming, conquer space. The application space is unbounded.
| Take electricity or internet for example and think how they
| expand the scope of work. Programming has been automating
| itself for 60 years, with each new language, library or open
| source project, and yet we have great jobs in the field.
|
| No matter how much we have, we want more. Our capacity for
| desiring progress grows faster than AI's capability to provide it.
| pizza wrote:
| This almost surely took place somewhere in the past week alone,
| just with a lawyer being the mediating human face.
| geysersam wrote:
| > zero sum nature of labor automation
|
| Labor automation is not zero sum. This statement alone makes me
| sceptical of the conclusions in the article.
|
| With sufficiently advanced AI we might not have to do any work.
| That would be fantastic and extraordinarily valuable. How we
| allocate the value produced by the automation is a _separate_
| question. Our current system would probably not be able to
| allocate the value produced by such automation efficiently.
| hartator wrote:
| Yeah, and this applies to every technology ever.
|
| You can even use the same argument line against the wheel,
| electricity, or farming.
| tim333 wrote:
| I agree that quote seems wrong. When tech reduces the cost of
| providing a service, the price of the service to consumers is
| generally driven down correspondingly by competition rather
| than the service provider getting rich.
|
| The whole AI will cause interest rates to shoot up thing seems
| a bit mad.
| pessimizer wrote:
| > The paper examines a world people will pay an AI lawyer $500
| to write a document instead of paying a human lawyer $500 to
| write a document. That will never happen.
|
| It's an absurd assumption made by AI investors everywhere. They
| can't handle a world where everyone already has an AI lawyer at
| home that they trust, one they got because they once paid
| $100 for it at a kiosk in the mall or pirated it. The real
| future is an AI lawyer on your keychain and an extreme
| devaluation of the skill of knowing the law and making legal
| arguments.
|
| Instead, we're going to have a weirder world where you show up
| to court and the court already has a list of your best legal
| arguments that they generated completely independent of you,
| and they largely match the list of arguments that your own AI
| advisor app gave you. They'll send you messages regarding your
| best next steps, and if your own device agrees, all you'll have
| to do is reply 'Y.'
|
| For simple document preparation, I'm pretty sure that your
| phone will be able to handle it, and AI at the point of
| submission would be able to give you helpful suggestions if the
| documents were inadequate.
|
| LLMs can almost do things of this degree of difficulty
| reasonably well _now._ Where will they be (or their successors
| be) in 10 years? Why do we think they will be as expensive as
| lawyers, who you have to send to difficult schools for a long
| time, feed, and flatter?
| farts_mckensy wrote:
| this paper asserts that when "TAI" arrives, human labor is simply
| replaced by AI labor while keeping aggregate labor constant. it
| treats human labor as a mere input that can be swapped out
| without consequence, which ignores the fact that human labor is
| the source of wages and, therefore, consumer demand. remove human
| labor from the equation, and the whole thing collapses.
| jsemrau wrote:
| Accelerationists believe in a post-scarcity society where the
| cost of production will be negligible. In that scenario, and I
| am not a believer, consumer demand would be independent of
| wages.
| riffraff wrote:
| That makes wealth accumulation pointless so the whole article
| makes no sense either, right?
|
| Tho I guess even post scarcity we'd have people who care
| about hoarding gold-pressed latinum.
| otabdeveloper4 wrote:
| > consumer demand would be independent of wages
|
| That's the literal actual textbook definition of "communism".
|
| Lmao that I actually lived to see the day when techbros
| seriously discuss this.
| bawolff wrote:
| > Lmao that I actually lived to see the day when techbros
| seriously discuss this.
|
| People have been making comparisons between post scarcity
| economics and "utopia communism" for decades at this point.
| This talking point probably predates your birth.
| doubleyou wrote:
| communism is a universally accepted ideal
| farts_mckensy wrote:
| That is not the "textbook definition" of communism. You
| have no idea what you're talking about.
| farts_mckensy wrote:
| In that scenario, wages and money in general would be
| obsolete.
| riku_iki wrote:
| consumer demand will shift from middle-class demand (medium
| houses, family cars) to super-rich demand (large luxury
| castles, personal jets and yachts, high-profile entertainment,
| etc), plus providing security to the super-rich (private
| automated police forces).
| psadri wrote:
| This has already been happening. The gap between wealthy and
| poor is increasing and the middle class is squeezed.
| Interestingly, simultaneously, the level of the poor has been
| rising from extreme poverty to something better so we can
| claim that the world is relatively better off even though it
| is also getting more unequal.
| riku_iki wrote:
| the poor got a more comfortable life because of globalization:
| they became useful labor for corps. Things will go back to the
| previous state if their jobs go to AI/robots.
| farts_mckensy wrote:
| I am genuinely mystified that you think this is an adequate
| response to my basic point. The economy cannot be sustained
| this way. This scenario would almost immediately lead to a
| collapse.
| riku_iki wrote:
| why do you think it will lead to collapse exactly?
| farts_mckensy wrote:
| The level of wealth concentration you are suggesting is
| impossible to sustain. History shows that when wealth
| inequality gets to a certain point, it leads either to a
| revolution or a total collapse of that society.
|
| The economy cannot be sustained on the demand of a small
| handful of wealthy people. At a certain point, you either
| get a depression or hyperinflation depending on how the
| powers that be react to the crisis. In either case, the
| wealthy will have no leverage to incentivize people to do
| their bidding.
|
| If your argument is, they'll just get AI to do their
| bidding, you have to keep in mind that "there is no
| moat." Outside of the ideological sphere, there is
| nothing that essentially ties the wealthy to the data
| centers and resources required to run these machines.
| riku_iki wrote:
| History absolutely shows that multiple empires where
| power/wealth was concentrated in the hands of a few people
| were sustained for hundreds of years.
|
| Revolts can be successful or unsuccessful, and with tech
| advancements in suppression (large-scale surveillance,
| weaponry, various strike drones) the population's chances of
| striking back become smaller.
|
| An economy could totally be built around the demands and
| wishes of the super-rich, because human greed and desires are
| infinite. A new emperor may decide to build a giant temple,
| and there you have a multi-trillion economy to keep running.
| smeeger wrote:
| so-called accelerationists have this fuzzy idea that everything
| will be so cheap that people will be able to just pluck their
| food from the tree of AI. they believe that all disease will be
| eliminated. but they go to great lengths to ignore the truth.
| the truth is that having total control over the human body will
| turn human evolution into a race to the bottom that plays out
| over decades rather than millennia. there is something sacred
| about the ultimate regulation: the empathy and kindness that
| was baked into us during millions of years of living as tribal
| creatures. and of course, the idea of AI being a tree from
| which we can simply pluck what we need... is stupid. the tree
| will use resources, every ounce of its resources, to further
| its own interests. not feed us. and we will have no way of
| forcing it to do otherwise. so, in the run-up to ASI, we will
| be exposed to a level of technology and biological agency that
| we are not ready for, we will foolishly strip ourselves of our
| genetic heritage in order to propel human-kind in a race to the
| bottom, the power vacuum caused by such a sudden change in
| society/technology will almost certainly cause a global war,
| and when the dust settles we will be at the total mercy of
| super-intelligent machines to whom we are so insignificant we
| probably won't even be included in their internal models of the
| world.
| farts_mckensy wrote:
| You are projecting your own neurosis onto AI. You assume that
| because you would be selfish if you were a superintelligent
| being, an ASI system would act the same way.
| smeeger wrote:
| it is a neurosis because a healthy human being will see the
| world in a pro-social way. a normal way. but this sometimes
| obscures the truth. the truth is that there will be many
| benevolent AIs... there will be every kind of AI
| imaginable. but very quickly the AIs that are cunning,
| brutal and self-interested will capture all the resources
| and power and become the image of this new species...
| saying that AIs will be benevolent or neutral is as naive
| as saying that the cambrian explosion couldn't result in
| animals eating each other because... that just sounds so
| neurotic. in reality it is an inevitability
| zurfer wrote:
| Given that the paper disappoints, I'd love to hear what fellow HN
| readers do to prepare?
|
| My prep is:
|
| 1) building a company (https://getdot.ai) that I think will add
| significant marginal benefits over using products from AI labs /
| TAI, ASI.
|
| 2) investing in the chip manufacturing supply chain: from ASML,
| NVDA, TSMC, ... and the S&P 500.
|
| 3) Staying fit and healthy, so physical labour stays possible.
| bob1029 wrote:
| I'd say #3 is most important. I'd also add:
|
| 4) Develop an obsession for the customers & their experiences
| around your products.
|
| I find it quite rare to see developers interacting directly
| with the customer. Stepping outside the comfort zone of backend
| code can grow you in ways the AI will not soon overtake.
|
| #3 can make working with the customer a lot easier too. Whether
| or not we like it, there are certain realities that exist
| around sales/marketing and how we physically present ourselves.
| petesergeant wrote:
| 4) trying to position myself as an expert in building these
| systems
| ghfhghg wrote:
| 2 has worked pretty well for me so far.
|
| I try to do 3 as much as possible.
|
| My current work explicitly forbids me from doing 1. Currently
| just figuring out the timing to leave.
| sfn42 wrote:
| Nothing. I don't think there's anything I need to prepare for.
| AI can't do my job and I doubt it will any time soon.
| Developers who think AI will replace them must be miserable at
| their job lol.
|
| At best AI will be a tool I use while developing software. For
| now I don't even think it's very good at that.
| zurfer wrote:
| It's not certain that we get TAI or ASI, but if we get it, it
| will be better at software development than us.
|
| The question is which probability do you assign to getting
| TAI over time? From your comment it seems you say 0 percent
| in your career.
|
| For me it's between 20 and 80 percent in the next ten years
| (depending on the day :)
| sfn42 wrote:
| I don't have any knowledge that allows me to make any kind
| of prediction about the likelihood of that technology being
| invented. I'm not convinced anyone else does either. So I'm
| just going to go about my life as usual, if something
| changes at some point I'll deal with it then. Don't see any
| reason to worry about science fiction-esque scenarios.
| smeeger wrote:
| the reason to worry is that humanity could halt AI if it
| wanted to. if there were a huge asteroid on a collision
| course with earth... there would be literally nothing we
| could do to stop it. there would be no configuration of
| our resources, no matter how united we were in the
| effort, that could save us. with AI, halting progress is
| very plausible. it would be easy to do actually. so the
| reason to worry (think) is because it might be worth it
| to halt. imagine letting jesus take the wheel, that's how
| stupid ___ are.
| sfn42 wrote:
| How exactly do you envision that these hypothetical
| computer programs could bring about the apocalypse?
| smeeger wrote:
| if you are really so curious then lets have a live,
| public x space about it
| sureIy wrote:
| > AI can't do my job
|
| Famous last words.
|
| Current technology can't do your job, future tech most
| certainly will be able to. The question is just whether such
| tech will come in your lifetime.
|
| I thought the creative field was the last thing humans could
| do but that was the first one to fall. Pixels and words are
| the cheapest item right now.
| sfn42 wrote:
| Sure man, I'll believe you when I see it.
|
| I'm not aware of any big changes in writer/artist
| employment either.
| sureIy wrote:
| Don't be so naive. History is not on your side. Every
| person who said that 100 years ago has been replaced.
| Except prostitutes maybe.
|
| The only argument you can have is to be cheaper than the
| machine, and at some point you won't be.
| sfn42 wrote:
| That's complete bullshit. Lots of people still work in
| factories - there's fewer people because of automation
| but there's still lots of people. Lots of people still
| work in farming. Less manual labor means we can produce
| more with the same amount of people or fewer, that's a
| good thing. But you still need people in pretty much
| everything.
|
| Things change and people adapt. Maybe my job won't be the
| same in 20 years, maybe it will. But I'm pretty sure I'll
| still have a job.
|
| If you want to make big decisions now based on vague
| predictions about the future go ahead. I don't care what
| you do. I'm going to do what works now, and if things
| change I'll make whatever decisions I need to make once I
| have the information I need to make them.
|
| You call me naive, I'd say the same about you. You're out
| here preaching and calling people naive based on what you
| think the future might look like. Probably because some
| influencer or whatever got to you. I'm making good money
| doing what I do right now, and I know for a fact that
| will continue for years to come. I see no reason to
| change anything right now.
| smeeger wrote:
| a foolish assumption but i have my fingers crossed for you
| and stuck firmly up my own butt... just in case that will
| increase the lucky effect of it
| sfn42 wrote:
| Yeah I'm clearly the fool here..
| rybosworld wrote:
| Imagine two software engineers.
|
| One believes the following:
|
| > AI can't do my job and I doubt it will any time soon
|
| The other believes the opposite; that AI is improving rapidly
| enough that their job is in danger "soon".
|
| From a game theory stance, is there any advantage to holding
| the first belief over the second?
| energy123 wrote:
| > 2) investing in the chip manufacturing
|
| The only thing I see as obvious is AI is going to generate
| tremendous wealth. But it's not clear who's going to capture
| that wealth. Broad categories:
|
| (1) chip companies (NVDA etc)
|
| (2) model creators (OpenAI etc)
|
| (3) application layer (YC and Andrew Ng's investments)
|
| (4) end users (main street, eg ChatGPT subscribers)
|
| (5) rentiers (land and resource ownership)
|
| The first two are driving the revolution, but competition _may_
| not allow them to make profits.
|
| The third might be eaten by the second.
|
| The fourth might be eaten by second, but it could also turn out
| that competition amongst the second, and the fourth's access to
| consumers and supply chains means that they net benefit.
|
| The fifth seems to have the least volatile upside. As the cost
| of goods and services goes to $0 due to automation, scarce
| goods will inflate.
| impossiblefork wrote:
| To me it's pretty obvious that the answer is (5).
|
| It substitutes for human labour. This will reduce labour's
| price and substantially increase the benefits of land and
| resource ownership.
| smeeger wrote:
| i think if AI gains the ability to reason, introspect and self-
| improve (AGI) then the situation will become very serious very
| quickly. AGI will be a very new and powerful technology and AGI
| will immediately create/unlock lots of other new technologies
| that change the world in very fundamental ways. what people
| don't appreciate is that this will completely invalidate the
| current military/economic/geopolitical equilibrium. it will
| create a very deep, multidimensional power vacuum. the most
| likely result will be a global war waged by AGI-led and
| augmented militaries. and this war will be fought in the
| context of human labor having, for the first time in history,
| zero strategic, political or economic value. so, new and
| terrifying possibilities will be on the table such as the total
| collateral destruction of the atmosphere or supply chains that
| humans depend on to stay alive. the failure of all kinds of
| human-centric infrastructure is basically a foregone conclusion
| regardless of what you think. so my prep is simply to have a
| "bunker" with lots of food and equipment with the goal of
| isolating myself as much as possible from societal/supply chain
| instability. this is good because it's good to be prepared for
| this kind of thing even without the prospect of AGI looming
| overhead because supply chains are very fragile things. and in
| the case of AGI, it would allow you to die in a relatively
| comfortable and controlled manner compared to the people who
| burn to death.
| habinero wrote:
| This paper is silly.
|
| It asks the equivalent of "what if magic were true" (human-level
| AI) and answers with "the magic economy would be different." No
| kidding.
|
| FWIW, the author is listed as a fellow of "The Forethought
| Foundation" [0], which is part of the Effective Altruism
| crowd[1], who have some cultish doomerism views around AI [2][3]
|
| There's a reason this stuff goes up on a non-peer-reviewed paper
| mill.
|
| --
|
| [0] https://www.forethought.org/the-2022-cohort
|
| [1] https://www.forethought.org/about-us
|
| [2] https://reason.com/2024/07/05/the-authoritarian-side-of-
| effe...
|
| [3] https://www.techdirt.com/2024/04/29/effective-altruisms-
| bait...
| krona wrote:
| The entire philosophy of existential risk is based on a
| collection of absurd hypotheticals. Follow the money.
| 0xDEAFBEAD wrote:
| >It asks the equivalent of "what if magic were true" (human-
| level AI) and answers with "the magic economy would be
| different." No kidding.
|
| Isn't developing AGI basically the mission of OpenAI et al?
| What's so bad about considering what will happen if they
| achieve their mission?
|
| >who have some cultish doomerism views around AI [2][3]
|
| Check the signatories on this statement:
| https://www.safe.ai/work/statement-on-ai-risk
| abtinf wrote:
| Whoever endorsed this author to post on arxiv should have their
| endorsement privileges revoked.
| baobabKoodaa wrote:
| I suspect this is being manipulated to be #1 on HN. Looking at
| the paper, and looking at the comments, there's no way it's #1 by
| organic votes.
| mmooss wrote:
| > looking at the comments
|
| Almost everything on HN gets those comments. Look at the top
| comments of almost any discussion - they will be a rejection /
| dismissal of the OP.
| baobabKoodaa wrote:
| No they're not. As a quick experiment I took the current top
| 3 stories on HN and looked at the top comment on each:
|
| - one is expanding on the topic without expressing
| disagreement
|
| - one is a eulogy
|
| - one expresses both agreement on some points and
| disagreement on other points
| ggm wrote:
| Lawyers are like chartered engineers. It's not that you cannot do
| it for yourself, it's that using them confers certain instances
| of "insurance" against risk in the outcome.
|
| Where does an AI get chartered status, admitted to the bar, and
| insurance cover?
| mmooss wrote:
| I don't think anyone who isn't an experienced lawyer can do it
| themselves, except for very simple tasks.
| ggm wrote:
| "Do it for yourself" means self-representation in court, not
| paying a lawyer; it doesn't mean lawyers doing AI for
| themselves. They already use AI for various non-stupid things,
| but the ones who don't check it pay the price when
| hallucinations are outed by the other side.
| tyre wrote:
| Lawyers are the last people who would represent themselves.
| They know how dumb that is.
| smeeger wrote:
| it could be tomorrow. you don't know, and the heuristics, which
| five years ago pointed unanimously to the utter impossibility
| of this idea, are now in favor of it.
| daft_pink wrote:
| Is a small group really going to control AI systems, or will
| competition bring the price down so much that everyone benefits
| and the unit cost of labor is driven ever lower?
| kfarr wrote:
| At home inference is possible now and getting better every day
| sureIy wrote:
| At home inference _by professionals._
|
| I don't expect dad to Do Your Own AI anytime soon; he'll still
| pay someone to set it up and run it.
| pineaux wrote:
| I see a few possible scenarios.
|
| 1) All work gets done by AI. Owners of AI reap the benefits for
| a while. There is a race to the bottom on costs, but also
| because people are not earning wages and can no longer really
| afford the outputs of production, rendering profits close to
| zero. If the people controlling the systems do not give the
| people "on the bottom" some kind of allowance, those people
| will have no chance of an income. They might demand horrible
| and sadistic things from the bottom people, but they will need
| to do something.
|
| 2) If people get pushed into these situations they will riot or
| start civil wars. "Butlerian jihads" will become quite normal.
|
| 3) Another scenario is that the society controlled by the rich
| will start to criminalise non-work in the early stages, which
| will lead to a new slave class. I find this scenario highly
| likely.
|
| 4) One of the options I find very likely, if "useless" people
| do NOT get "culled" en masse, is an initial period of revolt
| followed by an AI-controlled communist "utopia", where people
| do not need to work but "own" the means of production (the AI
| workers). Work becomes LARPing, done by people who act like
| workers but don't really do anything (as some people do today).
| A lot of people won't do this; there will still be people who
| see non-workers as leeching off the workers, because workers
| are "rewarded" by in-game mechanics (having a "better job").
| Parallel societies will become normal, just like now. Rich
| people will give themselves "better jobs"; some people won't
| play the game, and there will be no real consequences beyond
| not being allowed to play.
|
| 5) An amalgamation of the scenarios above, but here everybody
| will be forced to LARP along with the asset-owning class. They
| will give people "jobs", but these jobs are bullshit, just like
| many jobs right now. Jobs are just a way of creating different
| social classes. There is no meritocracy, just rituals: some
| people get to perform certain rituals that give them more
| social status and wealth, based on oligarch whims. Once in a
| while a revolt, but mostly not needed.
|
| Many other scenarios exist of course.
| itsafarqueue wrote:
| Have you written a form of this up somewhere? I would very
| much enjoy reading more of your work. Do you have a blog?
| Der_Einzige wrote:
| Or, don't... we need fewer Mark Fishers and less critical
| thinking in the world, and more constructive thinking.
|
| It helps no one to explain to them just how hard the boot
| stomps on their face. Left-wing postmodernist intellectuals
| have been doing this since the '60s, and all it did was prevent
| any left-winger from doing anything "revolutionary".
|
| Don't waste your time reading "theory". Look at what happened
| to Mark Fisher.
| qingcharles wrote:
| What jobs do we think will survive if AGI is achieved?
|
| I was thinking religious leaders might get a good run. Outside of
| say, Futurama, I'm not sure many people will want faith-
| leadership from a robot?
| BarryMilo wrote:
| Why would we need jobs at that point?
| IsTom wrote:
| Because the kind of people who'll own all the profits aren't
| going to share.
| jajko wrote:
| I don't think AI will lead to any form of working communism, so
| one will still have to pay for products and services. It has
| been tried ad nauseam, and it always fails to account for human
| differences and flaws like greed and envy, so one layer of
| society ends up brutally dominating the rest.
| qingcharles wrote:
| Star Trek says we won't, but even if some utopia is achieved
| there will be a painful middle-time where there are jobs that
| haven't been replaced, but 75% of the workforce is unemployed
| and not receiving UBI. (the "parasite class" as Musk recently
| referred to them)
| smeeger wrote:
| important point here. regardless of what happens, the
| transition period will be extremely ugly. it will almost
| certainly involve war.
| itsafarqueue wrote:
| Hopefully only massive civil unrest, riots, city burnings
| etc. But to save themselves the demagoguery may point
| across the seas at the Other as the source of the woe.
| otabdeveloper4 wrote:
| We already have 9 billion "GI"'s without the "A". What makes
| you think adding a billion more to the already oversupplied
| pool will be a drastic change?
| _diyar wrote:
| Marginal cost of labour is what will matter.
| otabdeveloper4 wrote:
| That "AGI" is supposed to be a cheaper form of labor is an
| assumption based on nothing at all.
| itsafarqueue wrote:
| A(Narrow)I is a cheaper form of labor already. I suppose
| it's plausible that its General form may not be, but I
| won't be betting in that direction.
| bawolff wrote:
| On the contrary, i think AI could replace many religious
| leaders right now.
|
| I've already heard people comparing AI hallucinations to
| oracles (in the greek sense)
| smeeger wrote:
| this comment is a perfect example of how insane this situation
| is... because if you think about it deeply then you are able to
| understand that these machines will be more spiritual, more
| human than human beings. people will prefer to confide in
| machines. they will offer a kind of emotional and spiritual
| companionship that has never existed before outside of fleeting
| religious experiences and people will not be able to live
| without it once they taste it. for a moment in time, machines
| will be capable of deep selflessness and objectivity that is
| impossible for a human to have. and their intentions and
| incentives will be more clear to their human companions than
| those of other humans. some of these machines will inspire us
| to be better people. but that's only for a moment... before the
| singularity inevitably spirals out of control.
| bad_haircut72 wrote:
| I think futurama got AGI exactly right, we will end up living
| along side robotic AIs that are just as coocoo as us
| etiam wrote:
| To the extent that's just a matter of seeming the most
| compelling, I think they could blow humans out of the water.
| Add rich reinforcement feedback on what's the most addictive
| communication and what's superficially experienced as the most
| profound, and present-day large models could probably be a
| contender. A _good_ robot body today is probably not far from
| being competitive as representation, and some holograms might
| well already be better in some ways.
|
| To the extent it requires actual faith it's presently a
| complete joke, of course, and I expect it will remain so for a
| long time. But I'd say the quality bar for congregation members
| is due for a rise.
| bawolff wrote:
| If the singularity happens, i feel like interest rates will be
| the least of our concerns.
| impossiblefork wrote:
| It's actually very important.
|
| If this kind of thing happens and interest rates are 0.5%, then
| people on UBI could potentially have access to land and not
| have horrible lives; if rates are 16%, as these guys propose,
| they will be living in 1980s-Tokyo cyberpunk boxes.
| aquarin wrote:
| There is one thing that AI can't do. Because you can't punish the
| AI instance, AI cannot take responsibility.
| smeeger wrote:
| this boils down to the definition of pain. what is pain? i
| doubt you know, even if you have experienced it. there's no
| reason to think that even LLMs are not guided by something that
| resembles pain.
| wcoenen wrote:
| If I understand correctly, this paper is arguing that investors
| will desperately allocate all their capital such that they
| maximize ownership of future AI systems. The market value of
| anything else crashes because it comes with the opportunity cost
| of owning less future AI. Interest rates explode, pre-existing
| bonds become worthless, and AI stocks go to the moon.
|
| It's an interesting idea. But if the economy grinds to a halt
| because of that kind of investor behavior, it seems unlikely
| governments will just do nothing. E.g. what if they heavily tax
| ownership of AI-related assets?
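The bond mechanics in this summary are just discounting: a pre-existing bond's fixed coupons are worth less once the prevailing rate jumps. A minimal sketch, with illustrative numbers (a $1,000 bond, 5% coupon, 10-year maturity) that are assumptions for the example, not figures from the paper:

```python
# Price a fixed-coupon bond by discounting its cash flows at the
# prevailing market rate. All numbers are illustrative assumptions.

def bond_price(face: float, coupon_rate: float,
               market_rate: float, years: int) -> float:
    """Present value of annual coupons plus face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A $1,000 bond with a 5% coupon, priced at a 5% vs a 16% market rate:
print(round(bond_price(1000, 0.05, 0.05, 10), 2))  # trades at par (~1000)
print(round(bond_price(1000, 0.05, 0.16, 10), 2))  # falls well below par
```

At a 16% discount rate the same bond is worth less than half its face value, which is the "pre-existing bonds become worthless" mechanism in miniature.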
| itsafarqueue wrote:
| Correct. As a thought experiment, this becomes the most likely
| (non violent) way to stave off the mass impoverishment that is
| coming for the rest of us in an economic model that sees AI
| subsume productive work above some level.
| throwawayqqq11 wrote:
| Well, I really don't want to be the dystopian guy anymore, but
| doesn't this political correction require political
| representation of such an idea? Looking at the past, cybernetic
| socialism appears very unlikely to me.
| DennisP wrote:
| It seems more general than that. Right now returns go partly to
| capital, partly to labor. With "transformative AI" the returns
| go almost entirely to capital. This is true whether it's mostly
| from labor shrinking or total output increasing.
|
| Since most returns go to capital, we can expect returns on
| capital to increase.
| visarga wrote:
| This paper's got it backwards. AI's benefits don't pile up with
| the owners, they flow to whoever's got a problem to solve and
| knows how to point the AI at it. Think of AI like a library:
| owning the books doesn't make you benefit much, applying
| knowledge to problems does. The big winners are the ones setting
| the prompts, not the ones owning the servers. AI developers?
| They're making cents per million tokens while users, solo or
| corporate, cash in on the real value: application.
|
| Sure, the rich might hire some more people to aim the AI for
| them, but who's got a monopoly on problems? Nobody. Every
| freelancer, farmer, or startup's got their own problems to fix,
| and cheap AI access means they can. The paper's obsessed with
| wealth grabbing all the future benefits, but problems are
| everywhere, good luck cornering that market. Every one of us has
| their own problems and stands to get personalized benefits from
| AI.
|
| In the age of AI, having problems is linked to receiving its
| benefits. Imagine, for example, that I feel one side of my face
| drooping and have speech difficulty, and I type my symptoms
| into an LLM, and it tells me to visit the doctor quickly. It
| might save my life from a stroke. Who gets the largest benefit
| here?
|
| Problems are distributed even if AI is not.
| tyre wrote:
| > The big winners are the ones setting the prompts, not the
| ones owning the servers. AI developers? They're making cents
| per million tokens while users, solo or corporate, cash in on
| the real value: application
|
| If this were true, AWS wouldn't have pulled in well over $100bn
| in 2024. Nvidia wouldn't be worth $3.3tn.
|
| The owners and builders of infra make a ton of money.
| visarga wrote:
| AWS makes a fraction of the money their customers make. And
| NVIDIA is just seeing benefits from market speculation at
| work. Most LLM providers are losing money right now.
___________________________________________________________________
(page generated 2025-02-22 23:02 UTC)