[HN Gopher] The future of everything is lies, I guess: Work
___________________________________________________________________
The future of everything is lies, I guess: Work
Author : aphyr
Score : 233 points
Date : 2026-04-14 15:00 UTC (8 hours ago)
(HTM) web link (aphyr.com)
(TXT) w3m dump (aphyr.com)
| hoppp wrote:
| Unavailable Due to the UK Online Safety Act
| ura_yukimitsu wrote:
| Archived at https://archive.is/DY9F3
| basilikum wrote:
| https://web.archive.org/web/20260414151754/https://aphyr.com...
| bbg2401 wrote:
 | The author appears to be under the misapprehension that a
 | personal blog with a comment section is impacted by the act.
| Devasta wrote:
| Why wouldn't it be?
| monooso wrote:
| For the reasons given in my comment, above [1].
|
| [1]: https://news.ycombinator.com/item?id=47767650
| MarkusQ wrote:
| Misapprehension? If so, they aren't the only one.
|
 | https://www.theregister.com/2025/02/06/uk_online_safety_act_...
| monooso wrote:
| Yes, misapprehension.
|
| According to the Ofcom regulation checker [1] (linked to by
| The Register article), the Online Safety Act does not apply
| to this content.
|
| Here's the most pertinent section (emphasis mine):
|
| > Your online service will be exempt if... Users can only
| interact with content generated by your business/the
| provider of the online service. _Such interactions include:
 | comments, likes/dislikes, ratings/reviews of your content
| including using emojis or symbols. For example, this
| exemption would cover online services where the only
| content users can upload or share is comments on media
| articles you have published_...
|
| [1]: https://ofcomlive.my.salesforce-
| sites.com/formentry/Regulati...
| TimTheTinker wrote:
| Perhaps the author is being outwardly cautious but
| knowingly borderline-obtuse as a form of protest against
| a dumb law.
| krona wrote:
| > Your online service will be exempt if... Users can only
| interact with content generated by your business
|
| As soon as your blog allows comments which other people
| can read, then you're allowing people to interact with
| content not generated by your business.
| john_strinlai wrote:
| is this legal advice you are offering, as someone
| practicing law in the uk? because you are all over this
| thread stating your opinion _very confidently_.
|
| (conveniently, there is no risk to yourself if you happen
| to be wrong or misinformed.)
| monooso wrote:
| No, I'm not offering legal advice, and neither am I
| stating an opinion. I'm simply quoting Ofcom, the
| regulatory body responsible for overseeing this law.
| john_strinlai wrote:
 | > _I'm simply quoting Ofcom_
|
| no, you are doing more than that.
|
| you are saying that everyone who has a different
| interpretation of the parts you are quoting is
| misinformed.
|
| that is an opinion, which you are stating as fact, as
| someone unaffected by the outcome.
| monooso wrote:
| A valid point, and maybe I should have phrased it
| differently. I've deleted the comment which used the word
| "misinformed", so as not to cause any confusion.
|
| My point is simply that the Ofcom quote clearly states
| that user comments on an article are not subject to the
| Online Safety Act. I assume this is a fact, as it's from
| the horse's mouth.
|
| Some people appear to be basing their opinions on the
| assumption that the OSA _does_ apply to such comments
| (hence my use of the offending word).
| pixl97 wrote:
| >Please note: The outcome of this checker is indicative
| only and does not constitute legal advice. It is for you
| to assess your services and/or seek independent
| specialist advice to determine whether your service (or
| the relevant parts of it) are subject to the regulations
| and understand how to comply with the relevant duties
| under the Act.
|
| I mean even the site itself says it really shouldn't be
| used for legal advice...
|
| On top of that, none of this matters until said law is
| settled under a case. Most often it's the first judge and
| the set of appeals after that point that define how the
| law is actually implemented. Everything before that is
| bluster and potential risk.
| mock-possum wrote:
 | Wow, the typography is obnoxious on mobile; some lines only have 3
 | words due to the text justification.
| greatpost wrote:
| Thank you for this aphyr.
|
 | My one ask: people seem to put "CEOs" on a pedestal any time
 | things come up, like they're an alien life form and oh no, they're
| going to do something terrible. There are good company executives
| and shitty ones. You should try to start a company and see if you
| can be one of the better ones.
| atomicnumber3 wrote:
| Ah yes just go start a company. Let me just ask my father for a
| small business loan of a million dollars.
| Aurornis wrote:
| Class warfare generalizations have become the safe outlet for
 | internet rage because going after CEOs and billionaires is the
 | most "punching up" construction that is generally relatable.
|
| An unintended side effect that I've noticed is that it
 | normalizes bad behavior of CEOs for those who ingest a lot of
| "CEOs bad" grist (Reddit, Threads, even Hacker News). When
| someone, usually early career, takes a job with a bad CEO after
| years of reading "CEOs bad" content online, they can go into a
| learned helplessness mode because they think the behavior
| they're seeing is normal. They don't believe changing jobs
| would help because they've learned from social media to believe
| that their CEO's bad behavior is actually normal.
|
 | This has become a frequent topic in a rotational
| mentorship program where I volunteer: Early career folk join
| some toxic startup and stay because the internet told them all
| CEOs are like this. We have to shake them free from those ideas
| and get them to realize that there are good and bad companies
| out there and they have options.
| coldtea wrote:
| > _Class warfare generalizations have become the safe outlet
| for internet rage because going after CEOs and billionaires
| is most "punching up" construction that is generally
| relatable._
|
| Mainly because "CEOs and billionaires" have fucked us over
 | time and again, with their lobbying and bribing,
| with their power grabs, with their consolidation of news,
| entertainment, streaming, and social media properties, with
 | their participation in the military industrial complex, with
| their censorship and partisanship, and with their rent
| seeking and worsening of their products...
| forgetfreeman wrote:
 | The downvotes in the absence of any reply suggest there's a
| group of individuals who think your position is so correct
| it's functionally unassailable but are offended you said it
| out loud.
| headcanon wrote:
| > Early career folk join some toxic startup and stay because
| the internet told them all CEOs are like this.
|
 | I literally did this 12 years ago based on this reasoning;
 | it's good you're trying to counter that with the next
| generation.
|
| With that said, I do wish there was more discourse around
| systemic issues rather than the usual finger-pointing towards
| rival social groups. Unfortunately I feel like our language
 | gets in the way: systems issues are more abstract, but "bad
| people" are more visceral and easy to talk about.
| dlev_pika wrote:
| "No war but class war" rings as true in 2026 as it did 40
| years ago
| neutronicus wrote:
| Sure, although the obsession with "CEOs and billionaires"
| does have the ring of the 300k HHI software-engineer class
| hoping to play class enemies above and below them against
| each other.
| gilfaethwy wrote:
| Software engineers are in the same class as the people
| below them - the working class. The entire concept of
| "middle class" originates from a time when the middle
| class were non-nobility who were, nonetheless,
| sufficiently powerful that they needn't worry about
| things like "keeping their jobs", whether because they
 | were their own employers (as were nearly all doctors,
| lawyers, etc.) or because they had sufficient social
| capital not to worry about such trivial things as paid
| labor.
|
| I want to be clear here: Eton boys were (and are)
| predominantly middle class, _not_ upper class. In the US,
| we allowed the idea to be perverted, perhaps because we
| do not _have_ nobility, and so there is no true "upper
| class". Given this, the reality is that we are bifurcated
| into a working class and an owning or capitalist class -
| though, many would argue (correctly, in my view) that we
| are in a feudal regime now, rather than a capitalist
| regime.
|
| To put perhaps too fine a point on it, software engineers
| are house slaves, and, yes, CEOs and billionaires have
| done a good job of convincing the field slaves that the
| house slaves are their enemies, and of convincing house
| slaves that the field slaves are inferior and just want
| to take what the house slaves have without working for
| it.
| pixl97 wrote:
| >normalizes bad behavior of CEOs
|
| >They don't believe changing jobs
|
 | Um, yea, where did you get these ideas?
|
| Most CEOs want to be CEOs for the potentially vast amounts of
| wealth they can make from the position. When you're making
 | 20-200x the average person, going back to a regular job is
| pretty much out of the question.
|
| Then when you start making that kind of money you quickly
| become disconnected from the rest of humanity. [Insert meme:
| "How much does a banana cost? Like $10 dollars?]
|
 | Vast wealth disparity commonly causes the issues that you say
 | are being normalized by people online, so I think you'd need
 | quite a bit more evidence that your explanation is the case
 | rather than the already existing hypothesis.
| miyoji wrote:
| I think it's true that there are more bad CEOs than good
| CEOs. I've seen good CEOs turn into bad CEOs, but I've never
| seen a bad CEO turn into a good CEO. I assume it does happen,
| but there's a strong cultural pressure (and many hundreds of
| millions of dollars) pushing bad CEO behavior and very little
| other than personal ethics pushing good CEO behavior, and
| when the incentives look like that, swimming upstream is
| hard.
|
| > We have to shake them free from those ideas and get them to
| realize that there are good and bad companies out there and
| they have options.
|
| Not everyone does have options, though. This is why instead
| of telling people to just avoid the bad CEOs, workers should
| unionize and collectively bargain against the bad CEOs. I'm
| sure I'll be seeing a lot of class warfare generalizations
| about "unions bad" in response to this suggestion.
| philipallstar wrote:
| > Class warfare generalizations have become the safe outlet
| for internet rage because going after CEOs and billionaires
| is most "punching up" construction that is generally
| relatable.
|
| The endless re-rise of Marxism has made people assume that
| any punching is appropriate in the first place, and it's just
| a question of who. Saying "these are the people it's okay to
| punch" is dystopian.
| gilfaethwy wrote:
| And yet, the ruling class seems quite happy to punch the
| poor - and this is not dystopian? Let's not get into the
| tolerance paradox here, because if someone is already
| getting punched, and the puncher refuses to stop... well,
| yes, it's okay to punch the puncher.
| nancyminusone wrote:
| When companies do something terrible (and they do, all the
| time) who are you going to blame for it? It's not at all
| surprising that CEOs have earned the reputation they have.
| aphyr wrote:
| I am, oddly enough, the chief executive officer of two
| (trivially small) tech companies.
| theredleft wrote:
| cheers. I think you're doing a good job and ruffling some
| feathers here! Your content has been great.
|
 | I highly recommend reading Marx. Your content touches on related
| Marxist topics like the 'Fetishism of Commodities' (Software
| as Witchcraft) and the Labor Theory of Value.
| aphyr wrote:
| There's a copy of Das Kapital on the shelf behind me right
| now, though I don't count myself conversant enough to go
 | _super_ deep on class critique. Figured I'd point a few
| very vague fingers in that direction and let folks with
| more experience talk about it.
| svilen_dobrev wrote:
 | I read this the other day:
| https://jacobin.com/2026/03/work-deskilling-labor-
| capitalism...
|
 | Brushing the socialism aside (been there, seen that), it
 | talks about deskilling as an inevitable consequence of
 | technology. IMO LLMs put that on steroids, and eat higher
 | up the mental chain.
| Quarrelsome wrote:
 | Btw, why am I, as a Brit, blocked via my traditional routing
| because of the OSA? What possible features do you have on
| that site to make that relevant?
| DonaldPShimoda wrote:
| > people seem to put "CEOs" on a pedestal any time things come
| up, like they're an alien life form
|
| Might I suggest a viewing of the 2025 film "Bugonia"?
| evan_a_a wrote:
| spoilers
| tencentshill wrote:
| >My
|
| And who are you? An account created for one post? There is a
 | pattern of green accounts with usernames vaguely related to the
| subject matter of their comments.
| Papazsazsa wrote:
| previously: https://news.ycombinator.com/item?id=47754379
| dlev_pika wrote:
| I think I've seen this article posted every day for the past
| week or so
| hk__2 wrote:
| No you haven't, because it was published today. What you've
| seen are past articles from the same author on the subject
| that all share the same "The Future of Everything Is Lies, I
| Guess:" prefix.
| dlev_pika wrote:
| Oh that's what's going on? Was confused as to why the same
| title kept popping up. Thank you.
| AndrewKemendo wrote:
 | This has been on the front page for over a week in different
 | forms. What gives?
|
| https://hn.algolia.com/?q=future+of+everything+is+lies
| baal80spam wrote:
 | There is a new part added every day.
| 0xbadcafebee wrote:
| > more like witchcraft than engineering
|
 | Welcome to web development, buddy.
|
| > how ML might change the labor market
|
| Human labor is expensive. If LLMs do make things cheaper and
| faster to produce, you don't need that many humans anymore.
| Again, assuming the improvement is real, there absolutely will be
 | shrinkage in headcount for existing businesses. What remains to
| be seen is how much cheaper machines make work. 1.5x? 2x? 10x?
| 100x?
|
| > unlike sewing machines or combine harvesters, ML systems seem
| primed to displace labor across a broad swath of industries [...]
| The question is what happens when [..] all lose their jobs in the
| span of a decade
|
| It's more like hand tools -> power tools; a concept applied to
| many things. Everyone will adopt them, and you'll need fewer
| workers who'll work faster with less skill. You get a gradual
| labor force shrinkage, but also an increase in efficiency, so
| it's not like a hole is opening up in your economy. A strong
| economy can create new jobs, from either private or public
| sources.
|
| > ML allows companies to shift spending away from people and into
| service contracts with companies like Microsoft
|
 | The price of hardware, as it always has been, is on a downward
 | trend, while the efficiency of open weights is going up (it will
 | plateau eventually, but it's still going up). We already spend
 | $20,000 on servers, whether it's buying them once on-prem or
 | renting them in AWS. ML is just another piece of software
 | running on another piece of hardware.
|
| > if companies are successful in replacing large numbers of
| people with ML systems, the effect will be to consolidate both
| money and power in the hands of capital
|
 | That ship left port like 30 years ago, dude. Laborers have no
| power in the 21st century.
| fnimick wrote:
| > That ship left port like 30 years ago dude. Laborers have no
| power in the 21st century.
|
| Maybe we should fix that.
| altruios wrote:
 | Less "maybe", more "should have, yesterday". Do so now, today.
| cratermoon wrote:
| "Another critical lesson is that humans are distinctly bad at
| monitoring automated processes".
|
| Humans are also distinctly bad at noticing certain kinds of bugs
| in software. Think off-by-one errors, deadlocks, or any sort of
| bug you've stared at for days and not noticed the one missing or
| extra semicolon. But LLMs can generate a tsunami of subtly wrong
 | code in the time it takes a reviewer to notice one typo and miss
 | all the rest.
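 |
 | A minimal sketch (hypothetical Python, purely illustrative) of
 | the kind of off-by-one that survives review because every
 | individual line looks right:
 |
 |     def moving_average(xs, window):
 |         # Subtle bug: range() stops one window early, silently
 |         # dropping the final window. The correct bound is
 |         # len(xs) - window + 1.
 |         return [sum(xs[i:i + window]) / window
 |                 for i in range(len(xs) - window)]
 |
 |     moving_average([1, 2, 3, 4], 2)  # [1.5, 2.5] -- 3.5 missing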
| aphyr wrote:
| Yes. For more on this, see section 2:
| https://aphyr.com/posts/412-the-future-of-everything-is-lies...
| cratermoon wrote:
 | Ah, I see. I had not gotten that far. Something I got from
 | "Story of Your Life" by Ted Chiang: the sentence "The
 | rabbit is ready to eat" [1]. Also this old chestnut from NLP:
|
 | Fruit flies like a banana. Time flies like an arrow.
|
| [1] The movie Arrival is based on this novella.
| intended wrote:
| > "Another critical lesson is that humans are distinctly bad at
| monitoring automated processes".
|
| I believe the technical term is vigilance degradation?
| curuinor wrote:
| Omnissiah-bothering, I call it.
| mannanj wrote:
| > This feels hopelessly naive. We have profitable megacorps at
| home, and their names are things like Google, Amazon, Meta, and
| Microsoft. These companies have fought tooth and nail to avoid
| paying taxes (or, for that matter, their workers). OpenAI made it
| less than a decade before deciding it didn't want to be a
| nonprofit any more. There is no reason to believe that "AI"
| companies will, having extracted immense wealth from interposing
| their services across every sector of the economy, turn around
| and fund UBI out of the goodness of their hearts.
|
| > If enough people lose their jobs we may be able to mobilize
| sufficient public enthusiasm for however many trillions of
| dollars of new tax revenue are required. On the other hand, US
| income inequality has been generally increasing for 40 years, the
| top earner pre-tax income shares are nearing their highs from the
| early 20th century, and Republican opposition to progressive tax
| policy remains strong.
|
 | I think we are in general a highly naive, gullible class of
 | people: we were conditioned, programmed, and put into environments
 | where being this way was the norm and was rewarded. The leaders
 | and those extracting resources, whom we gullibly allow to trample
 | over our dignity and our rights, take advantage of this and
 | reinforce it through lobbying and influence over the mainstream
 | culture and media campaigns around us. Further, if social media
 | becomes a threat to their status, they have been shown to employ
 | their influence there too, through censorship and more. We may
 | therefore be best served by learning how not to be gullible, and
 | by growing some balls.
| simianwords wrote:
 | No, you don't have to review every single line of code produced by
 | AI out of fear for security. This is quite exaggerated, and I
 | think the author is biased by his own field.
| recursive wrote:
| You're right. You don't _have_ to. Unless you want correct and
| secure code.
| layer8 wrote:
| How do you determine which lines have to be reviewed?
| vegancap wrote:
| How come this is blocked in the UK? :S
| Jtarii wrote:
| I think he is trying to make some misguided political
| statement.
| kentm wrote:
| His reasoning doesn't seem like a political statement:
| https://news.ycombinator.com/item?id=47754379#47757803
|
| That seems very practical and well-reasoned to me.
| jerf wrote:
| The interesting question to me at the moment is whether we are
| still at the bottom of an exponential takeoff or nearing the top
| of a sigmoid curve. You can find evidence for both. LLMs probably
| can't get another 10 times better. But then, almost literally at
| any minute, someone could come up with a new architecture that
| _can_ be 10 times better with the same or fewer resources. LLMs
| strike me as still leaving a lot on the table.
|
| If we're nearing the top of a sigmoid curve and are given 10-ish
| years at least to adapt, we probably can. Advancements in
| applying the AI will continue but we'll also grow a clearer
| understanding of what current AI can't do.
|
| If we're still at the bottom of the curve and it doesn't slow
| down, then we're looking at the singularity. Which I would remind
| people in its original, and generally better, formulation is
| simply an observation that there comes a point where you can't
| predict past it at all. ("Rapture of the Nerds" is a _very_
 | particular possible instance of the unpredictable future; it is
| not the concept of the "singularity" itself.) Who knows what
| will happen.
| forgetfreeman wrote:
| "given 10-ish years at least to adapt, we probably can"
|
| Social media would like a word...
| 8n4vidtmkvmk wrote:
| We can adapt by shutting down social media. We don't really
| need that. It's been pretty bad since before the AI wave took
| off.
| fellowniusmonk wrote:
| We needed a better phone book we ended up in a world where
| most of our fellow citizens fucking casino.
| faangguyindia wrote:
 | We are at the bottom. It's just the start.
 |
 | We are in the pre-Pentium 4 era, in AI terms.
| fnimick wrote:
 | And you have evidence as a basis for this very confident
| statement... where?
| faangguyindia wrote:
| Intuition. It comes from the spiritual awakening and being
 | aware of your consciousness. Only time will prove what
 | turns out to be right.
| sophacles wrote:
| You worship the AI?
| faangguyindia wrote:
 | I see AI as having great utility, and we'll figure out ways
 | to better it. If I had any power, I would run nuclear power
 | plants to run AI datacenters and find other near-infinite
 | sources of energy to create deeper and deeper AIs. This
 | level of AI tech is in its infancy; it's evidently clear.
 | People are assuming it will stall soon, and won't go
 | beyond a certain point. I don't believe this at all; I
 | believe it will go much, much farther than this.
| leptons wrote:
 | An LLM is never, ever going to find "other near-infinite
 | sources of energy". All it can do is predict the next word
 | in an effort to make the user stop prompting it. That's all
 | it does. It does not have the ability to find solutions to
 | the world's problems.
| hypercube33 wrote:
 | Weird comparison - the P4 was a major flop out of the gate
 | (Rambus, anyone?) and at least by any good metric took three
 | revisions (P4C - hyperthreading) to come out where it
 | should have, ahead of its predecessor. The Pentium 3 before
 | it, which you are perhaps referring to, was the peak of its
 | era. So... it's going downhill, right? Or what are you even
 | saying?
| ofjcihen wrote:
| I'm seeing these extremely short but supremely confident hot
| takes with nothing to back them up on HN more and more these
| days. It's like X is leaking.
| MagicMoonlight wrote:
| We aren't anywhere near AGI. They've consumed the entirety of
| human knowledge and poisoned the well, and it still can't help
| but tell you to walk to the car wash.
|
| A peasant villager was sentient without a single book, film or
| song. You don't need this much data to be sentient. They're
| using a stupid method, and a better one will be discovered some
| day.
| pixl97 wrote:
| Sentience isn't intelligence.
| echelon wrote:
| > The interesting question to me at the moment is whether we
| are still at the bottom of an exponential takeoff or nearing
| the top of a sigmoid curve.
|
| Even using the models we have today, we have revolutionized
 | VFX, video production, and graphic design.
|
| Similarly, many senior software engineers are reporting 2-10x
| productivity increases.
|
| These tools are some of the most useful tools of my career. I
| don't even think the general consumer public needs "AI" in
| their products. If we just create control surfaces for experts
 | to leverage and harness the speedup, and to shape and control the
| outcomes, we're going to be in a very good spot.
|
| These alone will have ripple effects throughout the economy and
| innovation. We've barely begun to tap into the benefits we have
| already.
|
| We don't even need new models.
| ryandrake wrote:
| > Similarly, many senior software engineers are reporting
| 2-10x productivity increases.
|
| But are they making 2-10x compensation compared to before
| these tools? If not, these tools are not really useful to
 | you; they are useful to your employer. The most shocking
| thing I find about LLM-assisted development is how gleefully
| we are just handing all this value over to our employers,
| simultaneously believing that they are great because we're
| producing more. Totally bonkers!
| echelon wrote:
| > handing all this value over to our employers,
| simultaneously believing that they are great because we're
| producing more.
|
 | You could turn the tables and say that you can now launch
| your own business with far fewer resources.
|
| Who needs financial capital if you can do it all with solo
| / small team labor capital?
|
| Gossip Goblin ditched his studio and now a16z is trying to
| throw him money, which he's turned down. He's turning
| everyone down.
|
| https://www.youtube.com/watch?v=-Rzl7nUdEs4
|
| Dude is legit talented and doesn't need studio capital
| anymore.
|
| This is the end of the Hollywood nepotism pyramid, where
| limited production capital was available to only a handful
| of directors.
|
| We're kind of at the start of a revolution here. I'd be way
| more worried if I were Disney or Paramount.
|
| Couldn't you take a sabbatical and end it with a brand new
| SaaS you own and control? That's entirely within reach now.
|
| The people this is going to hurt are the ICs that don't
| have a go-getting type personality where they take full-
| stack ownership: marketing, branding, design, customer
| relationships, etc. If you can do those things, you're
| going to be a rock star with total autonomy.
|
| You ought to see what the indie game devs are doing with AI
| (when they aren't getting yelled at on Steam by the
| haters). It's legitimately incredible. Game designers are
| taking on full-stack ownership over the entire experience,
| and they're making some incredible stuff.
| ryandrake wrote:
| > If you can do those things, you're going to be a rock
| star with total autonomy.
|
| What percentage of developers can do these things? 1%?
| 0.1%? 0.01%? A very small percentage of developers have
| the desire to take on the full-stack, the temperament of
| good entrepreneurs, the product judgment of good Product
| Managers and ability of good Project Managers to juggle
| dependencies and timeframes. What about the rest of them?
| The remaining 99+% of us are just handing value over to
| our employers and getting a 5% raise in return--if we're
| lucky.
|
| So, the fact that a small percentage of rockstar
| developers can capture the full value of AI-assisted
| development reinforces the point that a small number of
| people/businesses are capturing that value. The vast
| majority of workers are not capturing any value.
| gilfaethwy wrote:
| So... a tiny fraction of people get to capture the value
| again, and at even greater environmental (and thus
| societal) cost than before? Wow, what a world.
| nostrademons wrote:
| Somewhere around 2005-2007, when people were wondering if the
| Internet was done, PG was fond of saying "It has decades to
| run. Social changes take longer than technical changes."
|
| I think we're at a similar point with LLMs. The technical stuff
| is largely "done" - LLMs have closer to 10% than 10x headroom
| in how much they will technologically improve, we'll find ways
| to make them more efficient and burn fewer GPU cycles, the cost
| will come down as more entrants mature.
|
| But the social changes are going to be _vast_. Expect huge
| amounts of AI slop and propaganda. Expect white-collar
| unemployment as execs realize that all their expensive
| employees can be replaced by an LLM, followed by white-collar
| business formation as customers realize that product quality
| went to shit when all the people were laid off. Expect the
 | Internet as we loved it to disappear, if it hasn't already.
| Expect new products or networks to arise that are less open and
| so less vulnerable to the propagation of AI slop. Expect
| changes in the structure of governments. Mass media was a key
 | element in the formation of the modern nation state; mass cheap
| fake media will likely lead to its fragmentation as any old Joe
| with a ChatGPT account can put out mass quantities of bullshit.
| Probably expect war as people compete to own the discourse.
| tossandthrow wrote:
 | You lean very strongly on the "slop" bias. Why?
|
| In managing a large to enterprise sized code base, I
| experience the opposite. I can guarantee a much more
 | homogeneous quality of the code base.
|
| It is the opposite of slop I am seeing. And that at a lower
| cost.
|
 | Today, I literally made a large and complex migration of all
 | of our endpoints. Took AI 30 minutes, including all frontends
| using these endpoints. Works flawlessly, debt principal down.
| chaps wrote:
| Which company do you work at so we can avoid your migrated
| endpoints?
| tossandthrow wrote:
| Wtf. You don't even know what the migration was about?
| chaps wrote:
| I mean, I'm always down for learning something new. But I
| hope what I learn includes the name of the company I'd
| like to avoid.
| tossandthrow wrote:
| Your tone is in conflict with the statement that you are
| curious.
| chaps wrote:
| It's because you're deflecting. :)
| tossandthrow wrote:
| Deflecting from what? Telling the company name so you can
| avoid it due to your incredibly curious nature?
| chaps wrote:
| Sigh.
|
 | Look, friend, I really hope you can realize how you sound
| in your post. You're _extraordinarily confidently_ saying
| that you refactored some ambiguous endpoints in 30
| minutes. Whenever I see someone act that confidently
 | towards refactoring, thousands of alarms go off in my head.
| I hope you see how it sounds to others. Like, at least
| spend longer than a lunch break on it with just a tad
| more diligence. Or hell, maybe even consider LIEing about
| how much time you spent on it. But my point is that your
| shortcuts _will_ burn you. If you want to go down that
 | path, I'm happy to be a witness to eventual
| schadenfreude.
|
| My issue isn't with the fact that you used AI. My issue
| is with how confident you are that it worked well and
| exactly to spec. I'm very well aware of what these
| systems can do. Hell, I've been able to get postgres to
| boot inside linux inside postgres inside linux inside
| postgres recently with these tools. But I'm also acutely
| aware of the aggressive modes that these systems can
| break in.
|
| So again, which company should we all avoid so that we
| can avoid your, specifically your, refactoring?
| tossandthrow wrote:
| I definitely did not say anything about ambiguous
| endpoints.
|
 | The migration was relatively straightforward and could
| likely have been implemented as automatic code
| transforms.
|
| What I did say was that it was complex.
| chaps wrote:
| Yikes. Have a good one.
| bsmith wrote:
 | All big tech companies are mandating that employees use AI
 | for tasks. Unless there's a similar movement to open
 | source that is AI-free, you're going to need to be tech-
 | free if you want to avoid companies that use AI.
| apsurd wrote:
| One point: yes, you're speaking from the power position.
| God-mode over a fleet of minions has always been an
 | engineer's wet-dream. That's not even bad per se. It's the
 | collateral damage downstream that's at issue. Maybe you
| don't see any damage, but that's largely the point. Is it
| really up to you to say?
| tossandthrow wrote:
| What is the collateral damage? In ensuring that a bunch
| of endpoints use the same structure using LLMs?
| apsurd wrote:
 | Let's not debate that it's possible to make very large,
| very safe changes. It is possible that you did that.
|
| This is about "slop bias". I'd wager that empowering
 | everyone, _especially_ power-positions, to ship 50x more
 | code will produce more code that is slop than not. You
 | strongly oppose this because it's possible for you to
| update an API?
|
| I'm stuck on the power-position thing because I'm living
| it. I'm pro-AI but there are AI-transformation waves
| coming in and mandating top-down. From their green-field
| position it's undeniable crush-mode killin' it.
| Maintenance of all kinds is separate and the leaders and
| implementors don't pay this cost. Maybe AI will address
| everything at every level. But those imposing this world
 | _assume_ that to be true, while it's the line-engineers
| and sales and customer service reps that will bear the
| reality.
| tossandthrow wrote:
| > Maybe AI will address everything at every level.
|
| I think this is the idea you need to entertain / ponder
| more on.
|
| I largely agree with you, what I don't agree with is the
| weighting about the individual elements.
|
 | My point was that I could do a 30-minute cleanup in
| order to streamline hundreds of endpoints. Without AI I
| would not have been able to justify this migration due to
| business reasons.
|
| We get to move faster, also because we can shorten
 | deprecation tails and generally keep code bases more fit,
 | more easily.
|
| In particular, we have dropped the external backoffice
| tool, so we have a single mono repo.
|
 | An AI does tasks all the way from the infrastructure
 | (setting policies on resources) all the way to the
| frontends.
|
 | Equally, if a resource is not addressed in our codebase,
 | we know with 100% certainty it is not in use, and it can
 | be cleaned up.
|
| Unused code audits are being done on a weekly schedule.
| Like our sec audits, robustness audits, etc.
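 |
 | A minimal sketch of what such an audit could look like
 | (hypothetical Python, single-file only; a real monorepo audit
 | would need cross-file and dynamic-reference resolution):
 |
 |     # audit.py: list top-level functions in a module that are
 |     # never referenced elsewhere in the same file.
 |     import ast, sys
 |
 |     tree = ast.parse(open(sys.argv[1]).read())
 |     defined = {node.name for node in tree.body
 |                if isinstance(node, ast.FunctionDef)}
 |     used = {node.id for node in ast.walk(tree)
 |             if isinstance(node, ast.Name)}
 |
 |     for name in sorted(defined - used):
 |         print(f"possibly unused: {name}")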
| apsurd wrote:
 | Yeah, the more I debate the AI-lovers, the more I can
 | entertain the possibility that it may very well turn out
 | that everything is an Agent. Encodable.
|
| I'm not a doomer either, but I do think this arc is a
| human arc: there's going to be a lot of collateral
| damage. To your point, Agents with good stewardship can
| also implement hygiene and security practices.
|
| It's important we surface potential counter metrics and
| unintended side effects. And even in doing so the unknown
| unknowns will get us. With that said, I like this
| positive stewardship framing, I'll choose to see and
| contribute to that, thanks!
| tossandthrow wrote:
| I definitely don't identify as an AI lover. For me year 0
 | of AI was February 6th, 2026, and the release of Opus 4.6.
|
 | Until that day we had roughly zero AI code in the code
| base (additions or subtractions). So in all reasonable
| terms I am a late adopter.
|
 | For code bases, AI does not concern me. We have for quite
| some time worked with systems that are too complex for
| single people to comprehend, so this is a natural
| extension of abstraction.
|
 | On the other hand, I am super concerned about AI and
 | society: the impact on human well-being from "easy" AI
 | relations over difficult human connection. The continued
| human alienation and relational violation (I think the
| "woke" discourse will go on steroids).
|
| I think society is going to be much less tolerant. And
| that frightens me.
| hliyan wrote:
| > Today, I literally made a large and complex migration of
 | all of our endpoints. Took AI 30 minutes, including all
| frontends using these endpoints. Works flawlessly, debt
| principal down.
|
| This is either a very remarkable or a very frightening
| statement. You're claiming flawless execution within the
| same day as the change.
|
| If you're unable to tell us which product this is, can you
| at least commit to report back in a month as to how well
| this actually went?
| tossandthrow wrote:
| It is a part of the smoke testing process right now.
|
 | But we run 90% test coverage, e2e tests, etc., none of
 | which have been altered, and all are passing.
|
| Migrations are generally not that high risk if you have a
| code base in alright shape.
| peterbell_nyc wrote:
| Seeing plenty of this. The quality of agentic code is a
| function of the quantity and quality of adversarial quality
| gates. I have seen no proof that an agentic system is
| incapable of delivering code that is as functional,
| performant and maintainable as code from a great team of
| developers, and enough anecdotes in the other direction to
| suggest that AI "slop" is going to be a problem that teams
| with great harnesses will be solving fairly soon if they
| haven't already.
| apsurd wrote:
 | I take your point, but then it makes me think: is there no
| more value in diversity?
|
| [Philosophy disclaimer] So in a code-base diversity is
| probably a bad idea, ok that makes sense. But in an
| agentic world, if everything is run through the Perfect
| Harness then humans are intentionally just triggers? Not
| even that, like what are humans even needed for?
| Everything can be orchestrated. I'm not against this
| world, this is an ideal outcome for many and it's not my
| place to say whether it's inevitable.
|
| What I'm conflicted on is does it even "work" in terms of
| outcomes. Like have we lost the plot? Why have any humans
 | at all? 1-person billion-dollar company incoming.
| Software aside, is the premise even valid? 1 person's
| inputs multiplied by N thousand agents -> ??? -> profit
| tossandthrow wrote:
| These are the right questions to ask.
| bluecheese452 wrote:
| Ironically the post saying it is not slop sounds exactly
 | like AI slop.
| tossandthrow wrote:
 | Too many spelling errors for that to be slop...
| skeeter2020 wrote:
| >> Works flawlessly, debt principal down.
|
| I don't doubt it completed the initial coding work in a
| short time, but the fact that you've equated that with
| flawless execution is on the concerning-scary spectrum. I
| can only assume you're talking "compiles-runs-ship it"
|
| The danger is not generating obvious slop, it's accepting
| decent and convincing outputs as complete and absolving
| ourselves of responsibility.
| tossandthrow wrote:
| You are right, and it happens that the output looks
| decent.
|
 | Code idioms, or patterns if you will, are largely our
| solution.
|
 | We have small pattern/[pattern].md files throughout the
| code base where we explain how certain things should be
| done.
|
| In this case, the migration was a normalization to the
| specific pattern specified in the pattern file for the
| endpoints.
|
 | Semantics were not changed and the transform was
 | straightforward. Just not a task I would have been able
 | to justify spending time on from a business perspective.
|
| Now, the more patterns you have, and the more your code
| base adheres to these patterns, the easier you can verify
| the code (as you recognize the patterns) and the easier
 | you can call out faulty code.
|
| It is easier to hear an abnormality in music than in
| atmospheric noise. It is the same with code.
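 |
 | A hedged sketch of what one such pattern file might say (the
 | rules and names below are invented for illustration, not the
 | actual file):
 |
 |     # pattern/endpoint.md (hypothetical)
 |     Every endpoint module must:
 |     - export a single handler(request) -> Response
 |     - validate input against its schema before any I/O
 |     - return errors only via error_response(code, message)
 |     - emit one structured log line per request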
| hn_throwaway_99 wrote:
| > Somewhere around 2005-2007, when people were wondering if
| the Internet was done
|
| Literally who wondered that? Drives me nuts when people start
| off an argument with an obvious strawman. I remember the time
| period of 2005-2007 very well, and I don't remember a single
| person, at least in tech, thinking the Internet was done. I
| don't know, maybe some ragebait articles were written about
| it, but being knee-deep in web tech at that time, I remember
 | the general feeling was that it was pretty obvious there was
| tons to do. E.g. we didn't necessarily know what form mobile
| would take, but it was obvious to most folks that the tech
| was extremely immature and that it would have a huge impact
| on the Internet as it progressed. That's just one example -
| social media was still in its nascent stages then so it was
| obvious there would be a ton of work around that as well.
| magicalist wrote:
 | > _I don't know, maybe some ragebait articles were written
 | about it, but being knee-deep in web tech at that time, I
 | remember the general feeling was that it was pretty obvious
| there was tons to do_
|
| Almost definitely professional ragebaiters in Wired or Time
| or whatever, yeah.
| nostrademons wrote:
| If you were in tech in 2005-2007 you were part of a small
 | minority of the general population. It often didn't _feel_
 | like a small minority because, well, you knew all those
 | other people on the Internet, but that's a pretty strong
| selection bias.
|
| There is, of course, the Paul Krugman quote from 1998 that
| by 2005 the Internet would be no more important than a fax
| machine. [1]
|
| Here's Wired in 2007 saying, in reference to Facebook, "no
| company in its right mind would give it a $15 billion
| valuation". [2]
|
| I remember, being at Google in ~2011, we used to laugh at
| the Wall Street analysts because they would focus on CPC
| numbers to forecast a valuation, which is important only if
| the number of clicks is remaining constant. We knew, of
 | course, that total Internet usage was still growing quite
| rapidly and that queries had increased by roughly 4x over
| the 2009-2013 timeframe.
|
| And a lot of people will say "If you're so smart, why
| aren't you rich?", and I'll point out that many people who
| assumed the Internet had lots of room to grow in 2005-2007
| _did_ end up very rich. Google stock has increased roughly
| 20x since 2007 (and 40x from its 2009 lows). Meta is now
| worth $1.6T, a 100x increase over the $15B valuation that
| everyone thought was insane in 2007. Amazon is also up
| about 100x. _It would not be possible to take the other
 | side of the trade and make these kinds of profits if the
| majority of people did not think the Internet was largely
| over_.
|
| [1] https://www.snopes.com/fact-check/paul-krugman-
| internets-eff...
|
| [2] https://www.wired.com/2007/10/facebook-future/
| lamasery wrote:
| > If you were in tech in 2005-2007 you were part of a
| small minority of the general population. It often didn't
| feel like a small minority because, well, you knew all
| those other people on the Internet, but that's a pretty
| strong selection bias.
|
| Didn't we only pass 50% of households having a home PC in
| like... '00 or '01 or something? And I mean just in the
| US, which was way ahead of the curve.
|
| > Here's Wired in 2007 saying, in reference to Facebook,
| "no company in its right mind would give it a $15 billion
| valuation". [2]
|
| I actually think that's correct... if the smartphone
| hadn't taken off _right_ after that. The "consumer"
| Internet and computing, the attention economy, et c.,
| functionally _is_ the smartphone. A desktop computer and
 | even a laptop aren't in use when driving, at the store,
| at the park, every moment on vacation, et c. It'd still
| only be nerds lugging computers everywhere if nobody'd
| managed to make a smartphone that's capable-enough and
| pleasant-enough-to-use to expand the market beyond the
| set of folks who might have had a beeper in earlier years
| (the part of the market Blackberry was addressing). A
| gigantic proportion of the "GDP of the Internet", if you
| will, exists because smartphones exist.
| fragmede wrote:
| I'm reminded of the quote that ATMs didn't unemploy bank
| tellers, smartphones did. While not owning a laptop may
| seem inconceivable to us here, smartphones exist as the
| _only_ connection to the Internet for many.
|
| The interesting question is without Apple and the iPhone,
| would RIM/Blackberry have "figured it out"? Would we be
| on 2-way "pagers" with keyboards and stupidly expensive
| data plans that you have to order separately? Because
 | while the original iPhone was a marvel in terms of
 | hardware, I think the biggest contribution was the
| integration with AT&T for the cellphone plan, which only
| Steve Jobs had the clout to pull off.
| Maxatar wrote:
 | I was also in tech at that time; in fact, I worked for
| Google during that period and people definitely thought
| that the Internet had reached its peak. So many criticisms
 | back then were not just about peak Internet but that all these
 | companies were blowing money on unproven business models:
 | they were unsustainable, unprofitable, it was all just
| hype.
|
| You also had numerous telecommunications companies going
| bust in one of the largest sector collapses in modern
 | financial history; the largest bankruptcy in history (at
| that time) was WorldCom, followed by the second largest
| bankruptcy in history with Global Crossing... Lucent
| Technologies went belly up and the largest telecom company
 | at the time, Nortel, lost 90% of its value, eventually going
| bankrupt in 2009.
|
| And then of course the great recession hit, tech companies
| took a massive blow, Microsoft, Google, Intel, Apple and
| other tech giants lost 50% of their stock value in a matter
| of months. You don't lose 50% of your value because people
| think you have a promising future.
|
| It wouldn't be until the explosive rise of smart phones and
| close to zero percent interest rates that sentiment turned
| around and tech companies ballooned in value in what would
| end up being the longest bull run in U.S. history.
| vharuck wrote:
| I agree with the gist of your points, but not much with these
| two:
|
| >followed by white-collar business formation as customers
| realize that product quality went to shit when all the people
| were laid off.
|
| These will be rare boutique affairs. Based on how mass
| production and cheap shipping played out, most people value
| price over quality. The economy will rearrange itself around
| those savings, making boutique products and services
| expensive.
|
| >mass cheap fake media will likely lead to its fragmentation
| as any old Joe with a ChatGPT account can put out mass
| quantities of bullshit.
|
| We have this today. And that's not a "same as it ever was"
| dismissal. Today, there are a lot of terminally online people
| posting the equivalent of propaganda (and actual propaganda).
| Social media pushes hot takes in audiences' faces, a portion
| of them reshare it, and it spreads exponentially. The only
| limitation to propaganda today is how much time the audience
| spends staring at the "correct" content provider.
| peterbell_nyc wrote:
| I model this as "stacked sigmoid curves". I have no reason to
| believe that any specific technological implementation will be
| exponential in impact vs sigmoidal.
|
| However if we throw enough money and smart people at the
| problems and get enough value from the early sigmoid curves,
| the effective impact of a large number of stacked sigmoids
| could theoretically average to a linear impact, but if the
| sigmoids stay of a similar magnitude (on average) and appear at
| a higher velocity over time, you end up with an exponential
| made up of sigmoids*
|
| * To be fair, it has been so long since I have done math that
| this may be completely incorrect mathematically - I'm not sure
| how to model it. However I think in practice more and more
| sigmoids coming faster and faster with a similar median
| amplitude is gonna feel very fast to humans very soon - whether
| or not it's a true exponential.
|
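 | As a toy model of the stacked-sigmoids idea (all constants here
 | are made up for illustration), summing similar-sized S-curves
 | whose onsets arrive faster and faster:
 |
 |     import math
 |
 |     def sigmoid(t, onset):
 |         # One technology S-curve, ramping from 0 to 1 near onset.
 |         return 1.0 / (1.0 + math.exp(-(t - onset)))
 |
 |     def stacked_impact(t, gaps):
 |         # Sum of S-curves; each new onset lands sooner than the
 |         # last, so the total steepens over time.
 |         onset, total = 0.0, 0.0
 |         for gap in gaps:
 |             total += sigmoid(t, onset)
 |             onset += gap
 |         return total
 |
 |     gaps = [8.0 * 0.8 ** k for k in range(30)]  # shrinking gaps
 |     print([round(stacked_impact(t, gaps), 1)
 |            for t in (0, 10, 20, 30, 40)])
 |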
| I'm honestly having a very hard time thinking through the
| likely implications of what's currently happening over the next
| 2-10 years. Anyone who has the answers, please do share. I'm
 | assuming from Cynefin that it's a perturbed complex adaptive
| system so I can just OODA or experiment, sense and respond to
| what happens - not what I think might happen.
| fny wrote:
| Why is everyone so damn obsessed with the singularity? You
| don't need superintelligence to disrupt humanity. We easily
| have enough advancement to change the economy dramatically as
| is. The adoption isn't there yet.
| Quarrelsome wrote:
| Moreover the singularity makes this crass assumption that a
| single player takes all. It seems to ignore a future of many,
| many AI players, or many, many human + AI players instead.
|
| Furthermore, regardless of how smart one thing is, it cannot
 | win a near-infinite series of poker games against 7 billion humans,
| who as a race are cognitively extremely diverse and adaptive.
| ikrenji wrote:
| that's kind of optimistic. for example a misaligned super
| AI might engineer a virus that wipes out most of the 7
| billion humans. that would put a damper on the adaptability
| of the human race...
| Quarrelsome wrote:
| and then might overfit the lack of danger in that
| aftermath, leading to those fragmented humans doing
| something to overthrow it. For all we know this AI might
| get bored and decide to make a cure, or turn itself off,
| or anything really.
| fzzzy wrote:
| The singularity does no such thing.
| Quarrelsome wrote:
| well that's certainly cleared it all up.
| kaibee wrote:
| > regardless of how smart one thing is, it cannot win
 | a near-infinite series of poker games against 7 billion humans,
|
| AI isn't one thing though. Really its kind of a natural
| evolution of 'higher order life'. I think that something
 | like an 'organization' (corps, governments, etc.), once large
 | enough, is at least as alive as a tardigrade. And for the
| people who are its cells, it is as comprehensible as the
| tardigrade is to any of its individual cells. So why
| wouldn't organizations over all of human history eventually
| 'evolve' a better information processing system than humans
| making mouth sounds at each other? (writing was really the
| first step on this). Really if you look at the last 12,000
| years of human society as actually being the first 12,000
| years of the evolutionary history of 'organizations', it
| kinda makes a lot of sense. And so much of it was exploring
| the environment, trying replication strategies, etc. And we
| have a lot of different organizations now, like an
| evolutionary explosion, where life finds various niches to
| exploit.
|
 | /schizoposting
| Quarrelsome wrote:
| > AI isn't one thing though.
|
| What's the single in "singularity" doing then?
|
| My issue is I feel like some people treat intelligence as
| an integer value and make the crass assumption that
| "perfect intelligence" beats all other intelligences and
| just think that's quite a thick way to think about it. A
| fool can beat an expert over the course of towards
| infinite hands because they happen to do something
| unexpected. Everything is a trade off and there's no such
| thing as perfect, every player has to take risk.
| jerf wrote:
| Even after I explained the exact usage I was invoking, the
| attractive nuisance of all the science fiction that has
| gotten attached to the term still prevented you and
| Quarrelsome from reading my post as written.
|
| I really wish the term hadn't been mangled so much. Though
| the originator of the term bears a non-trivial amount of the
| responsibility for it, having written some rather good
| science fiction on the topic himself. The original meaning
| from the paper is quite useful and nothing has stepped up to
| replace it.
|
| All the singularity means as I explicitly used it here is
| _you entirely lose the ability to predict the future_. It is
| relative to who is using it... we are all well past the
| Caveman Singularity, where no (metaphorical) caveman could
| possibly predict anything about our world. If we stabilize
| where we are now I feel like I have at least a grasp on the
 | next ten years. If we continue at this pace I don't. That
 | doesn't mean I believe AI will inevitably do this or that...
 | it means _I can't predict anymore_, which is really the
| exact opposite. AI doesn't have to get to "superintelligence"
| to wreck up predictions.
| tim333 wrote:
| >the originator of the term ... rather good science fiction
|
 | I guess you are thinking of Vernor Vinge, but the term first
| came up with John von Neumann in the 1950s:
|
| >...on the accelerating progress of technology and changes
| in human life, which gives the appearance of approaching
| some essential singularity in the history of the race
| beyond which human affairs, as we know them, could not
| continue
| gilfaethwy wrote:
| We've had enough advancement to change the economy for many
| decades, but the powers that be have insisted that, despite
| the lack of need, we continue to toil doing completely
| unnecessary work, because that's what's required to extend
| their fiefdoms.
|
| Not that the singularity has any relevance here, either -
| except maybe that the robots take over, and the billionaires
| have missed the boat? I don't know.
| lamasery wrote:
| > The adoption isn't there yet.
|
 | It's worth noting that after ~50 years [edit: to preempt
| nitpicking, yes I know we've been using computers
| productively quite a bit longer than that, but that's roughly
| the time when the computerized office started to really gain
| traction across the whole economy in developed countries],
| we've only extracted a tiny proportion of the hypothetical
 | value of _computers_, period, as far as benefits to the
| economy and potential for automation.
|
| I actually think a lot of the real value of LLMs is "just"
| going to be making accessing a little (only a little!) more
| of that existing unrealized benefit feasible for the median
| worker.
|
| My expectation is that we'll also harness only a tiny
| proportion of the hypothetical value of LLMs. We're just not
| good enough at organizing work to approach the level of
| benefit folks think of when they speculate about how
| transformational these things will be. A big deal? Yes. _As_
| big a deal as some suppose? Probably not.
|
| [edit: in positive ways, I mean. I think we're going to see
| huge boosts in productivity to anti-social enterprises. I'd
 | not want to bet on whether the development of LLMs is going
| to be net-positive or net-harmful to humanity, not due to the
| "singularity" or "alignment" or whatever, but because of the
| sorts of things they're most-useful for]
| tim333 wrote:
| >Why is everyone so damn obsessed with the singularity?
|
 | I don't think most are - it tends to be regarded as rather
| cranky stuff, and a lot of people who use the term are a bit
| cranky.
|
 | Even so, AI maybe overtaking human intelligence is an
| interesting thing in human history.
| afthonos wrote:
| An interesting thing in AI history. For human history, it's
| epochal.
| guelo wrote:
| Because it's happening no matter how much you'd rather ignore
| it or scoff at it.
| balamatom wrote:
| >Why is everyone so damn obsessed with the singularity?
|
| Because they are captives (to a system of incentives that is
| already "superintelligent" in comparison to any individual)
| who are hoping for salvation (something to make them free
| against their will; since it is their will which is
| captured).
|
| Singularity, then, is the point at which the system itself
| "finally becomes able to imagine what it is like to be a
| person", and decides to stop torturing people. IMO, this is
| unlikely to work out like that.
| CamperBob2 wrote:
| _Why is everyone so damn obsessed with the singularity? You
| don 't need superintelligence to disrupt humanity._
|
| And at the same time, we don't take advantage of the
| intelligence we already have.
| juped wrote:
| Neither! A logistic curve is just an exponential with a
| carrying capacity - it is still an exponential! There is no
| reason to believe that AI capability, which grows
| _logarithmically_ with the handwaved-resources used on it
| (roughly, this is compute and training data), grows, has grown,
| or is growing exponentially!
|
| I know this sounds like "the moderate position" to people but
| you are accepting that something logarithmic is somehow in fact
| exponential (these are inverse functions of one another) based
| on no evidence or argument.
|
| Here is Sam Altman, the one man in the world with the most
| incentive to overstate AI capability, accepting the extremely-
| well-known logarithmic growth:
| https://blog.samaltman.com/three-observations
|
| What we see in reality is a basically-linear growth pattern due
| to pushing exponentially more resources into this logarithm.
| keeda wrote:
| I've said it before, but it would be a mistake to just focus on
| the models, and ignore everything else that is changing in the
| ecosystem -- tools, harnesses, agents, skills, availability of
| compute, etc. -- things are changing very quickly overall.
|
| The thing that is changing most rapidly, however, is the
| understanding of how to harness this insanely powerful,
| versatile, and unpredictable new technology.
|
| Like, those who experimented deeply with LLMs could tell that
| even if all model development completely froze in 2024,
 | humanity had decades' worth of unrealized applications and
 | optimizations to explore, even with AI recursively accelerating
 | the process of exploration. As a trivial example, way back in
| 2023 anyone who got broken code from ChatGPT, fed it the error
| message, and got back working code, knew agents were going to
| wreck things up very quickly. It wasn't clear that this would
| look like MD files, Claude Code, skills, GasTown, and YOLO
| vibe-coding, but those were "mere implementation details."
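|
| (That 2023 loop, as a minimal sketch -- ask_llm here is a stub
| standing in for whatever chat-completion call you prefer, not a
| real API:
|         import subprocess
|
|         def ask_llm(prompt: str) -> str:
|             # Stub: wire up any real model call here.
|             return "print('hello')"
|
|         code = ask_llm("Write a Python script that does X.")
|         for attempt in range(5):
|             result = subprocess.run(["python3", "-c", code],
|                                     capture_output=True, text=True)
|             if result.returncode == 0:
|                 break
|             code = ask_llm("This code failed:\n" + code +
|                            "\nError:\n" + result.stderr + "\nFix it.")
| Everything since -- harnesses, agents, skills -- is elaboration on
| that loop.)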
|
| I'm half-convinced that an ulterior goal of these AI companies
| in giving away so many cheap tokens (other than the lack of a
| better business model) is to encourage experimentation and
| overcome this "capability overhang."
|
| Given all this, it's very hard to judge where we are on the
| curve, because there isn't just one curve; there are actually
| multiple interacting curves.
| joquarky wrote:
| Anyone who believes in materialism should recognize that there
| is still a lot of room to improve.
| _doctor_love wrote:
| Another interesting one from 'aphyr -- I think the points around
| the Ironies of Automation deserve deeper focus, possibly even a
| separate follow-up post.
|
| I would encourage folks to look at the following industries:
| nuclear safety, commercial aviation, remote surgery. These
| industries have dealt with the issues of automation for much
| longer than we have as programmers.
|
| In the research I've done, these industries went through a
| journey in the 20th century similar to ours now: once something
| becomes automated enough, the old way simply won't work. You have
| to evolve new frameworks and procedures to deal with it.
|
| So in the case of aviation they developed CRM and SRM (crew and
| single-pilot resource management) - how to manage the airplane
| as a crew and how to manage it as a solo
| operator. Remember that modern airplanes are highly automated!!
| The human pilot is not typically hands-on-wheel for most of the
| flight.
|
| In the case of surgeons, they found that de-skilling without
| regular practice can occur in as little as four weeks! So to
| combat that, some surgeons are now required to practice in
| simulated environments to keep their skills sharp.
|
| My feeling is that 'aphyr is right in the short-to-medium term.
| Current market forces and US regulatory posture (or lack thereof)
| make it so that there are fewer rules and less enforcement. IMHO
| the results are depressingly predictable but the train has left
| the station with enough momentum that there's no stopping it. If
| we survive long enough to make it past the medium-term things
| will change.
| aphyr wrote:
| Thank you for this! I really wanted to go deeper on human
| factors, and I think there's a lot to be said about CRM and
| sociotechnical systems design, especially when ML gets used for
| decision support. Ultimately wound up truncating that section
| (along with more of the economic critique) because the piece
| was already far too long.
| intended wrote:
| There's a paper out there on designing IT systems, from god
| knows when. It is incredibly dry, except for a line in it
| that stood out: All IT systems are political systems, because
| they decide how information and decisions flow.
|
| I can only guess as to how much content you would have to
| explore on that axis.
| _doctor_love wrote:
| You're welcome! I imagine you already know this one as well
| but just in case.
|
| The Art of Doing Science and Engineering: Learning to Learn,
| by the late Dr Richard Hamming. See especially Chapter 2.
|
| A point Hamming makes is that when transitions from hand to
| machine production occurred, _what_ is built usually ends up
| changing, as the old techniques don't transfer 1:1 from the
| old world.
|
| So for instance, we went from nuts and bolts to rivets and
| welding (Dr Hamming's literal example). This required
| builders to produce an equivalent product to the old, built
| with different techniques - and crucially! - under tighter
| control limits.
|
| The reason things are going all over the place with AI at the
| moment is that it's speed, speed, speed. We had an all-hands
| at my company recently where the top brass talked about
| AI. The only thing mentioned was speed - go faster, do more,
| etc. Not a single soul talked about quality.
|
| But if you know your software engineering wisdom you know
| that you can only pick two when it comes to speed, scope, or
| quality. It's going to get real dumb for a while until people
| realize/remember quality is how you achieve speed.
| aphyr wrote:
| I have not read Hamming yet, thank you!
| _doctor_love wrote:
| You're in for a treat :)
| enraged_camel wrote:
| >> Imagine a co-worker who generated reams of code with security
| hazards, forcing you to review every line with a fine-toothed
| comb. One who enthusiastically agreed with your suggestions, then
| did the exact opposite. A colleague who sabotaged your work,
| deleted your home directory, and then issued a detailed, polite
| apology for it. One who promised over and over again that they
| had delivered key objectives when they had, in fact, done nothing
| useful. An intern who cheerfully agreed to run the tests before
| committing, then kept committing failing garbage anyway. A senior
| engineer who quietly deleted the test suite, then happily
| reported that all tests passed.
|
| >> You would fire these people, right?
|
| Okay, now imagine a different colleague. One who writes a solid
| first draft of any boilerplate task in seconds, freeing you to
| focus on architecture instead of plumbing. A dev who never gets
| defensive when you rewrite their code, never pushes back out of
| ego, and never says "that's not my job." A pair programmer who's
| available at 3 AM on a Sunday when prod is down and you need to
| think out loud. One who remembers every API you've forgotten,
| every flag in every CLI tool, every syntax quirk in a language
| you use twice a year, or even every day.
|
| You'd want that person on your team, right? In fact, you would
| probably give them a promotion.
|
| Here's the thing: the original argument describes real failure
| modes, but then commits a subtle sleight of hand. It
| _personifies_ the tool as a colleague with agency, then condemns
| it for lacking the judgment that agency implies. But you don't
| fire a table saw because it doesn't know when to stop cutting,
| right? You learn where to put your hands.
|
| Every flaw in that list is, at the end of the day, a flaw in the
| workflow, not the tool. Code with security hazards? That's what
| reviews are for. And AI-generated code gets reviewed at far
| higher rates than the human code people have been quietly rubber-
| stamping for decades. Commits failing tests? Then your CI
| pipeline should be the gate, not a promise. Deleted your home
| directory? Then it shouldn't have had the permissions to do that
| in the first place. In fact, the whole "deleted my home
| directory" shit is the same thing as "our intern deleted the prod
| database". We all know that the response to the latter is "why
| did they have permission to prod in the first place??" AI is the
| same way, but for some god damn reason people apply totally
| different standards to it.
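|
| Concretely, "CI should be the gate, not a promise" can be as
| simple as a pre-commit hook -- a sketch, assuming pytest; a
| branch-protection rule in CI is the sturdier version of the same
| idea:
|         #!/usr/bin/env python3
|         # Save as .git/hooks/pre-commit (executable); git aborts
|         # the commit whenever this script exits non-zero.
|         import subprocess
|         import sys
|
|         result = subprocess.run(["pytest", "-q"])
|         if result.returncode != 0:
|             print("Tests failed; commit rejected.", file=sys.stderr)
|             sys.exit(1)
| No amount of an agent "promising" it ran the tests gets past that.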
| aphyr wrote:
| > It personifies the tool as a colleague with agency,
|
| Er, just to be clear, I am not personifying these tools. This
| entire section is a critique of the attempt to frame LLMs as
| "coworkers".
| simoncion wrote:
| > But you don't fire a table saw because it doesn't know when
| to stop cutting, right?
|
| If I purchased a table saw and that table saw irregularly and
| unpredictably jumped past its safeties -as we've plenty of
| evidence that LLMs [0] do-, then I would [1] immediately stop
| using that saw, return it for a refund, alert the store that
| they're selling wildly unsafe equipment, and the relevant
| regulators that a manufacturer is producing and selling wildly
| unsafe equipment.
|
| [0] ...whether "agentic" or not...
|
| [1] ...after discovering that yes, this is not a defective
| unit, but this model of saw working as designed...
| enraged_camel wrote:
| But that's the thing: the table saw has _safeties_. Someone
| put them there. Without those safeties, it, too, would jump
| unpredictably.
|
| Scary scenarios like AIs deleting home directories are the
| result of the developers explicitly bypassing those safeties.
| simoncion wrote:
| > But that's the thing: the table saw has _safeties_.
| Someone put them there.
|
| You noticed that I mentioned that this hypothetical table
| saw has poorly-designed, entirely inadequate safeties?
| Things like Opus treating the data it presents to the user
| as commands that it should execute [0] is _definitely_ [1]
| a sign of solid, well-designed safety mechanisms.
|
| You might choose to retort "Well, that's because the user
| isn't running the tool in the mode that makes it wait for
| confirmation before doing anything of consequence!". In
| reply, I would point in the general direction of the half-
| squillion studies indicating that a system whose safety
| requires an operator to remain vigilant when presented with
| a large volume of irregularly-presented decision points
| (nearly all of which can be safely answered with a "Yes, do
| it.") does not make for a safe system. [2] It -in fact-
| makes for a system that's designed [3] to be unsafe.
|
| You might also choose to retort "That's never happened to
| me, or anyone that I know about.". _Intermittent_ failures
| of built-in safeties that happen under unpredictable
| circumstances are far, _far_ worse than predictable
| failures that happen under known ones. I hope you
| understand why.
|
| [0] <https://old.reddit.com/r/ClaudeCode/comments/1sex28q/o
| pus_46...>
|
| [1] ...not...
|
| [2] I would also -somewhat wryly- note that "An AI Agent
| that does all of your scutwork, but whose every decision
| you have to carefully scrutinize, because it will
| irregularly plan to do something irreversibly destructive
| to something you care about." is not at all the picture
| that "AI" boosters paint of these tools.
|
| [3] ...whether intentionally or not...
| m0llusk wrote:
| Bullshit is more dangerous than lies.
| pixl97 wrote:
| In enough quantity it becomes impossible to tell the difference.
|
| https://en.wikipedia.org/wiki/Brandolini%27s_law
| elcapitan wrote:
| I really appreciate this series of posts, as it serves as a good
| summary of key points of the discourse around AIs, and links to
| the relevant articles etc. I find following all those discussions
| myself exhausting, so if I can find this all in one place and
| read it nicely grouped, that is very helpful.
| buildbot wrote:
| I love the analogy of AI coding as witchcraft! It's very accurate
| to how working with these tools feels - at one point I was forced
| to invoke a "litany against stubbing" in a loop to make Claude
| Code actually implement a Renode setup for some firmware. That
| worked really well.
|
| It feels like hexing the technical interview come to real life ;)
| barbazoo wrote:
| > I continue to write all of my words and software by hand, for
| the reasons I've discussed in this piece--but I am not confident
| I will hold out forever.
|
| There it is, an actual em-dash in the wild, written by hand.
| aphyr wrote:
| I put... I'd guess around 60 hours into editing this piece, and
| had review from a dozen-odd friends, and I am _still_ finding
| and fixing errors. I imagine that asking an LLM for a
| copyediting pass probably would have been helpful, but
| goshdarnit, I want to show that we can still write somewhat-
| passable prose by hand.
| bluefirebrand wrote:
| > I want to show that we can still write somewhat-passable
| prose by hand
|
| For what it's worth I think it's pretty reasonably good
| prose, not merely somewhat passable
| aphyr wrote:
| Thank you <3
| itissid wrote:
| Every day I sit down to build a product for my clients. I am a
| one-man shop _now_; before, I had people helping me. My mental
| state is not good. A very odd thing happens when Claude or Codex
| complete code fast: I begin to think of all the other things
| that are needed to make the AI agent work better. I begin to
| worry about problems that other people used to help me with and
| think "Can I do those too?" -- problems like product design,
| devops work, etc. In a bid to try, I get nerd-sniped by the
| velocity people seem to have -- and these are respected devs,
| not just Twitter claims. And because I am so bad at "doing it
| all," it's causing my mental health to suffer because of the
| long hours I have to put in. I miss the friends and colleagues
| I worked with.
|
| I always struggled with coding before 2023, but I made ends
| meet, put food on the table, could work sane hours, and knew
| what I needed to do. Logically I should have been happy that I
| did not have to grind on code -- and some days I truly am --
| but that it would yield such poor quality of life at such a
| high cost was not what I expected...
| artur_makly wrote:
| you can always course-correct and find your sweeter spot.
| itissid wrote:
| For course correction, I began by trying to think a bit
| more about solving problems for my clients by talking to them
| more often. That helps to some extent, because I feel happy
| talking to them and understanding how to solve their
| problems.
|
| What I do feel is the issue: having to do everything myself to
| keep costs down. The hiring-another-dev-versus-doing-it-with-AI
| calculation is real, and it has collateral damage: I spend more
| time trying to build AI agents to do the work, and there are
| one or two fewer jobs I create.
| itissid wrote:
| For anyone who has not read the cockpit recording of Air
| France 447, I would encourage them to [1]. It is simply a
| jaw-dropping study in how fast things go wrong -- a risk with
| AI we have barely begun to acknowledge, let alone regulate as
| a community.
|
| [1](https://tailstrike.com/database/01-june-2009-air-france-447/)
| macrocosmos wrote:
| That catastrophe is entirely on Bonin the bonehead.
| tra3 wrote:
| I read through the link. The other pilot and the captain are
| complicit by virtue of being there. Autopilot disengages
| at 2:10 and they crash at 2:14. Terrible.
|
| My other immediate thought -- Tesla's autopilot. I've never
| used it so I'm not sure I'm fully correct here, but
| apparently it requires you to be vigilant and take over in
| certain situations? Wonder how well that works out in
| practice.
| jcalvinowens wrote:
| Anybody who is interested should read the full report:
| https://www.faa.gov/sites/faa.gov/files/AirFrance447_BEA.pdf
| groby_b wrote:
| I really wish we'd stop arguing about AI with a "some automation
| failed, so all automation is bad" approach.
|
| Yes, AF447 crashed due to lack of training for a specific
| situation. And yet, air travel is safer than ever.
|
| Yes, that Tesla drove into a wall, and yet robotaxis exist, work
| well, and are significantly safer than human drivers.
|
| Yes, there are a lot of "witchcraft" approaches to working with
| AI, but there are also significant accelerations coming out of
| the field that have nothing to do with witchcraft.
|
| Yes, AI occasionally makes very stupid mistakes - but ones any
| competent engineer would have guardrails in place against.
|
| And so a lot of the piece spends time arguing against strawmen
| propped up by anecdotes. And that detracts from the deeply necessary
| discussion kicked off in the second part, on labor shock, capital
| concentration, and fever dreams of AI.
|
| The problem of AI isn't that it's useless and will disrupt the
| world. It's that it's already extremely useful - and that's the
| thing that'll lead to disrupting the world.
| tra3 wrote:
| I think you're maybe oversimplifying a bit. I don't think the
| argument here is that "AI" is not 100% reliable so we shouldn't
| use it; rather, there are issues we need to be aware of.
|
| Specifically, AI companies want to inflate the utility of AI
| because that's how they make money. There should be guardrails
| where appropriate. Unfortunately, as usual, we need to make
| mistakes before we can learn from them.
|
| Robotaxis do exist, but they are not all made equal. Teslas, for
| instance, are 4x worse than humans:
| https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...
| _dwt wrote:
| I think you may have missed a subtle point: there is an
| especial risk from automation which almost always works
| correctly. The aviation industry calls the phenomenon
| "automation fatigue". It's very difficult for humans to stay
| alert and monitor systems like these, and the use of the
| systems tends to lead to de-skilling over time in the very
| skills required to monitor them and fix the (rare but fatal -
| at least in aviation) error cases when they occur.
| groby_b wrote:
| And yet, aviation safety keeps improving.
|
| I didn't miss that point. I'm saying it's blown out of
| proportion, and that diminishes the value of the actually
| important content.
| GistNoesis wrote:
| Programming is indeed becoming witchcraft; with LLMs it is of the
| utmost importance that you choose the right database
| administrator.
|
| For example, I'm now relying on Soteria, the Greek goddess of
| safety, salvation, and preservation from harm, to act as my
| database administrator.
| drivebyhooting wrote:
| In the case of UBI, how would we differentiate between a
| previously highly paid professional (SWE, lawyer, author) and a
| pauper (janitor, car washer, unemployed)?
|
| It's only fair that they would receive the same amount. But then
| how can the former category continue to fulfill their
| obligations?
| stevenally wrote:
| "But then how can the former category continue to fulfill their
| obligations?".
|
| They can't. Just like the steelworkers who lost their jobs in
| the 1970s.
| intended wrote:
| Does Aphyr give himself a limit of 6 semicolons? If their editor
| returns, will this count drop to 0?
|
| (And before anyone brings pitchforks out, this is what they
| wrote in a previous article:
|
| > "Cool it already with the semicolons, Kyle." No. I cut my teeth
| on Samuel Johnson and you can pry the chandelierious intricacy of
| nested lists from my phthisic, mouldering hands. I have a
| professional editor, and she is not here right now, and I am
| taking this opportunity to revel in unhinged grammatical squalor.
|
| My life was made poorer for knowing that semicolons are
| apparently a sin, but richer for the rebellion.)
| keeganpoppen wrote:
| i respect the author of this post wayyyy too much to ever imply
| that i know more than them, or that i even have proprietary
| knowledge that they, themselves do not possess. i admire aphyr,
| and i aesthetically agree with many of the arguments offered
| forth. but this whole thing feels a bit cherry-picked-- i'm not
| gonna go chapter-and-verse (cf. belt-and-suspenders) about it,
| but on some levels this comes across as a bit superficial. i
| think the general thrust-- that ai is a sort of Narcissus's
| pond-- is completely a reasonable and well-considered take. but i
| would be shocked if someone with the intellectual powers of
| someone like Aphyr has never had an interaction with an ai in
| which they felt like they were interacting with the deep
| recesses of their mind in a way both profound and, more
| importantly, productive. and yeah, there's plenty of pyrite in
| them thar hills. but, it does have this almost Lord of the Rings
| The One Ring -esque pull when you get into a certain "embedding
| space" (/ thought space) in a certain thread conversing with ai.
| it genuinely is a profound transformation of cognition, and
| working superlinearly productively with it is a matter of "when",
| not "if". i share all the same aeathetic concerns, and all the
| deeper ones. but there have been sessions that i have had with ai
| that made me blankly stare up at the heavens as well, and i don't
| think i'm anywhere near the only one.
| mrdependable wrote:
| Care to provide any examples of what sort of content is in
| these conversations you had with AI?
| hliyan wrote:
| > One of her key lessons is that automation tends to de-skill
| operators
|
| I recently discovered an example of this phenomenon in a
| completely unrelated area: navigation. About a week ago, I
| realized that I couldn't remember the exact turns to reach a
| certain place I started driving to recently, even after having
| driven there about 3-4 times over a period of a month. Each time
| I had used Google Maps. When I used to drive pre-Google-Maps, I
| would typically develop a good spatial model of a route on my
| third drive. This skill seems to have atrophied now. Even when I
| explicitly decide to drive without Google Maps, and make mental
| notes of the turns, my retention of new routes is now much weaker
| than it used to be. Thankfully, routes I retained before becoming
| Google-Maps-dependent are still there.
| acoard wrote:
| Plato on how reading and writing make us more forgetful as we
| rely on this new technology:
|
| > And so it is that you by reason of your tender regard for the
| writing that is your offspring have declared the very opposite
| of its true effect. If men learn this, it will implant
| forgetfulness in their souls. They will cease to exercise
| memory because they rely on that which is written, calling
| things to remembrance no longer from within themselves, but by
| means of external marks.
| ofjcihen wrote:
| I see this copy-pasted everywhere these days, but it misses a
| huge point which is that written things don't read or
| understand themselves.
| _dwt wrote:
| "Yes, Socrates, you can easily invent tales of Egypt, or of
| any other country."
| wslh wrote:
| I wonder if vibe coding is partly what happens when software
| engineering fails to converge on reusable abstractions. Instead,
| we got fragmented tools and endless reinvention of the same
| components, and LLMs arrived as an ad hoc abstraction layer on
| top.
| Terr_ wrote:
| Copy-paste-and-hope As A Service.
| asdfman123 wrote:
| > I can imagine a future in which some or even most software is
| developed by witches, who construct elaborate summoning
| environments, repeat special incantations ("ALWAYS run the
| tests!"), and invoke LLM daemons who write software on their
| behalf.
|
| This sort of prompting is only necessary now because LLMs are
| janky and new. I might have written this in 2025, but now LLMs
| are capable of saying "wait, that approach clearly isn't working,
| let's try something else," running the code again, and revising
| their results.
|
| There's still a little jankiness but I have confidence LLMs will
| just get better and better at metacognitive tasks.
|
| UPDATE: At this very moment, I'm using a coding agent at work and
| reading its output. It's saying things like:
|
| > Ah! The command in README.md has specific flags! I ran:
| <internal command>. Without these flags! I missed that. I should
| have checked README.md again or remembered it better. The user
| just viewed it, maybe to remind me or themselves. But let's first
| see what the background task reported. Maybe it failed because I
| missed the flags, or passed because the user got access and
| defaults worked.
|
| AI is already developing better metacognition.
| gilfaethwy wrote:
| I'm concerned that developing better metacognition is really
| just throwing more finite resources at the problem. We surely
| don't have unlimited compute, or unlimited (V)RAM, and so there
| must be a wall here. If it could be demonstrated that this
| improved metacognition was coming _without_ associated
| increases in resource utilization, I would find these
| improvements to be much more convincing... but as things stand,
| we're very much not there.
|
| (There may be an argument here re: the move from dense to MoE
| models, but all research I am aware of suggests that MoE models
| are not dramatically more efficient than dense models - i.e.,
| active parameter count is not the overriding factor, and total
| parameter count is still extremely important, though it does
| seem to roughly follow a power law.)
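|
| (For a feel of that power law, a toy curve in the style of the
| Kaplan et al. scaling-law paper -- the constants are from memory
| and purely illustrative:
|         # L(N) ~ (N_c / N) ** alpha: each 10x in parameters buys
|         # a roughly constant *fraction* off the loss, never a leap.
|         alpha, n_c = 0.076, 8.8e13
|
|         def loss(n_params: float) -> float:
|             return (n_c / n_params) ** alpha
|
|         for n in [1e9, 1e10, 1e11, 1e12]:
|             print(f"{n:.0e} params -> loss {loss(n):.3f}")
| That diminishing fraction is the resource wall in miniature.)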
| baliex wrote:
| Is anyone else just getting this? <h1>Unavailable
| Due to the UK Online Safety Act</h1>
| omega3 wrote:
| The answer has always been the same: a self-regulated profession
| and trade unions. Instead, the ever-efficient software engineers
| have efficiently dug their own grave. The regulated professions
| aren't going to be affected by the AI because their members
| understand that preservation of job security[0], their pay and
| QOL is more important than automating themselves out of
| existence.
|
| [0] https://www.bma.org.uk/news-and-opinion/medical-degree-
| appre...
| npodbielski wrote:
| Yes, this is so true. But we never thought about that; instead
| we thought about how much smarter, better, and more productive
| we are than other people in similar positions.
|
| Also you forgot the link?
| rambambram wrote:
| The comparison with sociopaths is a good one. On the surface it's
| all human behavior, but if you lift the veil even a little bit it
| becomes clear there's no substance, no conscience, etc.
|
| Read up on Cluster B personality disorders (borderline,
| narcissism, sociopaths/psychopaths) and you see the similarities.
| Love bombing, gaslighting, a shared fantasy, etc. It's very
| interesting and scary at the same time.
| sambuccid wrote:
| Great article. Near the end it talks about where the money goes
| and whether there will be universal basic income. I think those
| paragraphs had an assumption that if models get very smart, all
| the money will go to big tech.
|
| But, thanks to all the companies working on open-weight models,
| I'm starting to think this might no longer happen. Currently
| open-weight models are said to be just months behind the top
| players (and I think we should really try to do what we can to
| keep it that way).
|
| I'm wondering what the predictions would be in the case where AI
| becomes very powerful, but models are also generally available.
|
| Two possibilities come to mind. In the first, all the money no
| longer spent on employment would go towards hardware. New
| hardware manufacturers or innovators could jump in and create a
| bit more employment, but eventually it would probably all
| converge on the only finite resource in the chain, the
| materials/minerals needed for the hardware. Those materials
| might become the new "petrol". It's possible that eventually we
| would have built enough chips to power all the AI we need
| without needing more extraction, but I wouldn't underestimate
| our ability to waste resources when they feel abundant.
|
| In the second possibility, alongside a very powerful open-weight
| LLM, there could be big performance advancements, which would
| make the hardware no longer the bottleneck. But I'm struggling
| to imagine this scenario. Maybe we would all be better off?
| Maybe we would all just be depressed because most people won't
| feel "useful" to society or their peers anymore?
| hn_acc1 wrote:
| Even if hardware is "cheap" and open-source/open-weight models
| are available...
|
| Now what? How does this benefit the average person who just
| wants to have a 9-5 job and go home and hang with family /
| enjoy some hobbies? Not everyone's idea of utopia is "all the
| code I could ever think of writing at my fingertips 24x7".. I
| mean, I sometimes code for fun, etc. But I also do other
| things. I don't WANT to be able to do 25x my current amount of
| work just because. Imagine if you're sick for 2 weeks - now
| you're so far behind you'll never catch up?
|
| The older I get, the less I want all the latest tech
| everywhere. I just want dependable things that work. And
| ESPECIALLY stuff that isn't spying on me.
|
| If AI can replace anyone who today uses a keyboard/mouse/screen
| or does something adjacent (for example, teaching) - what's
| left? If the AI bros are in it for the $$ (many are, I think) -
| what if a few hundred people in the world had, effectively, all
| the $$$?
|
| Will I still be able to retire in a few years, or will my $$ be
| worthless? Will I only be allowed to live/buy food/have medical
| care if I swear allegiance to one of a few tech overlords?
|
| Some of those super-dev-brand-marketing-everything guys will be
| able to spin up a business in a weekend - to what end? What
| products would they sell? They're a prompt away (not man-years)
| from someone else copying their product - so why would I give
| you $10 for it? So you effectively have ZERO $$ from software
| sales. What's the purpose of self-driving cars if no one has $$
| to go anywhere?
|
| Do I think we'll get there? I (mostly) don't - but I also don't
| understand the thinking behind those who DO want to get there
| at all costs.
| MomsAVoxell wrote:
| Every time I hear of a hallucinogenic AI event, I am reminded of
| what happens often with synthesizers, as in the musical variant -
| an instrument, set up for musicality, creativity, and exploration
| which - in a mere glance of a finger tip upon a delicately
| balanced knob - can turn immediately into ear-splitting terror
| and calamity, if one is .. you know .. not too careful.
|
| We have to remember that the results of our prompting are a
| synthesis, formed on the mass psychosis of a humanity which is
| simultaneously capable of being completely and utterly heinous to
| each other, and gloriously noble and kind as well - with nought
| but a stray new word and a thousand old forgotten to keep us all
| together or not.
|
| In any case, all culture is a lie, which only persists in the re-
| telling. The past is a lie, too, somehow, someday, forgotten the
| day nobody remembers it. Hope you make some tunes into the winds
| and they echo on forever. And by you, I mean, not an AI/ML-based
| entity, but rather, the source of all lies, the human soul
| itself.
| lrvick wrote:
| > Machine learning seems likely to further consolidate wealth and
| power in the hands of large tech companies
|
| Only if you let it. You can own the means of production. I self-
| host my daily-driver LLMs on hardware in my garage.
|
| Never given money to an LLM provider and never will. I only do
| work with tools I own.
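|
| For the curious, one common route is llama.cpp's Python bindings
| -- a sketch, where the model path is a placeholder for whatever
| open-weight GGUF file you've downloaded:
|         from llama_cpp import Llama
|
|         # Runs entirely on local hardware; no tokens leave the
|         # garage.
|         llm = Llama(model_path="./models/your-model.gguf",
|                     n_ctx=4096)
|         out = llm("Q: Why self-host an LLM? A:", max_tokens=128)
|         print(out["choices"][0]["text"])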
| ares623 wrote:
| Who produced the hardware though?
___________________________________________________________________
(page generated 2026-04-14 23:01 UTC)