[HN Gopher] Code is cheap. Show me the talk
___________________________________________________________________
Code is cheap. Show me the talk
Author : ghostfoxgod
Score : 139 points
Date : 2026-01-30 12:05 UTC (10 hours ago)
(HTM) web link (nadh.in)
(TXT) w3m dump (nadh.in)
| ekidd wrote:
| In January 2026, _prototype_ code is cheap. _Shitty_ production
| code is cheap. If that's all you need--which is sometimes the
| case--then go for it.
|
| But actually good code, with a consistent global model for what
| is going on, still won't come from Opus 4.5 or a Markdown plan.
| It still comes from a human fighting entropy.
|
| Getting eyes on the code still matters, whether it's plain old AI
| slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart,
| and it does its best.
|
| But I've tried seriously using a number of high-profile, vibe-
| coded projects in the last few weeks. And good grief _what
| unbelievable piles of shit_ most of them are. I spend 5% of the
| time using the vibe-coded tool, and 95% of the time trying to
| uncorrupt my data. I spend plenty of time having Opus try to look
| at the source to figure out what went wrong in 200,000 lines of
| vibe-coded Go. And even Opus is like, "This never worked! It's
| broken! You see, there's a race condition in the daemonization
| code that causes the daemon to auto-kill itself!"
|
| And at that point, I stop caring. If someone can't be bothered to
| even _read_ the code Opus generates, I can't be bothered to
| debug their awful software.
| giancarlostoro wrote:
| AI was never the problem. We have been having a downgrade in
| software in general; AI just amplifies how badly you can build
| software. The real problem is people who just don't care about
| the craft pushing out human slop, whether it be because the
| business says "we can come back to that, don't worry" or what
| have you. At least with AI, my coming back to something is
| right here and right now, not never, or when it causes a
| production-grade issue.
| rewilder12 wrote:
| The original phrase "talk is cheap" is generally used to mean
| "it's easy to say a whole lot of shit, and that talk often has
| no real value." So this clever headline is telling me the code
| has even less value than the talk. That alone betrays a level of
| ignorance I would expect from the author's work. I went to read
| the article and it confirmed my suspicion.
| xnorswap wrote:
| It's directly an inversion of
| https://www.goodreads.com/quotes/437173-talk-is-cheap-show-m...
| joenot443 wrote:
| Did you get very far in? They're referring to a pretty specific
| contextual usage of the phrase (Linus, back in 2000), not the
| adage as a whole.
| wiseowise wrote:
| I read the whole thing, and GP is right. Code is important,
| whether it is generated or handwritten. At least until true
| AGI is here.
| rewilder12 wrote:
| I think I made it to about here haha
|
| > One can no longer know whether such a repository was "vibe"
| coded by a non-technical person who has never written a
| single line of code, or an experienced developer, who may or
| may not have used LLM assistance.
|
| I am talking about what it means to invert that phrase.
| quadrifoliate wrote:
| I think you are hyper-focusing on the headline, which is just a
| joke. The underlying article does not indicate to me that the
| author is ignorant of code, and if you care to look, they seem
| to have a substantial body of public open source contributions
| that proves this quite conclusively.
|
| The underlying point is just that while it was very cognitively
| expensive to back up a good design with good code back in 2000,
| it's much cheaper now. And therefore, making sure the _design_
| is good is the more important part. That's it really.
| jdjeeee wrote:
| And... the design (artistry) aspect is always the toughest.
| So explain to me: where do the returns come from if,
| seemingly obviously, only those who are very well informed
| about their domains or possess general intelligence can
| benefit from this tool?
|
| Personally I don't see it happening. This is the bitter
| reality the LLM producers have to face at some point.
| quadrifoliate wrote:
| > So explain to me: where do the returns come from if,
| seemingly obviously, only those who are very well informed
| about their domains or possess general intelligence can
| benefit from this tool?
|
| I...don't think this is true at all. "The design of the car
| is more important than what specific material you use" does
| not mean that the material is _unimportant_, just that it is
| _relatively_ less important. To put a fake number on it,
| maybe 10% less important.
|
| I think people who have domain knowledge _and_ good coding
| skills will probably benefit the most from this LLM
| producer stuff.
| lo_zamoyski wrote:
| Yes, the original phrase has a specific meaning. But in another
| context, "talk" is more important than the code.
|
| In software development, code is in a real sense less important
| than the understanding and models that developers carry around
| in their heads. The code is, to use an unflattering metaphor, a
| kind of excrement of the process. It means nothing without a
| human interpreter, even if it has operational value. The model
| is _never_ part of the implementation, because software apart
| from human observers is a purely syntactic construct, _at best_
| (even there, I would argue it isn't even that, as syntax
| belongs to the mind/language).
|
| This has consequences for LLM use.
| gipp wrote:
| I see a lot of the same (well thought out) pushback on here
| whenever these kinds of blind hype articles pop up.
|
| But my biggest objection to this "engineering is over" take is
| one that I don't see much. Maybe this is just my Big Tech
| glasses, but I feel like for a large, mature product, if you
| break down the time and effort required to bring a change to
| production, the actual _writing of code_ is like... ten, _maybe_
| twenty percent of it?
|
| Sure, you can bring "agents" to bear on other parts of the
| process to some degree or another. But their value to the design
| and specification process, or to live experiment, analysis, and
| iteration, is just dramatically less than in the coding process
| (which is already overstated). And that's without even getting
| into communication and coordination across the company, which is
| typically the real limiting factor, and in which heavy LLM usage
| almost exclusively makes things worse.
|
| Takes like this seem to just have a completely different
| understanding of what "software development" even means than I
| do, and I'm not sure how to reconcile it.
|
| To be clear, I think these tools absolutely have a place, and I
| use them where appropriate and often get value out of them.
| They're part of the field for good, no question. But this take
| that it's a _replacement for_ engineering, rather than an
| engineering power tool, consistently feels like it's coming
| from a perspective that has never worked on supporting a real
| product with real users.
| techblueberry wrote:
| Yeah, in a lot of ways, my assertion is that "Code is cheap"
| actually means the opposite of what everyone thinks it does.
| Software engineering is even more about the practices we've
| been developing over the past 20 or so years, not less.
|
| Like, Linus's observation still stands. Show me that the code
| you provided does exactly what you think it should. It's easy
| to prompt a few lines into an LLM; it's another thing to know
| exactly the way to safely and effectively change low-level
| code.
|
| Liz Fong-Jones told a story on LinkedIn about this at
| Honeycomb: she got called out for dropping a bad set of PRs
| in a repo, because she didn't really think about the way the
| change was presented.
| patrickmay wrote:
| > Takes like this seem to just have a completely different
| understanding of what "software development" even means than I
| do, and I'm not sure how to reconcile it.
|
| You're absolutely right about coding being less than 20% of the
| overall effort. In my experience, 10% is closer to the median.
| This will get reconciled as companies apply LLMs and track the
| ROI. Over a single year the argument can be made that "We're
| still learning how to leverage it." Over multiple years the
| 100x increase in productivity claims will be busted.
|
| We're still on the upslope of Gartner's hype cycle. I'm curious
| to see how rapidly we descend into the Trough of
| Disillusionment.
| mupuff1234 wrote:
| They're also great for writing design docs, which is another
| significant time sink for SWEs.
| simonw wrote:
| I'm not sure you're actually in disagreement with the author of
| this piece at all.
|
| They didn't say that software engineering is over - they said:
|
| > Software development, as it has been done for decades, is
| over.
|
| You argue that writing code is 10-20% of the craft. That's the
| point they are making too! They're framing the rest of it as
| the "talking", which is now even more important than it was
| before thanks to the writing-the-code bit being so much
| cheaper.
| Imustaskforhelp wrote:
| > Software development, as it has been done for decades, is
| over.
|
| Simon, I guess vb-8448's comment in here is something which
| is really nice (definitely worth a read); they mention how
| much coding has changed from, say, 1995 to 2005 to 2015 to
| 2025.
|
| Directly copying a line from their comment here: For sure, we
| are going through some big changes, but there is no "as it
| has been done for decades".
|
| Recently Economic Media made a relevant video about all of
| this too: How Replacing Developers With AI is Going Horribly
| Wrong [https://www.youtube.com/watch?v=ts0nH_pSAdM]
|
| My point is that this pure mentality of "code is cheap, show
| me the talk" is weird/net negative (even if I may talk more
| than I code), simply because code and coding practices are
| something that I can learn over my experience and hone,
| whereas talk itself suggests to me non-engineers trying to
| create software, which is all great, but without really
| understanding the limitations (that still exist).
|
| So the point I am trying to make is that I feel as if when
| the OP mentioned code is 10-20% of the craft, they didn't
| mean the rest is talk. They meant all the rest is
| architectural decisions and just everything surrounding the
| code. Quite frankly, the idea behind AI/LLMs is to automate
| that too and convert it into pure text, and I feel like the
| average layman _significantly_ overestimates what AI can and
| cannot do.
|
| So the whole notion of "show me the talk", at least as more
| people from non-engineering backgrounds try it, might be net
| negative, with people not really understanding the tech as
| it is; quite frankly, even engineers are having a hard time
| catching up with all that is happening.
|
| I do feel like the AI industry just has too many words
| floating around right now. To be honest, I don't want to
| talk right now; let me use the tool and see how it goes, and
| have a moment of silence. The whole industry is moving
| faster than even the average-JS-framework days.
|
| To have a catchy end to my comment: there is just too much
| talk nowadays. Show me the trust.
|
| I do feel like information has become saturated and we are
| transitioning from the "information" age to the "trust" age.
| Human connections, between businesses and elsewhere, matter
| more now than ever. I wish to support projects which are
| sustainable and fair, driven by passion, and then I might be
| okay with the AI use case, imo.
| mehagar wrote:
| The book _Software Engineering at Google_ makes a distinction
| between software engineering and programming. The main
| difference is that software engineering occurs over a longer
| time span than programming. In this sense, AI tools can make
| programming faster, but not necessarily software engineering.
| wrs wrote:
| My recent experience demonstrates this. I had a couple weeks of
| happily cranking out new code and refactors at high speed with
| Claude's help, then a week of what felt like total stagnation,
| and now I'm back to high velocity again.
|
| What happened in the middle was _I didn't know what I wanted_.
| I hadn't worked out the right data model for the application
| yet, so I couldn't tell Claude what to do. And if you tell it
| to go ahead and write more code at that point, very bad things
| will start to happen.
| chasd00 wrote:
| I've been using LLMs through the web to help with discrete
| pieces of code and scripts for a while now. I'd been putting
| it off (out of fear?), but I finally sat down with Claude
| Code in the console and an empty directory to see what the
| fuss was about. Over a total of about 4 hrs and maybe $15
| pay-as-you-go, it became clear things are drastically
| different now in web dev. I'm not saying changed for good or
| bad, just that things have definitely changed and will never
| go back.
| jatins wrote:
| Did you read the article? The author is one of the more
| thoughtful and least hypey guys you'll find when it comes to
| these things.
| karmasimida wrote:
| Regardless, knowing the syntax of a programming language or
| remembering some library API is a dead business.
| pmg101 wrote:
| I for one am quite happy to outsource this kind of simple
| memorisation to a machine. Maybe it's the thin end of the
| slippery slope? It doesn't FEEL like it is, but...
| negamax wrote:
| I keep on wondering how much of the AI embrace is driven by
| marketing. Yes, it can produce value and cut corners. But it
| seems like Musk's 2016 prediction of self-driving, which
| never happened. With IPO/stock valuations closely tied to
| hype, I wonder if we are all witnessing a giant bubble in
| the making.
|
| How much of this is mass financial engineering rather than
| real value? I'm reading a lot of nudges about how everyone
| should have Google or other AI stock in their
| portfolio/retirement accounts.
| Cthulhu_ wrote:
| No need to wonder, just look at the numbers - investments
| versus revenue are hugely disparate, growth is plateauing.
| dbtablesorrows wrote:
| I realize many are disappointed (especially by technical churn,
| star-based-development JS projects on github without technical
| rigour). I don't trust any claim on the open web if I don't
| know the technical background of the person making it.
|
| However I think - Nadh, ronacher, the redis bro - these are
| people who can be trusted. I find Nadh's article (OP) quite
| balanced.
| Imustaskforhelp wrote:
| When you mention Redis bro, I think you are talking about
| Antirez correct?
| dbtablesorrows wrote:
| yeah, forgot his name.
| xiaoape wrote:
| Maybe we haven't seen much economic value or productivity
| increase given all the AI hype, but I don't think we can
| deny the fact that programming has been through a paradigm
| shift: humans aren't the only ones writing code anymore, and
| the amount of code written by humans, I would say, is
| decreasing.
| lo_zamoyski wrote:
| There's nothing to wonder about. It's obviously marketing.
|
| The whole narrative of "inevitability" is the stock behavior of
| tech companies who want to push a product onto the public. Why
| fight the inevitable? All you can do is accept and adapt.
|
| And given how many companies ask vendors whether their product
| "has AI" without having the slightest inkling of what that even
| means or whether it even makes sense, as if it were some kind
| of magical fairy dust - yeah, the stench of hype is thick
| enough you could cut it with a knife.
|
| Of course, that doesn't mean it lacks all utility.
| funnyfoobar wrote:
| What you are saying may have made sense at the start of
| 2025, when people were still using GitHub Copilot's tab
| autocomplete (at least I did) and just toying with things
| like Cursor, but unsure.
|
| Things have changed drastically now; engineers with these
| tools (like Claude Code) have become unstoppable.
|
| At least for me, I have been able to contribute to codebases
| I was unfamiliar with, even with different tech stacks. No,
| I am not talking about generating AI slop; I have been
| enabled to write principal-engineer-level code unlike
| before.
|
| So I don't agree with the above statement. It's actually
| generating real value, and I have become valuable because of
| the tools available to me.
| leecommamichael wrote:
| > Ignoring outright bad code, in a world where functional code is
| so abundant that "good" and "bad" are indistinguishable,
| ultimately, what makes functional AI code slop or non-slop?
|
| I'm sorry, but this is an indicator for me that the author hasn't
| had a critical eye for quality in some time. There is massive
| overlap between "bad" and "functional." More than ever. The
| barrier-to-entry to programming got irresponsibly low for a time
| there, and it's going to get worse. The toolchains are not in a
| good way. Windows and macOS are degrading both in performance and
| usability, LLVM still takes 90% of a compiler's CPU time in
| unoptimized builds, Notepad has AI (and crashes), simple
| social (mobile) apps are >300 MB downloads/installs when
| they were hovering around a tenth of that, a site like Reddit
| only works on hardware which is only "cheap" in the top 3 GDP
| nations in the world... The list goes on. Whatever we're doing,
| it is not scaling.
| atomicnature wrote:
| This is the "artisanal clothing argument".
|
| I'd think there'll be a dip in code quality (compared to
| human output) initially, owing to the immaturity of the "AI
| machinery". But over time, on a mass scale, we are going to
| see an improvement in the quality of software artifacts.
|
| It is easier to 'discipline' the top 5 AI agents on the
| planet than to try to get a million distributed devs
| ("artisans") to produce high-quality results.
|
| It's like the clothing or manufacturing industry, I think.
| Artisans were able to produce better individual results than
| the average industry machinery, at least initially. But over
| time, industry machinery could match or even beat the
| average artisan, while decisively winning on scale, speed,
| energy efficiency, and so on.
| instig007 wrote:
| > This is the "artisanal clothing argument".
|
| > it is easier to 'discipline' the top 5 AI agents in the
| planet - rather than try to get a million distributed devs
| ("artisans") to produce high quality results.
|
| Your take essentially is "let's live in a shoe box;
| packaging pipelines produce them cheaply en masse, so who
| needs slowpoke construction engineers and architects
| anymore?"
| atomicnature wrote:
| Where have I said engineers/architects aren't necessary? My
| point is that it is easier to get AI to improve than to try
| to improve a million developers. Isn't that a
| straightforward point?
|
| What the role of an engineer will be in the new context, I
| am not speculating on.
| instig007 wrote:
| > My point is that it is easier to get AI to get better
| than try to improve a million developers.
|
| No, it's not. Your whole premise is invalid, both in terms
| of financing the effort and in the AI's ability to improve
| beyond RNG + parroting. The AI code agents produce shoe
| boxes; your claim is that they can be improved to produce
| buildings instead. It won't happen, not until you get rid of
| the "temperature" (newspeak for RNG) and replace it with
| conceptual cognition.
| lo_zamoyski wrote:
| > industry machinery could match the average artisan or even
| beat the average
|
| Whether it could is distinct from whether it will. I'm sure
| you've noticed the decline in the quality of clothing.
| Markets are mercurial and subject to manipulation through
| hype (fast fashion is just a marketing scheme to generate
| revenue, but people bought into the lie).
|
| With code, you have a complicating factor, namely, that LLMs
| are now consuming their own shit. As LLM use increases, the
| percentage of code that is generated vs. written by people
| will increase. That risks creating an echo chamber of sorts.
| atomicnature wrote:
| I don't agree with the limited point about fast
| fashion/enshittification, etc.
|
| Quick check: Do you want to go back to pre-industrial era
| then - when according to you, you had better options for
| clothing?
|
| Personally, I wouldn't want that - because I believe as a
| customer, I am better served now (cost/benefit wise) than
| then.
|
| As to the point about recursive quality decline - I don't
| take it seriously, I believe in human ingenuity, and
| believe humans will overcome these obstacles and over time
| deliver higher quality results at bigger scale/lower
| costs/faster time cycles.
| lo_zamoyski wrote:
| > Quick check: Do you want to go back to pre-industrial
| era then - when according to you, you had better options
| for clothing?
|
| This does not follow. Fast fashion as described is
| historically recent. As an example, I have a cheap t-shirt
| from the mid-90s that is in excellent condition after three
| decades of use. Now, I buy a t-shirt in the same price
| range, and it begins to fall apart in less than a year. This
| decline in the quality of clothing is well known and
| documented, and it is incredibly wasteful.
|
| The point is that this development is the product of
| consumerist cultural presuppositions that construct a
| particular valuation that encourages such behavior,
| especially one that fetishizes novelty for its own sake.
| In the absence of such a valuation, industry would take a
| different direction and behave differently. Companies, of
| course, promote fast fashion, because it means higher
| sales.
|
| Things are not guaranteed to become better. This is the
| fallacy of progress, the notion that the state of the
| world at _t+1_ must be better than it was at _t_. At the
| very least, it demands an account of what constitutes
| "better".
|
| > I don't take it seriously, I believe in human
| ingenuity, and believe humans will overcome these
| obstacles
|
| That's great, but that's not an argument, only a
| sentiment.
|
| I also didn't say we'll necessarily experience a decline,
| only that LLMs are now trained on data produced by human
| beings. That means the substance and content is entirely
| derived from patterns produced by us, hence the
| appearance of intelligence in the results it produces.
| LLMs merely operate over statistical distributions in
| that data. If LLMs reduce the amount of content made by
| human beings, then training on the generated data is
| circular. "Ingenuity" cannot squeeze blood out of a
| stone. Something cannot come from nothing. I didn't say
| there can't be this something, but there does need to be
| a something from which an LLM or whatever can benefit.
| noosphr wrote:
| The issue is that code isn't clothing. It's the clothing
| factory. We aren't artisans sewing clothing. We're production
| engineers deciding on layouts for robots to make clothes most
| efficiently.
|
| I see this type error of thinking all the time. Engineers
| don't make objects of type A, we make functions of type A ->
| B or higher order.
| atomicnature wrote:
| Go concrete. In FAANG engineering jobs now, what % is this
| factory-designer category vs. what % is writing mundane glue
| code, moving data around in CRUD calls, or putting in a
| monitoring metric, etc.?
|
| Once you look at present engineering org compositions,
| you'll see the error in that thinking.
|
| There are other analogy issues in your response which I
| won't nitpick.
| noosphr wrote:
| Production engineers don't design the looms in a weaving
| factory either.
| leecommamichael wrote:
| Except I am not talking about clothing. You are guessing when
| you say "I'd think" based on your comparison to manufacturing
| clothing. Why guess and compare when you have more context
| than that? You're in this industry, right? The commodity of
| clothing is not like the commodity of software at all. Almost
| nothing is, as it doesn't really have a physical form. That
| impacts the economics significantly.
|
| To highlight the gaps in your analogy: machinery still fails
| to match artisan clothing-makers. Despite being relatively
| fit, I've got wide hips. I cannot buy denim jeans that fit
| both my legs _and_ my waist. I either roll the legs up or
| have them hemmed. I am not all that odd, either. One size
| cannot fit all.
| CuriouslyC wrote:
| One issue is that tooling and internals have currently been
| optimized for individual people's tastes. Heterogeneous
| environments make the models spikier. As we shift to
| building more homogenized systems optimized around agent
| accessibility, I think we'll see significant improvements.
|
| Elegantly, agents finally give us an objective measure of what
| "good" code is. It's code that maximizes the likelihood that
| future agents will be able to successfully solve problems in
| this codebase. If code is "bad" it makes future problems
| harder.
| leecommamichael wrote:
| > Elegantly, agents finally give us an objective measure of
| what "good" code is. It's code that maximizes the likelihood
| that future agents will be able to successfully solve
| problems in this codebase. If code is "bad" it makes future
| problems harder.
|
| An analogous argument was made in the '90s to advocate for
| the rising desire for IDEs and OOP languages. "Bad" code
| came to be seen as 1000+ lines in one file, because you
| could simply conjure up the documentation out of context,
| and so separation of concerns slipped all the way from "one
| function, one purpose" to something not far from "one
| function, one file."
|
| I don't say this as pure refusal, but to raise the question
| of what we lose when we make these values changes. At this
| time, we do not know. We are meekly accepting a new mental
| prosthesis with insufficient foresight of the consequences.
| wiseowise wrote:
| >> Remember the old adage, "programming is 90% thinking and 10%
| typing"? It is now, for real.
|
| > Proceeds to write literal books of markdown to get something
| meaningful
|
| >> It requires no special training, no new language or framework
| to learn, and has practically no entry barriers--just good old
| critical thinking and foundational human skills, and competence
| to run the machinery.
|
| > Wrote a paragraph about how it is important to have serious
| experience to understand the generated code prior to that
|
| >> For the first time ever, good talk is exponentially more
| valuable than good code. The ramifications of this are
| significant and disruptive. This time, it is different.
|
| > This time is different bro I swear, just one more model, just
| one more scale-up, just one more trillion parameters, bro we're
| basically at AGI
| ctrlmeta wrote:
| This "Code is cheap. Show me the talk." punchline gets
| overused as bait these days. It is an alright article, but
| that's a lot of words to tell us something we already know.
| It's not just greedy companies riding the AI wave; bloggers
| and influencers are riding it too. They know that if you say
| anything positive or negative about AI with a catchy title,
| it will trend on HN, Reddit, etc.
|
| Also credit where credit is due. Origin of this punchline:
|
| https://nitter.net/jason_young1231/status/193518070341689789...
|
| https://programmerhumor.io/ai-memes/code-is-cheap-show-me-th...
| vb-8448 wrote:
| > Software development, as it has been done for decades, is over.
|
| I'm pretty sure the way I was doing things in 2005 was
| completely different compared to 2015. Same for 2015 and
| 2025. I'm not old enough to know how they were doing things
| in 1995, but I'm pretty sure it was very different compared
| to 2005.
|
| For sure, we are going through some big changes, but there is no
| "as it has been done for decades".
| awesan wrote:
| I don't think things have changed that much in the time I've
| been doing it (roughly 20 years). Tools have evolved and new
| things were added but the core workflow of a developer has more
| or less stayed the same.
| mobiuscog wrote:
| I don't think that's true, at least for everywhere I've
| worked.
|
| Agile has completely changed things, for better or for worse.
|
| Being a SWE today is nothing like 30 years ago, for me. I
| much preferred the earlier days as well, as it felt far more
| engineered and considered as opposed to much of the MVP
| 'productivity' of today.
| lo_zamoyski wrote:
| MVP is not necessarily opposed to engineered and
| considered. It's just that many people who throw that term
| around have little regard for engineering, which they hide
| behind buzzwords like "agile".
| seszett wrote:
| I also wonder what those people have been doing all this
| time... I also have been mostly working as a developer for
| about 20 years and I don't think much has changed at all.
|
| I also don't feel less productive or lacking in anything
| compared to the newer developers I know (including some LLM
| users) so I don't think I am obsolete either.
| neutronicus wrote:
| At some point I could straight-up call functions from the
| Visual Studio debugger Watch window instead of editing and
| recompiling. That was pretty sick.
|
| Yes, I know, Lisp could do this the whole time. Feel free to
| offer me a Lisp job, drive-by Lisp person.
| bryanlarsen wrote:
| 1995 vs 2005 was definitely a larger change than subsequent
| decades; in 1995 most information was gathered through dead
| trees or reverse engineering.
| jsight wrote:
| Yeah, I remember being amazed at the immediate incremental
| compilation on save in Visual Age for Java many years ago.
| Today's neovim users have features that even the most advanced
| IDEs didn't have back then.
|
| I think a lot of people in the industry forget just how much
| change has come from 30 years of incremental progress.
| dist-epoch wrote:
| Long blog posts are cheap. Show me the prompt.
| lioeters wrote:
| Prompts are cheap. Show me the spark of consciousness that
| brings the whole thing to life, that which makes all of it
| worthwhile and meaningful.
| ojr wrote:
| Talk is even cheaper; still, show me the code. People claim
| 10x productivity, which translates to 10x the work done in a
| month, yet even with Opus 4.5 out since November 2025 I
| haven't seen signs of this. AI makes the level of complexity
| of modern systems bearable; it was getting pretty bad
| before, and AI kinda saved us. A non-trivial React app is
| still a pain to write. Also, creating a harness for the
| non-deterministic API that AI provides is also a pain. At
| least we don't have to fight through typing errors or search
| through relevant examples before copying and pasting. AI is
| good at automating typing; the lack of reasoning and the
| knowledge cutoff still make coding very tedious, though.
| program_whiz wrote:
| The best example of this is Claude's own terminal program.
| It apparently renders React at 60fps, translates it into
| ANSI chars, diffs that against the current content of the
| terminal, and does an overwrite...
|
| All to basically mimic what curses can do very easily.
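The render-diff-overwrite loop described in that comment can be sketched roughly as below. This is an illustrative toy under stated assumptions (a frame is a list of text lines, one cell-perfect line per row), not Claude Code's actual implementation; `diff_render` is a hypothetical name.

```python
import sys

# Toy sketch of frame-diffing terminal rendering: keep the previously
# drawn frame, and on each render rewrite only the lines that changed,
# using ANSI escapes to reposition the cursor and clear the line.
def diff_render(prev, curr, out=sys.stdout):
    for row, line in enumerate(curr):
        if row >= len(prev) or prev[row] != line:
            # ESC[{row};1H moves to (row, column 1); ESC[2K erases the line
            out.write(f"\x1b[{row + 1};1H\x1b[2K{line}")
    out.flush()
    return curr  # caller keeps this as the new "previous" frame

frame = diff_render([], ["status: starting", "progress: 0%"])
frame = diff_render(frame, ["status: starting", "progress: 50%"])  # only row 2 redrawn
```

A real implementation also has to handle shrinking frames, scrolling regions, and cell-level (not line-level) diffs, which is roughly the bookkeeping curses already does for you.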
| v3ss0n wrote:
| code is cheap, show me the prompt
| Waterluvian wrote:
| I think if your job is to assemble a segment of a car based on a
| spec using provided tools and pre-trained processes, it makes
| sense if you worry that giant robot arms might be installed to
| replace you.
|
| But if your job is to assemble a car in order to explore what
| modifications to make to the design, experiment with a single
| prototype, and determine how to program those robot arms, you're
| probably not thinking about the risk of being automated.
|
| I know a lot of counter arguments are a form of, "but AI _is_
| automating that second class of job!" But I just really haven't
| seen that at all. What I have seen is a misclassification of the
| former as the latter.
| HorizonXP wrote:
| This is actually a really good description of the situation.
| But I will say, as someone who prided himself on being the
| second type you described, I am becoming very concerned
| about how much of my work was misclassified. It does feel
| like a lot of the work I did in the second class is being
| automated, where maybe previously it had overinflated my
| ego.
| skydhash wrote:
| SWE is more like Formula 1, where each race presents a unique
| combination of track, car, driver, and conditions. You may
| have tools to build the thing, but designing the thing is the
| main issue. Code editors, linters, test runners, and build
| tools are for building the thing. Understanding the
| requirements and the technical challenges is designing the
| thing.
| Waterluvian wrote:
| The other day I said something along the lines of "be
| interested in the class, not the instance," trying to
| articulate a sense of metaprogramming and meta-analysis of a
| problem.
|
| Y is causing Z and we should fix that. But if we stop and
| study the problem, we might discover that X causes the
| class of Y problem so we can fix the entire class, not just
| the instance. And perhaps W causes the class of X issue. I
| find my job more and more being about how far up this
| causality tree can I reason, how confident am I about my
| findings, and how far up does it make business sense to
| address right now, later, or ever?
| altmanaltman wrote:
| is it? I really fail to see the metaphor as an F1 fan. The
| cars do not change that much; only the setup does, based on
| track and conditions. The drivers are fairly consistent
| through the season. Once a car is built and a pecking order
| is established in the season, it is pretty unrealistic to
| expect a team with a slower car to outcompete a team with a
| faster car, no matter what track it is (since the
| conditions affect everyone equally).
|
| Over the last 16 years, Red Bull has won 8 times, Mercedes 7
| times, and McLaren once. Which means, regardless of the
| changes in tracks and conditions, the winners are usually the
| same.
|
| So either every other team sucks at "understanding the
| requirements and the technical challenges" on a clinical
| basis or the metaphor doesn't make a lot of sense.
| Waterluvian wrote:
| I wonder how true this was historically. I imagine race car
| design had periods of rapid, exciting innovation. But I can
| see how a lot of it has probably reached levels of
| optimization where the rules, safety, and technology change
| well within the realm of diminishing returns. I'm sure there's
| still a ridiculous amount of R&D though? (I don't really know
| race car driving.)
| altmanaltman wrote:
| Sure, there are crazy levels of R&D, but that mostly happens
| in the off-season, or when there is a change in regulations,
| which usually happens every 4-5 years. Interestingly, this
| year the entire grid starts with new regs and we don't really
| know the pecking order yet.
|
| But my whole point was that, race to race, it really isn't as
| different for the teams as the comment implied, and I am still
| kind of lost as to how it fits SWE unless you're really
| stretching things.
|
| Even then, most teams don't even make their own engines.
| skydhash wrote:
| Do you really think that rainy Canada is the same as Jeddah,
| or Singapore? And what is the purpose of the free practice
| sessions?
|
| You've got the big bet of designing the car between seasons
| (which is kind of like the big architectural decisions you
| make at the beginning of a project). Then you've got the
| refinement over the season, which is like bug fixes and
| performance tweaks. There are the parts upgrades, which are
| like small features added on top of the initial software.
|
| For the next season, you either improve on the design or
| start from scratch, depending on what you've learned. In the
| first case, it is the new version of the software. In the
| second, that's the big refactor.
|
| I remember that the reserve drivers may do a lot of
| simulations to provide data to the engineers.
| skydhash wrote:
| Most projects don't change that much either. Head over to a
| big open source project, and more often than not you will only
| see tweaks. Being able to make those tweaks requires a very
| good understanding of the whole project (Naur's "Programming
| as Theory Building").
|
| Also, in software we can do big refactors. F1 teams are
| restricted to the version they've put in the first race. But
| we do have a lot of projects that were designed well enough
| that the initial version never changed; everything was just
| built on top of it.
| enlyth wrote:
| A software engineer with an LLM is still infinitely more
| powerful than a commoner with an LLM. The engineer can debug,
| guide, change approaches, and give very specific instructions
| if they know what needs to be done.
|
| The commoner can only hammer the prompt repeatedly with "this
| doesn't work can you fix it".
|
| So yes, our jobs are changing rapidly, but they don't strike
| me as becoming obsolete any time soon.
| Waterluvian wrote:
| I think it's a bit like the Dunning-Kruger effect. You need
| to know what you're even asking for and how to ask for it.
| And you need to know how to evaluate if you've got it.
|
| This actually reminds me so strongly of the Pakleds from Star
| Trek TNG. They knew they wanted to be strong and fast, but
| the best they could do is say, "make us strong." They had no
| ability to evaluate that their AI (sorry, Geordi) was giving
| them something that looked strong, but simply wasn't.
| JoelMcCracken wrote:
| Oh wow this is a great reference/image/metaphor for
| "software engineers" who misuse these tools - "the great
| pakledification" of software
| bambax wrote:
| Agree totally.
| javier_e06 wrote:
| I listened to a segment on the radio where a college teacher
| told their class that it was okay to use AI to assist during a
| test, provided that you:
|
| 1. Declare in advance that AI is being used.
|
| 2. Provide verbatim the question-and-answer session.
|
| 3. Explain why the answer given by the AI is a good answer.
|
| Part of the grade will include grading 1, 2, and 3.
|
| Fair enough.
| bheadmaster wrote:
| This is actually a great way to foster the learning spirit
| in the age of AI. Even if a student uses AI to arrive at an
| answer, they will still need to, at the very least, ask the
| AI for an explanation that teaches them how it arrived at the
| solution.
| jdjeeee wrote:
| No, this is not the way we want learning to work, just as
| students are banned from using calculators until they have
| mastered the foundational thinking.
| stevofolife wrote:
| Calculators don't tell you the steps. AI can.
| simianparrot wrote:
| And it's making that up as well.
| danaris wrote:
| Yeah; it gets steps 1-3 right, 4-6 obviously wrong, and
| then 7-9 subtly wrong such that a student, who needs it
| step by step while learning, can't tell.
| bheadmaster wrote:
| That's a fair point, but AI can do much more than just
| provide you with an answer like a calculator.
|
| AI can explain the underlying process of manual
| computation and help you learn it. You can ask it
| questions when you're confused, and it will keep
| explaining no matter how off the topic you go.
|
| We don't consider tutoring bad for learning - quite the
| contrary, we tutor slower students to help them catch up,
| and advanced students to help them fulfill their
| potential.
|
| If we use AI as an automated, tireless tutor, it may change
| learning for the better. Not that learning was anywhere near
| great before.
| aesch wrote:
| Props to the teacher for putting in the work to thoughtfully
| grade an AI transcript! As I typed that, I wondered whether a
| lazy teacher might then use AI to grade the students' AI
| transcripts.
| chasd00 wrote:
| It's better than nothing, but the problem is that students
| will figure out they can feed step 2 right back to the AI,
| logged in via another session, to get step 3.
| moffkalast wrote:
| That's roughly what we did as well. Use anything you want,
| but in the end you have to be able to explain the process
| and the projects are harder than before.
|
| If we can do more now in a shorter time then let's teach
| people to get proficient at it, not arbitrarily limit them
| in ways they won't be when doing their job later.
| raincole wrote:
| > I know a lot of counter arguments are a form of, "but AI is
| automating that second class of job!"
|
| Uh, it's not the issue. The issue is that there isn't that much
| demand for the second class of job. At least not yet. The first
| class of job is what feeds billions of families.
|
| Yeah, I'm aware of the lump of labour fallacy.
| Waterluvian wrote:
| Discussing what we should do about the automation of labour
| is nothing new and is certainly a pretty big deal here. But I
| think you're reframing/redirecting the intended topic of
| conversation by suggesting that "X isn't the issue, Y is."
|
| It wanders off the path like if I responded with, "that's
| also not the issue. The issue is that people need jobs to
| eat."
| blktiger wrote:
| It depends a lot on the type of industry I would think.
| Buttons840 wrote:
| My job is to make people who have money think I'm indispensable
| to achieving their goals. There's a good chance AI can fake
| this well enough to replace me. Faking it would be good enough
| in an economy with low levels of competition; everyone can
| judge for themselves if this is our economy or not.
| crazylogger wrote:
| You are describing traditional (deterministic?) automation
| before AI. AI systems as general as today's SOTA LLMs will
| happily take on the job regardless of whether the task falls
| into class I or class II.
|
| Ask a robot arm "how should we improve our car design this
| year", it'll certainly get stuck. Ask an AI, it'll give you a
| real opinion that's at least on par with a human's opinion. If
| a company builds enough tooling to complete the "AI comes up
| with idea -> AI designs prototype -> AI robot physically builds
| the car -> AI robot test drives the car -> AI evaluates all
| prototypes and confirms next year's design" feedback loop, then
| theoretically this definitely can work.
|
| This is why AI is seen as such a big deal - it's fundamentally
| different from all previous technologies. To an AI, there is no
| line that would distinguish class I from II.
| figassis wrote:
| I don't think this is the issue "yet". It's that no matter what
| class you are, your CEO does not care. Mediocre AI work is
| enough to give them immense returns and an exit. He's not
| looking out for the unfortunate bag holders. The world has
| always had tolerance for highly distributed crap. See Windows.
| dasil003 wrote:
| This seems like a purely cynical take lacking any substantive
| analysis.
|
| Despite whatever nasty business practices and shitty UX
| Windows has foisted on the world, there is no denying the
| tremendous value that it has brought, including impressive
| backwards compatibility that rivals some of the best
| platforms in computing history.
|
| AI shovelware pump-n-dump is an entirely different short term
| game that will never get anywhere near Microsoft levels of
| success. It's more like the fly-by-nights in the dotcom
| bubble that crashed and burned without having achieved
| anything except a large investment.
| figassis wrote:
| You misunderstand me. While I left Windows over a decade ago,
| I recognize it was a great OS in some respects. I was
| referring to the recent AI-fueled Windows developments and
| ad-riddled experiences. Someone decided that is fine, and you
| won't see orgs or regular users drop it... tolerance.
| mips_avatar wrote:
| Well a lot of managers view their employees as doing the
| former, but they're really doing the latter
| dbtablesorrows wrote:
| OK, fuck it, show me the demo (without staging it). show me the
| result.
| keybored wrote:
| Lots of words to say that "now" communicating in regular human
| language is important.
|
| What soft-skill buzzword will be the next one as the capital
| owners take more of the supposed productivity profits?
| raincole wrote:
| Talk is never cheap. Communicating your thoughts to people
| without the exact same kind of expertise as you is the most
| important skill.
|
| This quote is from Torvalds, and I'm quite sure that if he
| weren't able to write eloquent English no one would know Linux
| today.
|
| Code is important when it's the best medium to express the
| essence of your thoughts. Just like a composer cannot express the
| music in his head with English words.
| CuriouslyC wrote:
| You want a real mind bender? Imagine a universe where Linus's
| original usenet post didn't go viral.
| Imustaskforhelp wrote:
| I don't think Linus is a people person; that is something he
| says about himself in the famous TED interview.
|
| I just re-watched the video (currently halfway through) & I
| feel like you're forgetting that Linux was never _intended_ to
| grow so much. Linus himself, when asked in the video, says he
| never had a moment where he went, oh, this got big.
|
| In fact he talks about when the project was little: how
| grateful he was when it had 10, maybe 100 people working on
| it, and how things only grew over a very long time frame (more
| than 25-30 years? maybe now 35; just searched: 34).
|
| He talks about how he got ideas from other people that he
| couldn't have thought of himself. When he first created the
| project, he just wanted to show the world "look at what I
| did"; he did it both for the end result and for the
| programming itself. Then a friend introduced him to open
| source (free software) and he decided to make it open source.
|
| My point is that it was neither the code nor the talk. Linus
| is the best person to maintain Linux. Why? Because he has been
| passionate about it for over 25 years. I feel like Linus would
| be just as interested in talking about the code and any
| improvements now, with maybe the same vigour as 34 years ago.
| He loves his creation & we love Linux too :)
|
| Another small point I wish to add: if talk were the only
| thing, you would be missing the point, because Linux was
| created because Hurd was getting delayed (so, all talk, no
| code).
|
| Linus himself says that if the Hurd kernel had been released
| earlier, Linux wouldn't have been created.
|
| So the all-talk-no-code Hurd project (which, from what I
| hear, is still in limbo, as now everyone [rightfully?] uses
| Linux) is what led to the creation of the Linux project.
|
| Everyone who hasn't watched Linus's TED interview should
| definitely watch it.
|
| The Mind Behind Linux | Linus Torvalds | TED :
| https://www.youtube.com/watch?v=o8NPllzkFhE
| z0r wrote:
| From the article: Historically, it would take a
| reasonably long period of consistent effort and many iterations
| of refinement for a good developer to produce 10,000 lines of
| quality code that not only delivered meaningful results, but was
| easily readable and maintainable. While the number of lines of
| code is not a measure of code quality--it is often the inverse--a
| codebase with good quality 10,000 lines of code indicated
| significant time, effort, focus, patience, expertise, and often,
| skills like project management that went into it. Human traits.
| Now, LLMs can not only one-shot generate that in seconds,
|
| Evidence, please. This ascribes many qualities to LLM code
| that I haven't (personally) seen at that scale. I think if you
| want an 'easily readable and maintainable' 10k-line codebase
| from an LLM, you need somebody to review its contributions
| very closely, and it probably isn't going to be generated with
| a one-shot prompt.
| Imustaskforhelp wrote:
| Okay, I was writing a comment to simon (I have elaborated
| more there), but I wanted this to be something catchy that
| shows how I feel and that people might discuss too:
|
| Both code and talk are cheap. Show me the trust. Show me how
| I can trust you. Show me your authenticity. Show me your
| passion.
|
| Code used to be the sign of authenticity. That is what's
| changing. You can no longer guarantee that large amounts of
| code are authentic, something which previously used to be the
| case (for the most part).
|
| I have been shouting into the void many times about it but Trust
| seems to be the most important factor.
|
| Essentially, I am speaking from a consumer perspective.
| Suppose you write AI-generated code and deploy it, having
| talked to an AI or around it. Now I can do the same and create
| a project that is sometimes (mostly?) more customized to my
| needs, for free or very cheap.
|
| So you have to justify why you are charging me. I feel that's
| only possible if there is something additional added to the
| value: _trust_. I trust the decisions that you make, and
| personally I trust people and decisions that take me and my
| ideas into account; essentially, actively helping without
| ripping me off. I don't know how to explain this, but the
| thing I hate most is the feeling of getting ripped off. A
| justifiable, sustainable business that is open and transparent
| about the whole deal, about what they get and what I get,
| earns my respect and my trust. Quite frankly, I am not seeing
| many people do that, but hopefully this changes.
|
| I am curious now what you guys of HN think about this & what
| trust means to you in this (new?) ever-changing world.
|
| Y'know, I feel like everything changes all the time, yet at
| the same time nothing really changes. We are still humans & we
| will always be humans, driven by our human instincts. Perhaps
| the community I envision is a more tight-knit online
| community, not complete mega-sellers.
|
| Thoughts?
| api wrote:
| Uhh... how about show me both?
|
| I think that's always been true. The ideas and reasoning process
| matter. So does the end product. If you produced it with an LLM
| and it sucks, it still sucks.
| heliumtera wrote:
| Please no. Talk is cheap.
|
| I hate this trend of using adjectives to describe systems.
|
| Fast. Secure. Sandboxed. Minimal. Reliable. Robust.
| Production grade. AI ready. Lets you _____. Enables you to
| _____.
|
| But I somewhat agree: code is essentially free, you can shit
| out infinite amounts of code. If it's good, then show the
| code. If your code is shit, show the program. If your program
| is shit too, your code is worse, but if you are still pursuing
| an interesting idea (in your eyes), show the prompt instead of
| the generated slop. Or, even better, communicate an elaborated
| version of the prompt.
|
| >One can no longer know whether such a repository was "vibe"
|
| This is absurd. Simply false, people can spot INSTANTLY when the
| code is good, see: https://news.ycombinator.com/item?id=46753708
| bambax wrote:
| > _Code was always a means to an end. Unlike poetry or prose, end
| users don't read or care about code._
|
| Yes and no. Code is not art, but _software_ is art.
|
| What is art, then? Not something that's "beautiful", as beauty is
| of course mostly subjective. Not even something that works well.
|
| I think art is a thing that was made with great care.
|
| It doesn't matter if some piece of software was vibe-coded in
| part or in full, if it was edited, tested, retried enough times
| for its maker to consider it "perfect". Trash is something that's
| done in a careless way.
|
| If you truly love and use what you made, it's likely someone else
| will. If not, well... why would anyone?
| jll29 wrote:
| Well, why do humans read code:
|
| 1. To maintain it (to refactor or extend it).
|
| 2. To test it.
|
| 3. To debug it (to detect and fix flaws in it).
|
| 4. To learn (to get better by absorbing how the pros do it).
|
| 5. To verify and improve it (code review, pair programming).
|
| 6. To grade it (because a student wrote it).
|
| 7. To enjoy its beauty.
|
| These are all I can think of right now, and they are ordered
| from most common to most rare case.
|
| Personally, I have certainly read and re-read SICP code to
| enjoy its beauty (7), perhaps mixed in with a desire to learn
| (4) how to write equally beautiful code.
| jdjeeee wrote:
| Art is expression. The software provides an experience, which
| the artist (the software engineer) expresses in code.
| captain5123 wrote:
| > The real concern is for generations of learners who are being
| robbed of the opportunity to acquire the expertise to objectively
| discern what is slop and what is not. How do new developers
| build the skills that seniors acquired over time? I see my
| seniors having more success at vibe-coding than me. How can I
| short-circuit, for myself, the time they put in?
| monster_truck wrote:
| Feels like this website is yelling at me with its massive text
| size. Had to drop down to -50% to get it readable.
|
| Classical indicators of good software are still very relevant and
| valid!
|
| Building something substantial and material (i.e., not an API
| wrapper + GUI or a to-do list) that is undeniably well made,
| while faster and easier than it used to be, still takes a
| _lot_ of work. Even though you don't have to write a line of
| code, it moves so fast that you now spend 3.5-4 days of your
| work week reading code, using the project, running benchmarks
| and experimental test lanes, reviewing specs and plans,
| drafting specs, and defining features and tests.
|
| The level of granularity needed to get earnestly good results is
| more than most people are used to. It's directly centered at the
| intersection between spec heavy engineering work and writing
| requirements for a large, high quality offshore dev team that is
| endearingly literal in how they interpret instructions. Depending
| on the work, I've found that I average around one 'task' per
| 22-35 lines of code.
|
| You'll discover a new sense of profound respect for the better
| PMs, QA Leads, Eng Directors you have worked with. Months of
| progress happen each week. You'll know you're doing it right when
| you ask an Agent to evaluate the work since last week and it
| assumes it is reviewing the output of a medium sized business and
| offers to make Jira tickets.
| optymizer wrote:
| > because one is hooked on and dependent on the genie, the
| natural circumstances that otherwise would allow for foundational
| and fundamental skills and understanding to develop, never arise,
| to the point of cognitive decline.
|
| After using AI to code, I came to the same conclusion myself.
| Interns and juniors are fully cooked:
|
| - Companies will replace them with AI, telling seniors to use AI
| instead of juniors
|
| - As a junior, AI is a click away, so why would you spend
| sleepless nights painstakingly acquiring those fundamentals?
|
| Their only hope is to use AI to accelerate their own _learning_,
| not their performance. Performance will come after the learning
| phase.
|
| If you're young, use AI as a personal TA, don't use it to write
| the code for you.
| polytely wrote:
| As someone who is sort of a medior programmer, I find it very
| hard to balance: trying to keep up with the advancements in AI
| while not shooting myself in the foot by robbing myself of
| learning experiences.
| passivegains wrote:
| if it helps, that kind of thoughtfulness is how to learn the
| things that matter most. you're already on the right track.
| MyHonestOpinon wrote:
| My latest take on AI assisted coding is that AI tools are an
| amplifier of the developer.
|
| - A good and experienced developer who knows how to organize and
| structure systems will become more productive.
|
| - An inexperienced developer will also be able to produce more
| code but not necessarily systems that are maintainable.
|
| - A sloppy developer will produce more slop.
| pton_xd wrote:
| Code, talk, who cares. Show me the product. If it works and is
| useful I will incorporate it into my life. Ultimately no one
| cares how the sausage is made.
| ares623 wrote:
| Uhh I kinda care? And some people do too? People have given
| software social permission so far. I have a feeling that it's
| about to change. Engineers are thinking too narrowly about the
| effects of LLM assisted coding. They only see the shiny bits
| that benefit them.
| lifetimerubyist wrote:
| > Ultimately no one cares how the sausage is made.
|
| Yeah...now that prompt injection is a fact of life and
| basically unsolvable - we can't really afford this luxury
| anymore.
| w10-1 wrote:
| It might be a mistake to think in terms of production costs.
|
| The real "cost" of software is reliance: what risk do your API
| clients or customers take in relying on you? This is just as true
| for free-as-in-beer software as for SaaS with enterprise SLA's.
|
| In software and in professions, providers have some combination
| of method and qualifications or authority which justifies
| reliance by their clients. Both education and software have
| reduced the reliance on naked authority, but a good track record
| remains the gold standard.
|
| So providers (individuals and companies) have to ask how much of
| their reputation do they want to risk on any new method (AI,
| agile, ...)? Initially, it's always a promising boost in
| productivity. But then...
|
| So the real question is what "Show me" means - for a quick meet,
| an enterprise sale, an enduring human-scale consumer
| dependence...
|
| So, prediction: AI companies and people that can "show me" will
| be the winners.
|
| (Unfortunately, we've also seen competitive advantage accrue to
| dystopian hiding of truth and risk, which would have the same
| transaction-positive effect but shift and defer the burden of
| risk. Let's hope...)
| datatrashfire wrote:
| The premise is wrong. I have seen a number of Claude/Codex
| disasters that never made it to production with clients, yet
| consumed an enormous amount of human time and bandwidth.
|
| Expertise and effort are, and will for the foreseeable future
| continue to be, essential.
|
| Talk, like this, is still cheap.
| whatever1 wrote:
| If you have a solid test environment that lets an agent check
| whether it is right or wrong, I encourage you to do the
| experiment.
|
| Put the agent at the wheel and watch it try ruthlessly to
| pass the tests. These days it will likely manage to pass them
| after 3-5 loops, which I find fascinating.
|
| Close the loop, and try an LLM. You will be surprised.
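|
| The closed loop described above is easy to harness. A minimal
| sketch, with the agent and the test runner as stand-in
| callables (a real setup would call an agent API and something
| like pytest, feeding failures back as the next prompt):

```python
def closed_loop(generate, run_tests, max_iters=5):
    """Retry a code generator until its output passes the tests.

    generate(feedback) -> a candidate (for a real agent: source code)
    run_tests(candidate) -> (passed: bool, failure_report: str)
    The failure report is fed back as context for the next attempt.
    """
    feedback = ""
    for attempt in range(1, max_iters + 1):
        candidate = generate(feedback)
        passed, feedback = run_tests(candidate)
        if passed:
            return candidate, attempt
    return None, max_iters

# Stand-in "agent": pretend each retry improves on the last failure.
def make_stub_agent():
    state = {"n": 0}
    def generate(feedback):
        state["n"] += 1
        return state["n"]
    return generate

agent = make_stub_agent()
# Stand-in test runner: candidates below 3 fail with a report.
run = lambda c: (c >= 3, "" if c >= 3 else f"attempt {c} failed")
```

| With these stubs the loop converges on the third attempt,
| matching the 3-5 loops observed above; with a real agent, all
| the interesting engineering is in run_tests.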
| SLWW wrote:
| I would like articles like this to have a quick "who" and "what
| experience" is talking. I can usually tell the conclusions based
| on experience/skill level regardless, but it would be nice.
|
| Also, that projects page on his website is atrocious; I hate
| to be "that guy", but I don't trust the author's insight since
| "personal projects" seems to include a lot more than just his
| own work; the first several PRs I looked at were all vibed.
|
| I'm not interested in re-implementations of the same wheel over
| and over again telling me and people who know how to write real
| software (have been doing it since I was 12) that we are becoming
| unnecessary bc you can bully an extremely complex machine built
| on a base theory of heuristics abstracted out endlessly
| (perceptually) to re-invent the same specs in slightly different
| flavors.
|
| > 100% human written, including emdashes. Sigh. If you can't
| write without emdashes, maybe you spend too much time with LLMs
| and not enough time reading and learning on your own. Also people
| can lie on the Internet, they do it all the time, and if not then
| I'm doing it right now.
|
| The hubris on display is fascinating.
| overgard wrote:
| I asked Codex to write some unit tests for Redux today. At
| first glance the output looked fine, and I continued on. Then
| I went back to add a test by hand, and after looking more
| closely there were like 50 wtf-worthy things scattered in
| there. Sure, the tests ran, but they were bad in all sorts of
| ways. And this was just writing something very basic.
|
| This has been my experience almost every time I use AI:
| superficially it seems fine, once I go to extend the code I
| realize it's a disaster and I have to clean it up.
|
| The problem with "code is cheap" is that, it's not. GENERATING
| code is now cheap (while the LLMs are subsidized by endless VC
| dollars, anyway), but the cost of owning that code is not. Every
| line of code is a liability, and generating thousands of lines a
| day is like running up a few thousand dollars of debt on a credit
| card thinking you're getting free stuff and then being surprised
| when it gets declined.
___________________________________________________________________
(page generated 2026-01-30 23:00 UTC)