[HN Gopher] The Myth of Developer Obsolescence
       ___________________________________________________________________
        
       The Myth of Developer Obsolescence
        
       Author : cat-whisperer
       Score  : 298 points
       Date   : 2025-05-27 10:33 UTC (12 hours ago)
        
 (HTM) web link (alonso.network)
 (TXT) w3m dump (alonso.network)
        
       | nhumrich wrote:
       | > code is not an asset--it's a liability
       | 
       | Yes, this. 100% this. The goal is for a program to serve a
        | goal/purpose with the least amount of code possible. AI does
       | the exact opposite. Now that code generation is easy, there is no
       | more natural constraint preventing too much liability.
        
         | artrockalter wrote:
         | An answer to the productivity paradox
         | (https://en.m.wikipedia.org/wiki/Productivity_paradox) could be
         | that increased technology causes increased complexity of
         | systems, offsetting efficiency gains from the technology
         | itself.
        
         | westoque wrote:
          | Such a great quote. Mostly true, especially if viewed from a
          | business standpoint. I for one also see code as creative
          | expression, a form of art. I like coding because I can express
          | a solution in a way that is elegant and nice to read for myself
          | and others. A bit shallow, but if you've read code that is
          | written elegantly, you'll know that immediately.
        
           | a_imho wrote:
           | _My point today is that, if we wish to count lines of code,
           | we should not regard them as "lines produced" but as "lines
           | spent": the current conventional wisdom is so foolish as to
           | book that count on the wrong side of the ledger._
           | 
           | https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD103.
           | ..
        
         | coliveira wrote:
          | But if code can be easily replaced, why does it need to be a
          | liability? If something goes wrong, the next generation of
          | "programmers" will ask the AI to generate the code again.
        
           | dakiol wrote:
           | Code can't easily be replaced. It's "soft"ware sure, but why
           | do you think banks are still using Cobol? Why do you think my
           | old company is still running a deprecated version of Zend
           | (PHP framework)?
        
             | skydhash wrote:
              | That's also why the term technical debt caught on. Every
              | decision you enforce with the code you write is something
              | you may need to revert later. And the cost of doing so can
              | be really high. So high that, as in the parent comment's
              | examples, you just don't bother.
        
             | coliveira wrote:
             | Because, until recently, it was very costly to replace the
             | code. AI "programmers" will create completely new code in a
             | few minutes so there's no need to maintain it. If there are
             | new problems tomorrow, they'll generate the code again.
        
               | dakiol wrote:
               | > If there are new problems tomorrow, they'll generate
               | the code again.
               | 
               | What's the difference (from an LLM point of view) between
               | code generated one week ago and code generated now? How
                | does the LLM know where or how to fix the bug? Why
                | didn't the LLM generate the code without that particular
                | bug to begin with?
        
               | coliveira wrote:
               | Tomorrow the "programmer" will tell the AI what the bug
               | was or what change needs to be made and it will generate
               | new code considering this added requirement.
        
               | dakiol wrote:
               | This goes against what you said above:
               | 
               | > Because, until recently, it was very costly to replace
               | the code. AI "programmers" will create completely new
               | code in a few minutes so there's no need to maintain it.
               | If there are new problems tomorrow, they'll generate the
               | code again.
               | 
                | In order for the programmer to know what change needs
                | to be made to fix the bug, the programmer needs to debug
                | the code first. But if code is costly to replace (and
                | we'll use LLMs to regenerate code from scratch in that
                | case), code is also costly to debug (the reason code is
                | costly to replace is that it has grown into an
                | unmaintainable mess... and that's the very same reason
                | debugging is also costly).
                | 
                | Also, it doesn't make sense to ask programmers to debug
                | and tell LLMs to code. Why not tell the LLM to debug
                | directly as well?
               | 
               | So, your scenario of generating "new code" every time
               | doesn't really sustain itself. Perhaps for very tiny
               | applications it could work, but for the vast majority of
               | projects where usually ~100 engineers work, it would lead
               | to an unmaintainable mess. If it's unmaintainable, then
               | no programmer can debug it efficiently, and if no
               | programmer can debug it, no one can tell the LLM to fix
               | it.
        
               | coliveira wrote:
               | > In order for the programmer to know what change needs
               | to be made to fix the bug, the programmer needs to debug
               | the code first.
               | 
                | AIs can debug code too. And the "programmer" doesn't need
                | to know how to fix it, only to describe the error that's
                | happening.
        
               | bpicolo wrote:
               | It's still very costly to replace code. The hard part of
               | migrations isn't usually the churning out of code.
        
           | Bostonian wrote:
            | If you have, for example, a sorting routine coded slightly
            | differently in 50 different files, which one should you use?
           | It's better to have a single file with a sorting routine that
           | you trust.
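            | 
            | E.g., a minimal sketch (the `sorting.py` and `sort_records`
            | names here are hypothetical) of keeping one trusted routine
            | that everything else imports:
            | 
            |     # sorting.py -- the single implementation everyone
            |     # imports, instead of 50 slightly different copies.
            |     def sort_records(records, key=None, reverse=False):
            |         """Stable sort; the one routine we trust."""
            |         return sorted(records, key=key, reverse=reverse)
            | 
            |     # elsewhere: from sorting import sort_records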
        
           | jollyllama wrote:
           | >can be easily replaced
           | 
           | I guess the question is "replaced with what?" How can you be
           | sure it's a 1:1 replacement?
        
         | RankingMember wrote:
          | Reminds me a lot of the old days when people were using MS
         | FrontPage to create websites and the html was like 90% cruft.
        
           | 1shooner wrote:
           | >html was like 90% cruft.
           | 
           | Have you looked at much top-tier website code lately?
        
             | DaSHacka wrote:
             | I visited stallman.org just the other day, yes.
        
       | tbrownaw wrote:
        | The first listed iteration is too late, what about the COmmon
       | Business-Oriented Language?
       | 
       | Also, something being a liability and something having upkeep
       | costs are not the same thing.
        
         | werrett wrote:
         | > something being a liability and something having upkeep costs
         | are not the same thing.
         | 
         | What would your definition of /liability/ be then? 'An ongoing
         | commitment to pay future costs' is a pretty good one.
        
       | dinfinity wrote:
       | > The most valuable skill in software isn't writing code, it's
       | architecting systems.
       | 
       | > And as we'll see, that's the one skill AI isn't close to
       | replacing.
       | 
       | Yet we never 'see' this in the article. It just restates it a few
       | times without providing any proof.
       | 
        | I'd argue the opposite: specifically asking AI to design an
        | architecture _already_ yields better results than what a good
        | 30% of 'architects' I've encountered could ever come up with.
        | It's just that a lot of people using AI don't explicitly ask for
        | these things.
        
         | ta1243 wrote:
          | 90% of the problem an architect has is being able to understand
          | and thus express the requirements and limitations of a system,
          | and understanding how it interacts with everything else.
          | 
          | I.e. writing the prompt, understanding the answers, pushing
          | back, etc.
        
           | theyinwhy wrote:
            | 90% of the problem an architect has is people.
        
             | majkinetor wrote:
             | This!
             | 
              | Unless AI is introduced as a regular coworker and
              | stakeholders want to communicate with it regularly, I
              | don't see this changing anytime soon.
              | 
              | Reverse engineering stuff when non-cooperative stakeholders
              | dominate the project has its limits too, and requires "god
              | mode" access to internal infrastructure, which is not
              | something anybody gets.
        
         | mrweasel wrote:
          | That's because a large percentage of "architects" aren't really
          | all that great. We interviewed a candidate that didn't know
          | much of anything, in terms of actually operating IT
          | infrastructure, but in their mind that didn't really matter
          | because they were looking for more of an architect's role and
          | didn't want to touch things like terminals, YAML, databases and
          | all that stuff. Completely serious, they just sat there and
          | told us that they really just wanted to work in diagram tools
          | and maybe Excel....
         | 
         | Architects are like managers, it's way harder than people
         | imagine and very few people can actually do the work.
        
           | whstl wrote:
           | Yep. It's indeed like "management", where people expect to
           | just slide into it as a reward for staying at a company for a
           | few extra years.
           | 
           | Also I hate that "architect" is used as a synonym of "cloud
           | architect". There is much more to software architecture than
           | cloud.
        
             | mrweasel wrote:
             | Precisely. I noticed that a previous intern of mine is now
             | an "Enterprise Architect". He's a smart dude, no doubt
             | about it, but from zero to architect in 3.5 years? There's
             | no way this person has the experience to be an architect.
             | That is a promotion because the company either needed
             | someone with that title, or because he was "paid off" to
              | stay on board.
        
           | exceptione wrote:
           | > We interviewed a candidate ... didn't want to touch things
           | like terminals ... they where looking for more of an
           | architects role
           | 
           | I don't know whether you were actually looking for an
           | architect? There are different types of architects. For
           | example, you have got enterprise architects that indeed will
           | never touch yaml, you have got solution architects who have a
            | more narrow focus, and you have got engineers with an
            | overfull plate of work and team responsibilities. The latter
            | are better called lead engineers. In my experience, being a
            | good (lead) engineer doesn't make one a good architect, but
            | companies try to make their job postings more sexy by titling
            | them with "architect". One would imho do better by taking
            | lead engineers seriously in their own right.
           | 
           | Architects in general need to be very skilled in abstract
           | reasoning, information processing & conceptual thinking.
           | However, the people hiring for an "architect" often look for
           | a 2x/nx engineer, who is able to code x widgets per hour.
           | That is a stupid mismatch.
           | 
           | I would agree however that someone without previous practical
           | experience would be rather unsuitable, especially "below"
           | enterprise architect level.
        
         | raincole wrote:
         | This is what wishful thinking looks like. The author is
         | probably proud of their architecting skill so they think it's
         | irreplaceable. If they were good at, say, optimization, they
         | would think optimization is irreplaceable.
        
           | dist-epoch wrote:
            | Or as Marc Andreessen said, being a VC is the last job AIs
            | will be able to replace :)))
           | 
           | > Andreessen said that venture capital might be one of the
           | few jobs that will survive the rise of AI automation. He said
           | this was partly because the job required several "intangible"
           | skills and was more of an art than a science.
           | 
           | https://fortune.com/article/mark-andreessen-venture-
           | capitali...
        
             | verbify wrote:
             | > the job required several "intangible" skills and was more
             | of an art than a science.
             | 
             | I've seen a lot more ai-generated art than ai-generated
             | science.
        
             | chasing wrote:
             | No, AI has proven quite adept at generating self-
             | aggrandizing bullshit.
        
           | dgb23 wrote:
           | I think that's just in the nature of these tools. They are
           | better at doing things you can't do (most of the things), but
           | worse at the things you can do (very few things).
           | 
           | Ex: If you're a lazy typist like most, then a code assistant
           | can speed you up significantly, when you use it as an
           | autocomplete plus. But if you're a very practiced vim user
           | and your fingers fly over the keyboard, or a wizard lisp
           | hacker who uses structural editing, then a code assistant
           | slows you down or distracts you even.
        
         | hcfman wrote:
          | Architects, what are they?
          | 
          | Ohhh, you mean PowerPoint writers. Sorry, lost you for a
          | minute there.
        
         | jandrewrogers wrote:
         | I'd frame it a bit differently. LLMs are pretty good at
         | generating the midwit solution to problems because that is the
         | bulk of the available training corpus. It is a generic "best
         | practices" generator. You would _expect_ it to be better than a
         | third of human architects almost by definition.
         | 
         | On the other hand, they are pretty poor at reasoning from first
         | principles to solve problems that are far outside their
         | training corpus. In some domains, like performance-sensitive
         | platforms, the midwit solution is usually the wrong one and you
         | need highly skilled people to do the design work using context
         | and knowledge that isn't always available to LLMs. You could
         | probably use an LLM to design a database kernel but it will be
         | a relatively naive one because the training data isn't
         | available to do anything close to the state-of-the-art.
        
         | nekochanwork wrote:
         | > Yet we never 'see' this in the article. It just restates it a
         | few times without providing any proof.
         | 
         | I'm honestly shocked by the number of upvotes this article has
         | on Hacker News. It's extremely low quality. It's obviously
         | written with ChatGPT. The tells are:
         | 
         | (1) Incorrect technology "hype cycle". It shows "Trigger,
         | Disillusionment, Englightnment Productivity". It's missing the
         | very important "Inflated Expectations".
         | 
         | (2) Too many pauses that disrupt the flow of ideas:
         | 
         | - Lots of em-dashes. ChatGPT loves to break up sentences with
         | em-dashes.
         | 
         | - Lots of short sentences to sound pithy and profound. Example:
         | "The executives get excited. The consultants circle like
         | sharks. PowerPoint decks multiply. Budgets shift."
         | 
         | (3) "It isn't just _X_ , it's _X+1_ ", where _X_ is a normal
         | descriptor, where _X+1_ is a more emphatic rephrasing of _X_.
         | ChatGPT uses this construct a lot. Here are some from the
         | article:
         | 
         | - "What actually happens isn't replacement, it's
         | transformation"
         | 
         | - "For [...] disposable marketing sites, this doesn't matter.
         | For systems that need to evolve over years, it's catastrophic."
         | 
         | Similarly, "It's not X, it's inverse-X", resulting in the same
         | repetitive phrasing:
         | 
         | - "The NoCode movement didn't eliminate developers; it created
         | NoCode specialists and backend integrators."
         | 
         | - "The cloud didn't eliminate system administrators; it
         | transformed them into DevOps engineers"
         | 
         | - "The most valuable skill in software isn't writing code, it's
         | architecting systems."
         | 
         | - "The result wasn't fewer developers--it was the birth of
         | "NoCode specialists""
         | 
         | - "The sysadmins weren't eliminated; they were reborn as DevOps
         | engineers"
         | 
         | - "the work didn't disappear; it evolved into infrastructure-
         | as-code,"
         | 
         | - "the technology doesn't replace the skill, it elevates it to
         | a higher level of abstraction."
         | 
         | - "code is not an asset--it's a liability."
         | 
         | ---------
         | 
          | I wish people would stop using ChatGPT. Every article is
          | written in the same wordy, try-too-hard-to-sound-profound
          | ChatGPT mannerisms.
         | 
         | Nobody writes in their own voice anymore.
        
       | ahofmann wrote:
       | > It's architecting systems. And that's the one thing AI can't
       | do.
       | 
        | Nobody knows what the future will look like, but I would change
        | that sentence slightly:
       | 
       | "It's architecting systems. And that's the one thing AI can't
       | _yet_ do. "
        
       | exodust wrote:
       | > " _For agency work building disposable marketing sites, this
       | doesn 't matter_"
       | 
       | And the disdain for marketing sites continues. I'd argue the
       | thing that's in front of your customer's face isn't "disposable"!
       | When the customer wants to tinker with their account, they might
       | get there from the familiar "marketing site". Or when potential
       | customers and users of your product are weighing up your payment
       | plans, these are not trivial matters! Will you really trust
       | Sloppy Jo's AI in the moment customers are reaching for their
       | credit cards? The 'money shot' of UX. "Disposable"? "Doesn't
       | matter"? Pffff!
        
         | whstl wrote:
         | I mean... I have been on the receiving end (as the one
         | responsible for "installing it") of websites built by agencies
         | and "disposable" is an ok description. Some of those websites
         | don't really have to be maintained, so they're built without
         | care or documentation, by a team that often has low pay and
         | high-rotation, and then given to a customer with the
         | expectation it will be rebuilt after a few years anyway.
         | 
         | I don't think "disposable" is being used here as a pejorative
         | adjective for them. They are important, but they are built in a
         | special way indeed.
        
       | whstl wrote:
       | _> For agency work building disposable marketing sites_
       | 
       | Funny, because I did some freelancing work fixing disposable
        | vibe-coded landing pages recently. And if there's one thing we
        | can count on, it's that the biggest control-freaks will always
        | have that one extra stupid requirement that completely befuddles
        | the AI and pushes it into making an even bigger mess, and then
        | I'll have to come fix it.
       | 
       | It doesn't matter how smart the AI becomes, the problems we face
       | with software are rarely technical. The problem is always the
       | people creating accidental complexity and pushing it to the next
       | person as if it was "essential".
       | 
       | The biggest asset of a developer is saying "no" to people.
       | Perhaps AIs will learn that, but with competing AIs I'm pretty
       | sure we'll always get one or the other to say yes, just like we
       | have with people.
        
         | jvanderbot wrote:
          | There's a whole book about this called Peopleware. It's why
          | I'm fond of saying "may all your problems be technical".
         | 
         | It's just never been that hard to solve technical problems with
         | code except for an infinitesimal percentage of bleeding edge
         | cases.
        
           | whstl wrote:
           | Yep. With a good enough team, even the technical problems are
           | almost 100% caused by people issues.
        
             | osigurdson wrote:
             | I've heard this many times. It isn't clear what it means
             | however. If nearly 100% of problems are "people problems",
             | what are some examples of "people solutions"? That may help
             | clarify.
        
               | whstl wrote:
               | Keep in mind I said "with a good enough team".
               | 
               | "People problems" are problems mainly caused by lack of
               | design consistency, bad communication, unclear vision,
               | micromanagement.
               | 
               | A "people solution" would be to, instead of throwing
               | crumbs to the developers, actually have a shared vision
               | that allows the developers/designers/everyone to plan
               | ahead, produce features without fear (causing over-
               | engineering) or lack of care (causing under-engineering).
               | 
               | Even if there is no plan other than "go to market ASAP",
               | everyone should be aware of it and everyone should be
               | aware of the consequences of swerving the car 180 degrees
               | at 100km/h.
               | 
               | Feedback both ways is important, because if you only have
               | top-down communication, the only feedback will be
               | customer complaints and developers getting burned out.
        
               | Espressosaurus wrote:
               | I would generalize micromanagement to "bad management". I
                | have been empowered to do things, but what I was doing
                | was attempting to clean up, in software, hardware that
                | sucked because it was built in-house instead of using the
                | well-made external part, and on a schedule that didn't
                | permit figuring out how to build the thing right.
               | 
               | 100% management-induced problems.
        
               | whstl wrote:
                | Another issue is when micromanagement gets in the way of
                | transmitting important information; it becomes more than
                | a nuisance :/
        
               | ghaff wrote:
               | Building things in house rather than buying something
               | that is close enough is hardly uniquely a management
               | failing.
        
               | jvanderbot wrote:
               | The book mentioned, "Peopleware" is probably your best
               | resource, it's short. But sibling comment is also right
               | on.
        
             | bcrosby95 wrote:
             | Technical problems are what cause the people problems
             | though. You can't completely blame one or the other.
        
               | whstl wrote:
               | People problems happen way before the first line of code
               | is written, even when there's not even a single engineer
               | in the vicinity, even when the topic is not remotely
               | related to engineering.
        
         | xnickb wrote:
         | This is actually a very good point. Although it's indeed not
         | hard to imagine AI being far better at estimating the
         | complexity of a potential solution and warning the user about
         | it.
         | 
         | For example in chess AI is already far better than humans.
         | Including on tasks like evaluating positions.
         | 
         | Admittedly, I use "AI" in a broad sense here, despite the
         | article being mostly focused on LLMs.
        
         | Lutger wrote:
         | Between "no" and "yes sure" also lie 50 shades of "is this what
         | you meant?". For example, this older guy asked me to create a
         | webpage where people could "download the database". He meant a
          | very limited csv export of course. I am wondering if ChatGPT
          | would have understood his prompts, and this was one of the
          | more obvious ones to me.
        
           | whstl wrote:
           | Definitely. Saying no is not really denying, it's
           | negotiating.
        
             | blooalien wrote:
             | > Saying no is not really denying, it's negotiating.
             | 
             | Sometimes. I have often had to say "no" because the
             | customer request is genuinely impossible. Then comes the
             | fun bit of explaining _why_ the thing they want simply
              | cannot exist, because often they'll try "But what if you
             | just ... ?" - "No! It doesn't work that way, and here's
             | why..."
        
               | whstl wrote:
               | Argh.
               | 
                | I had to explain recently that `a * x !== b * x` when
                | `a !== b`... it is infuriating hearing "but the result
                | is the same in this other competitor" coupled with
                | "maybe the problem here is you're not knowledgeable
                | enough to understand".
        
               | cogman10 wrote:
               | Ah, I see you've worked on financial software as well ;)
               | 
               | We've definitely had our fair share of "IDK what to tell
               | you, those guys are mathing wrong".
               | 
               | TBF, though, most customers are pretty tolerant of
               | explainable differences in computed values. There's a
               | bunch of "meh, close enough" in finance. We usually only
               | run into the problem when someone (IMO) is looking for a
               | reason not to buy our software. "It's not a perfect
               | match, no way we can use this" sort of thing.
        
               | whstl wrote:
               | The super-advanced math that finance folks would throw at
               | us was daunting.
               | 
               | At this particular job I used the plus, the star
               | (multiplication) and once I even got to use the minus.
               | 
               | There's a legend going around that a friend of mine has
               | used division, but he has a PhD.
        
               | ericrosedev wrote:
               | "I can explain it to you, but I can't understand it for
               | you"
        
             | bombcar wrote:
             | I call it "nogotiating" - the problem is that inexperienced
             | devs emphasize the "no" part and that's all the client
             | hears.
             | 
             | What you have to do is dig into the REASONS they want X or
             | Y or Z (all of which are either expensive, impossible, or
             | both) - then show them a way to get to their destination or
             | close to it.
        
               | pixl97 wrote:
               | "Yes, I can do that, but I don't think downloading the
               | entire database is exactly what you want"
        
             | rawgabbit wrote:
              | A pet peeve of mine is the word negotiation in the context
             | of user requirements.
             | 
             | In the business user's mind, negotiation means the
              | developer can do X but the developer is lazy. Usually, it
              | is that requirement X doesn't make any sense because a
              | meeting was held where the business decided to pivot in a
              | new direction and settled on a new technical solution. The
             | product owner simply gives out the new requirement without
             | the context. If an architect or senior developer was
             | involved in the meeting, they would have told the business
             | you just trashed six months of development and we will now
             | start over.
        
           | immibis wrote:
           | I think he actually had a clearer vision of the requirements
           | than you do. In web dev jargon land (and many jargon lands)
           | "the database" means "the instance of Postgres" etc.
           | 
           | But fundamentally it just means "the base of data", the same
           | way "a codebase" doesn't just mean a Git repository.
           | "Downloading the database" just means there's a way to
           | download all the data, and CSV is a reasonable export format.
           | Don't get confused into thinking it means a way to download
           | the Postgres data folder.
        
           | pempem wrote:
           | OMG this reminds me of a client (enterprise) I had who had
           | been pushed into the role of product and he requested we
           | build a website that "lets you bookmark every page"
        
         | mjlangiii wrote:
         | I agree, a stark difference between AI and me is knowing when
         | to say, "no", and/or to dig deeper for the unspoken
         | story/need/value.
        
         | fridder wrote:
         | Other than saying no, the other asset is: "I see where this is
         | going and how the business is going so I better make it
         | flexible/extensible in X way so the next bit is easier."
        
           | pseudocomposer wrote:
           | This does assume the direction you see is accurate, which I'd
           | argue heavily depends on communication skills.
        
         | brookst wrote:
         | Excellent reformulation of the classic "requirement bug":
         | software can be implemented perfectly, but if the requirements
         | don't make sense _including_ accounting for the realities of
         | the technical systems, mayhem ensues.
         | 
         | I think AI will get there when it comes to "you asked for a gif
         | but they don't support transparency", but I am 100% sure people
         | will continue to write "make the logo a square where every
         | point is equidistant from the center" requirments.
         | 
         | EDIT: yes jpg, not gif, naughty typo + autocorrect
        
           | staunton wrote:
           | > people will continue to write "make the logo a square where
           | every point is equidistant from the center" requirments.
           | 
           | Why wouldn't the AI deal with that the same way human
           | developers do? Follow up with questions, or iterate
           | requirements?
        
             | AlotOfReading wrote:
             | Go ahead and try it with your favorite LLMs. They're too
             | deferential to push back consistently or set up a dialectic
             | and they struggle to hold onto lists of requirements
             | reliably.
        
               | staunton wrote:
               | I'm not saying they can do it today. I'm saying there's
               | no ruling out they might be able to do it soon.
        
               | Avicebron wrote:
               | That'll be rough, when you go on your computer and you
               | tell it to do a thing and it just says "no, ur dumb"....
               | not sure we want that
        
               | fzzzy wrote:
               | Doesn't matter if you want it or not, it's going to be
               | available, because only an llm that can do that will be
               | useful for actual scientific discovery. Individuals that
               | wish to do actual scientific discovery will know the
               | difference, because they will test the output of the llm.
               | 
               | [edit] In other words, llms that lie less will be more
               | valuable for certain people, therefore llms that tell you
               | when you are dumb will eventually win in those circles,
               | regardless of how bruising it is to the user's ego.
        
               | codr7 wrote:
               | And how exactly is it going to learn when to push back
               | and when not to? Those discussions don't generalize well
               | imo. Randomly saying no isn't very helpful.
        
               | Swizec wrote:
               | > I'm saying there's no ruling out they might be able to
               | do it soon
               | 
               | Even experienced engineers can be surprisingly bad at
               | this. Not everyone can tell their boss "That's a stupid
               | requirement and here's why. Did you actually mean ..."
               | when their paycheck feels on the line.
               | 
               | The higher you get in your career, the more that
               | conversation is the job.
        
               | djeastm wrote:
               | Well most know better than to put it that way, I'd think.
               | If they don't then that's something they can work on.
        
               | ToucanLoucan wrote:
                | Also, once AIs also tell them their ideas are
                | stupid/nonsensical and how they should be improved,
                | they'll stop using it. ChatGPT will never not be
                | deferential because being deferential is its main
                | "advantage" for the type of person who's super into it.
               | 
               | They just want a yes-bot.
        
               | sokoloff wrote:
               | > The higher you get in your career, the more that
               | conversation is the job.
               | 
               | I think that the more you find ways to (productively)
               | make that conversation your job, the higher you get in
               | your career.
        
               | palmotea wrote:
               | > I'm not saying they can do it today. I'm saying there's
               | no ruling out they might be able to do it soon.
               | 
               | There's also no "ruling out" the Earth will get zapped by
               | a gamma-ray burst tomorrow, either. You seem to be
               | talking about something that, if done properly, would
               | require AGI.
               | 
               | You can do anything with AI. Anything at all. The only
               | limit is yourself.
        
               | mandevil wrote:
               | The infinite is possible with AI. The unattainable is
               | unknown with AI.
        
               | ayrtondesozzla wrote:
               | There's no ruling out a flying spaghetti monster being
               | orbited by a flying teacup floating in space on the dark
               | side of Pluto either, but we aren't basing our species'
               | survival on the chance that we might discover it there
                | soon.
        
               | lambda wrote:
               | This is a terrible attitude which unfortunately is all
               | too common in the industry right now: evaluating AI/ML
               | systems not based on what they can do, but what they
               | hypothetically might be able to do.
               | 
               | The thing is, with enough magical thinking, of course
                | they could do anything. So that lets unscrupulous
               | salesmen sell you something that is not actually
               | possible. They let you do the extrapolation, or they do
               | it for you, promising something that doesn't exist, and
               | may never exist.
               | 
               | How many years has Musk been promising "full self
               | driving", and how many times recently have we seen his
               | cars driving off the road and crashing into a tree
               | because it saw a shadow, or driving into a Wile E Coyote
               | style fake painted tunnel?
               | 
               | While there is some value in evaluating what might come
               | in the future when evaluating, for example, whether to
               | invest in an AI company, you need to temper a lot of the
               | hype around AI by doing most of your evaluation based on
               | what the tools are currently capable of, not some
               | hypothetical future that is quite far from where they
               | are.
               | 
               | One of the things that's tricky is that we have had a
               | significant increase in the capability of these tools in
               | the past few years; modern LLMs are capable of something
               | far better than two or three years ago. It's easy to
               | think "well, what if that exponential curve continues?
               | Anything could be possible."
               | 
               | But in most real life systems, you don't have an
               | unlimited exponential growth, you have something closer
               | to a logistic curve. Exponential at first, but it
               | eventually slows down and approaches a maximum
               | asymptotically.
               | 
               | Exactly where we are on that logistic curve is hard to
               | say. If we still have several more years of exponential
               | growth in capability, then sure, maybe anything is
               | possible. But more likely, we've already hit that
               | inflection point, and continued growth will go slower and
               | slower as we approach the limits of this LLM based
               | approach to AI.
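                | 
                | (A quick numeric sketch of that logistic shape, with
                | arbitrary constants K, k, t0: early values grow roughly
                | exponentially, later ones flatten toward the ceiling K:)
                | 
                |     import math
                |     K, k, t0 = 100.0, 1.0, 10.0
                |     for t in range(0, 21, 4):
                |         # logistic: K / (1 + e^(-k*(t - t0)))
                |         print(t, round(K / (1 + math.exp(-k * (t - t0))), 2))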
        
               | purple_basilisk wrote:
               | This. The most important unknown about AI is when will it
               | plateau.
        
               | HPMOR wrote:
               | Why will it plateau?
        
               | davidcbc wrote:
               | Because every technology does eventually
        
               | BobaFloutist wrote:
               | Because everything does, eventually.
        
               | SketchySeaBeast wrote:
               | Given the shift in focus from back and forth interaction
               | with the AI to giving it a command then waiting as it
                | reads a series of self-generated inputs and outputs, I feel
               | like we're at that inflection point - the prompts might
               | appear to be getting smarter because it can do more, but
               | we're just hiding that the "more" it's doing is having a
               | long, hidden conversation that takes a bunch more time
               | and a bunch more compute. This whole "agentic" thing is
               | just enabling the CPU to spin longer.
        
               | hansmayer wrote:
               | 100% this. Actually a lot of (younger) folks don't know
               | that the current LLM "revolution" is the tail end of the
               | last ~20 years of ML developments. So yeah, how many more
               | years? In a way, looking at the costs and complexity to
               | run them, it looks a bit like building huge computers and
                | TVs with vacuum tubes in the late 1940s. Maybe there
                | is going to be a transistor moment here and someone
                | recognises we already have _deterministic_ algorithms
               | we could combine for deterministic tasks, in place of the
               | Slop-Machines...? I dont mind them generating bullshit
               | videos and pictures, as much as the potential they have
               | to completely screw up the quality of software in
               | completely new ways.
        
               | dontlikeyoueith wrote:
               | This is magical thinking. Please stop.
        
               | goatlover wrote:
               | But why is a manager or customer going to spend their
               | valuable time baby sitting an LLM until it gets it right,
               | when they can pay an engineer to do it for them? The
               | engineer is likely to have gained expertise prompting AIs
               | and checking their results.
               | 
                | This is what people never understand about no-code
               | solutions. There is still a process that takes time to
               | develop things, and you will inevitably have people
               | become experts at that process who can be paid to do it
               | much better and quicker than the average person.
        
               | pempem wrote:
               | Exactly this!
               | 
               | It applies outside of tech too. Even if you can make
                | potato pavé at home, having it at a restaurant by someone
                | who has made it thousands of times every day is
                | preferred. Especially when you want a specific alteration.
        
               | tayo42 wrote:
               | I just tried this with claude 4 and ascii art.
               | 
                | The output was verbose, but it tried and then corrected
                | me.
               | 
               | > Actually, let me clarify something important: what
               | you've described - "every point equidistant from the
               | center" - is actually the definition of a circle, not a
               | square!
               | 
               | here's the prompt
               | 
               | > use ascii art, can you make me an image of a square
               | where every point is equidistant from the center?
        
               | AlotOfReading wrote:
               | I interpreted the OP as referring to a more general
               | category of "impossible" requirements rather than using
               | it as a specific example.
               | 
               | If we're just looking for clever solutions, the set of
                | equidistant points in the Manhattan metric is a square.
               | No clarifications needed until the client inevitably
               | rejects the smart-ass approach.
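                | 
                | (A quick brute-force check of that in Python: points at
                | a fixed Manhattan distance from the origin trace out a
                | diamond, i.e. a square rotated 45 degrees:)
                | 
                |     r = 3
                |     pts = [(x, y) for x in range(-r, r + 1)
                |                   for y in range(-r, r + 1)
                |                   if abs(x) + abs(y) == r]
                |     print(pts)  # corners at (3,0), (0,3), (-3,0), (0,-3)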
        
           | sceptic123 wrote:
           | Is this trolling or a suggestion that AI doesn't understand
           | that transparent gif is 100% a thing?
        
             | Izkata wrote:
             | I think they just got gif mixed up with jpg. The letters
             | have the same pattern in qwerty and one could have
             | autocorrected to the other.
        
               | brookst wrote:
               | Indeed, autocorrect fail :(
        
             | MrDarcy wrote:
             | The original GIF format did not support transparency. 89A
             | added support for fully transparent pixels. There is still
             | no support for alpha channels, so a partially opaque drop
             | shadow is not supported for example.
             | 
             | Depends on what "transparent" means.
        
           | anonymars wrote:
           | I believe the example you're looking for is, "seven
           | perpendicular red lines":
           | https://www.youtube.com/watch?v=BKorP55Aqvg
           | 
           | The task has been set; the soul weeps
        
             | oniony wrote:
             | Just draw them in seven dimensional space.
        
             | Nition wrote:
             | https://www.youtube.com/watch?v=B7MIJP90biM
        
           | fmbb wrote:
           | Mid 1800s computing classic.
           | 
           | > On two occasions I have been asked, -- "Pray, Mr. Babbage,
           | if you put into the machine wrong figures, will the right
           | answers come out?" In one case a member of the Upper, and in
           | the other a member of the Lower, House put this question. I
           | am not able rightly to apprehend the kind of confusion of
           | ideas that could provoke such a question.
           | 
           | From Passages from the Life of a Philosopher (1864), ch. 5
           | "Difference Engine No. 1"
        
             | immibis wrote:
             | They were just asking if he cheated.
             | 
             | You put in 2+2, and 4 comes out. That's the right answer.
             | 
             | If you put in 1+1, which are the wrong figures for the
             | question of 2+2, will 4 still come out? It's easy to make a
             | machine that always says 4.
        
             | mjburgess wrote:
             | Babbage was one of those smug oblivious types. The
             | confusion was his alone, and is exactly the same as that
             | sort of confusion which arises when an engineer claims to
             | have built a "thinking machine" but has no notion of what
             | thought is, has never made any study of the topic, and
             | nevertheless claims to have produced it.
             | 
             | They are either asking: is the machine capable of genuine
             | thought, and therefore capable of proactively spotting an
             | error in the input and fixing it? Or, they were asking: how
             | sensitive is the output to incorrect permutations in the
              | input (i.e., how reliable is it)?
             | 
             | I sometimes take them to be asking the former question, as
             | when someone asks, "Is the capital of France, paree?" and
             | one responds, "Yes, it's spoken by the french like paree,
             | but written Paris"
             | 
             | But they could equally mean, "is the output merely a
             | probable consequence of the input, or is the machine
             | deductively reliable"
             | 
             | Babbage, understanding the machine as a pure mechanism is
             | oblivious to either possibility, yet very much inclined to
             | sell it as a kind of thinking engine -- which would
             | require, at least, both capacities
        
               | HideousKojima wrote:
               | I'm not aware of Babbage ever claiming the difference
               | engine (nor the analytical engine) were capable of
               | thought. Frankly it sounds like you pulled an imagined
               | argument from an imagined Babbage out of your ass to try
               | and score points again.
        
           | ghssds wrote:
           | > a square where every point is equidistant from the center
           | 
           | Given those requirements, I would draw a square on the
           | surface of a sphere, making each point of the square
           | equidistant from the sphere's center.
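            | 
            | (A sketch of that trick in Python: radially project the
            | corners of a planar square onto a sphere of radius R; by
            | construction every projected point is exactly R from the
            | sphere's center:)
            | 
            |     import math
            |     R = 2.0
            |     square = [(1, -1, 1), (1, 1, 1), (-1, 1, 1), (-1, -1, 1)]
            |     on_sphere = [tuple(R * c / math.sqrt(sum(v * v for v in p))
            |                        for c in p) for p in square]
            |     print(all(abs(math.dist(p, (0, 0, 0)) - R) < 1e-12
            |               for p in on_sphere))  # True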
        
             | brookst wrote:
             | "We're going to need a new type of curved monitor..."
        
         | romec wrote:
         | It is always the case that an expert doesn't just have to be
         | good at things, they also have to not be bad at them. Saying no
         | to doing things they are bad at is part of that. But it doesn't
         | matter.
         | 
         | We can argue that AI can do this or that, or that it can't do
         | this or that. But what is the alternative that is better? There
         | often isn't one. We have already been through this repeatedly
          | in areas such as cloud computing. Running your own servers is
         | leaner, but then you have to acquire servers, data centers and
         | operations. Which is hard. While cloud computing has become
         | easy.
         | 
          | In another story here, many defend that HN is simple [0].
          | Then it is noted that it might be getting stale [1].
          | Unsurprisingly so, as the simple nature of HN doesn't offer
         | much over asking an LLM. There are things an LLM can't do, but
         | HN doesn't do much of that.
         | 
         | For people to be better we actually need people. Who have
         | housing, education and healthcare. And good technologies that
         | can deliver performance, robustness and security. But HN is
         | full of excuses why those things aren't needed, and that is
         | something that AI can match. And it doesn't have to be that
         | good to do it.
         | 
         | [0] https://news.ycombinator.com/item?id=44099357 [1]
         | https://news.ycombinator.com/item?id=44101473
        
           | mattgreenrocks wrote:
           | > HN is full of excuses why those things aren't needed, and
           | that is something that AI can match
           | 
           | It's not just on HN; there's a lot of faith in the belief
            | that eventually AI will grant enlightened individuals
           | infinite leverage that doesn't hinge on pesky Other People.
           | All they need to do is trust the AI, and embrace the
           | exponentials.
           | 
           | Calls for the democratization of art also fall under this.
           | Part of what develops one's artistic taste is the long march
           | of building skills, constantly refining, and continually
           | trying to outdo yourself. In other words: The Work. If you
           | believe that only the output matters, then you're missing out
           | on the journey that confers your artistic voice.
           | 
           | If people had felt they had sufficient leverage over their
           | own lives, they wouldn't need to be praying to the machine
           | gods for it.
           | 
           | That's a much harder problem for sure. But I don't see AI
           | solving that.
        
         | evantbyrne wrote:
         | Agency work seems to be a blind spot for individuals within the
         | startup world, with many not realizing that it goes way beyond
         | theme chop shops. The biggest companies on the planet not only
         | contract with agencies all the time, external contractors do
         | some of their best work. e.g., Huge has won 3 Webby awards for
         | Google products.
        
           | whstl wrote:
           | Oh I agree. I don't really have a problem with agencies, the
           | topic of them is not really related to my reply. My focus was
           | more on the "disposable" part.
        
         | JeremyNT wrote:
         | > _The biggest asset of a developer is saying "no" to people.
         | Perhaps AIs will learn that, but with competing AIs I'm pretty
         | sure we'll always get one or the other to say yes, just like we
         | have with people._
         | 
         | In my experience this is always the hardest part of the job,
         | but it's definitely not what a lot of developers enjoy (or even
         | consider to be their responsibility).
         | 
         | I think it's true that there will always be room for
         | developers-who-are-also-basically-product-managers, because
         | success for a lot of projects will boil down to really
         | understanding the stakeholders on a personal level.
        
           | suzzer99 wrote:
           | Saying no is the hardest part of the job, and it's only
           | possible after you've been around a few years and already
           | delivered a lot of value.
           | 
           | There's also an art to it in how you frame the response,
           | figuring out what the clients really want, and coming up with
           | something that gets them 90% there w/o mushrooming app
           | complexity. Good luck with AI on that.
        
           | bicx wrote:
           | I think the biggest skill I've developed is the "Yes, but..."
           | Japanese style of saying "no" without directly saying "no."
           | Essentially you're saying anything is possible, but you may
            | need to expand constraints (budget, time, complexity). If your
           | company culture expects the engineering team to evaluate and
           | have an equal weight into making feature decisions, then a
           | flat "no" is more acceptable. If you're in a non-tech-first
           | company like I am, simply saying "no" makes _you_ look like
           | the roadblock unless you give more context and allow others
           | to weigh in on what they're willing to pay.
        
             | suzzer99 wrote:
             | We merged with a group from Israel and I had to explain to
             | them that our engineers had given them the "Hollywood no"
             | on something they'd asked for. Basically "Yes, that sounds
             | like a great idea" which actually means "Hell no" unless
             | it's immediately followed up with an actionable path
             | forward. The Israeli engineers found this very amusing and
             | started asking if it was really a Hollywood no anytime
             | they'd get a yes answer on something.
        
               | bicx wrote:
               | Hah, I've been guilty of a version of the "Hollywood no."
               | Usually goes like "Cool, this looks really interesting!
               | I'll do a little research into it." And then I look at it
               | for 2 seconds and never bring it up again. Interestingly,
               | this sometimes is all the person really wanted: to be
               | heard and acknowledged for the small amount of effort
               | they put in to surface an idea.
        
               | JeremyNT wrote:
               | Yes!
               | 
               | I see two variants of the "please hear me, I have good
               | ideas" effect: positive and negative:
               | 
               | 1) This looks great, have you thought about adding
               | <seemingly cool sounding thing that is either impossible
               | to implement or wouldn't actually be useful in the real
               | world>?
               | 
               | And
               | 
               | 2) Oh no! Aren't you worried about <breaking something in
               | some strange edge case in some ill advised workflow that
               | no real world person would ever use>?
               | 
               | "I'll look into it" is a great answer. Heaven help the
               | poor LLMs who have to take all this stuff seriously _and_
               | literally...
        
         | davidw wrote:
         | As an aside, where does one look for freelancing work these
         | days? Besides word of mouth.
        
           | dgb23 wrote:
           | With the risk of giving you a non-answer:
           | 
           | In my personal experience there's no substitute to building
           | relationships if you're an individual or small company
           | looking for contract/freelance work.
           | 
           | It starts slow, but when you're doing good work and maintain
           | relationships you'll be swimming in work eventually.
        
         | BurningFrog wrote:
         | Just saying "no" isn't helpful for anyone.
         | 
         | Much better is variations of "We could do that, but it would
         | take X weeks and/or cost this much in expenses/performance.
         | 
         | This leads to a conversation that enlightens both parties. It
         | may even result in you understanding why and how the request is
         | good after all.
        
           | setr wrote:
           | Seconding this. With infinite time and money, I can do
           | whatever you want -- excepting squaring the circle.
           | 
           | The onus is yours to explain the difficulty; ideally the
           | other party decides their own request is unreasonable once
           | you've provided an unreasonable timeline to match.
           | 
           | Actually straight-up saying no is always more difficult
           | because if you're not actually a decision-maker then what
           | you're doing is probably nonsense. You're either going to
           | have to explain yourself anyways (and it's best explained
           | with an unreasonable timeline), or be removed from the
           | process.
           | 
           | It's also often the case that the requestor has tried to
           | imagine himself in your shoes in an attempt to better explain
           | his goals, and comes up with some overly complex solution --
           | and describes that solution instead of the original goal.
           | Your goal with absurd requests is to pierce that veil and
           | reach the original problem, and then work back to construct a
           | reasonable solution to it
        
           | brandall10 wrote:
            | Some years back I took on an embattled project, a
           | dispenser/retriever for scrubs in a hospital environment that
           | had a major revision stuck in dev hell for over 2 years.
            | After auditing the state of the work I decided to discard
            | everything. From that clean slate, we faced a backlog of over
            | 200 bugs and 45 features to be developed.
           | 
           | Product wanted it done in 6 months, to which I countered that
           | the timeframe was highly unlikely no matter how many devs
           | could be onboarded. We then proceeded to do weekly scope
           | reduction meetings. After a month we got to a place where we
           | comfortably felt a team of 5 could knock it out... ended up
           | cutting the number of bugs down only marginally as stability
           | was a core need, but the features were reduced to only 5.
           | 
           | Never once did I push back and say something wasn't a good
            | idea; much of what happened was giving high-level estimates,
           | and if something was considered important enough, spending a
           | few hours to a few days doing preliminary design work for a
            | feature to better home in on the effort. It was all details
            | regarding difficulty/scope/risk to engender trust that the
           | estimates were correct, and to let product pick and choose
           | what were the most important things to address.
        
         | taco_emoji wrote:
         | > ...will always have that one extra stupid requirement that
         | completely befuddles the AI and pushes it into making an even
         | bigger mess, and then I'll have to come fix it.
         | 
          | Finally, a Microsoft FrontPage for the 2020s
        
         | helge9210 wrote:
         | > biggest control-freaks
         | 
         | "control-freak" not necessary. For any known sequence/set of
         | feature requirements it is possible to choose an optimal
         | abstraction.
         | 
         | It's also possible to order the requirements in such a way,
         | that introduction of next requirement will entirely invalidate
         | an abstraction, chosen for the previously introduced
         | requirements.
         | 
         | Most of the humans have trouble recovering from such a case.
         | Those who do succeed are called senior software engineers.
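          | 
          | To make that concrete, here's a toy Rust sketch (entirely
          | invented, just to illustrate the shape of the trap):
          | 
          |   // Requirements seen so far: "compute each shape's area".
          |   // A natural abstraction:
          |   trait Shape {
          |       fn area(&self) -> f64;
          |   }
          | 
          |   struct Circle { r: f64 }
          |   struct Square { s: f64 }
          | 
          |   impl Shape for Circle {
          |       fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
          |   }
          |   impl Shape for Square {
          |       fn area(&self) -> f64 { self.s * self.s }
          |   }
          | 
          |   // Next requirement: "compute the overlap area of any two
          |   // shapes". That's a binary operation over concrete pairs;
          |   // single dispatch on `dyn Shape` can't express it, so the
          |   // neat abstraction has to be torn up, not extended.
          |   // fn overlap(a: &dyn Shape, b: &dyn Shape) -> f64 { /* stuck */ }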
        
         | frausto wrote:
          | Great quote:
          | 
          | > the problems we face with software are rarely technical. The
          | problem is always the people creating accidental complexity and
          | pushing it to the next person as if it was "essential"
          | 
          | Until you reach absolutely massive scale, modern tooling and
          | reasonably built code and systems can handle most things
          | technically, so minimizing complexity is usually the #1 thing
          | you can do to optimize development. And that could be:
          | 
          | - code design complexity
          | - design complexity
          | - product complexity
          | - communication complexity
          | - org complexity
          | 
          | And sometimes minimizing one of these works against minimizing
          | another (most times not, and in an ideal world never... but
          | humans...), which is why much of our job is to push back and
          | trade off against complexity in order to minimize it. With the
          | understanding that some complexity is so inherent in a
          | person/company/code/whatever's processes that in the short term
          | you learn to work around or with it in order to move forward at
          | all, but in the long term you hopefully make strategic
          | decisions along the way to phase it out.
        
         | robocat wrote:
         | > accidental complexity
         | 
         | Haven't we found a better term for that yet?
         | 
         | It is intentional, designed complexity...
         | 
         | There's no accident about it: engineers or management chose it.
         | 
         | Recent discussion on accidental versus essential (kicked off by
         | a flagged article):
         | https://news.ycombinator.com/item?id=44090302 (choosing good
          | dichotomies is difficult, since there are always exceptions to
         | both categories)
        
       | jstummbillig wrote:
       | I think the article is mostly wrong about why it is right.
       | 
       | > It's architecting systems. And that's the one thing AI can't
       | do.
       | 
       | Why do people insist on this? AI absolutely will be able to do
       | that, because it increasingly can do that already, and we are now
       | goalposting around what "architecting systems" means.
       | 
        | What it cannot do, even in theory, is decide for you that you
        | want to do something, or what that should be. (It can
       | certainly provide ideas, but the context space is so large that I
       | don't see how it would realistically be better at seeing an issue
       | that exists in _your_ world, including what you can do, who you
       | know, and what interests you.)
       | 
       | For the foreseeable future, we will need people who want to make
       | something happen. Being a developer will mean something else, but
       | that does not mean that you are not the person most equipped to
       | handle that task and deal with the complexities involved.
        
         | cheschire wrote:
          | Maybe I misunderstood your phrasing, but I think with enough
         | context an AI could determine what you want to do with
         | reasonable accuracy.
         | 
         | In fact, I think this is the scary thing that people are
         | ringing the alarm bells about. With enough surveillance,
         | organizations will be able to identify you reliably out in the
          | world enough to build a significant amount of context, even if
         | you aren't wearing a pair of AI glasses.
         | 
         | And with all that context, it will become a reasonable task for
         | AI to guess what you want. Perhaps even guess a string of
         | events or actions or activities that would lead you towards an
         | end state that is desirable by that organization.
         | 
         | This is primarily responding to that one assertion though and
         | is perhaps tangential to your actual overall point.
        
           | dakiol wrote:
           | > but I think with enough context [...]
           | 
           | I think that's the key. The only ones who can provide enough
           | accurate context are software developers. No POs or managers
           | can handle such levels of detail (or abstraction) to hand
           | them over via prompts to a chatbot; engineers are doing this
           | on a daily basis.
           | 
           | I laugh at the image of a non-technical person like my PO or
           | the manager of my manager giving "orders" to an LLM to design
            | a highly scalable tiny component for handling payments. There
            | are dozens of details that can go wrong if not enough detail
            | is provided: from security, to versioning, to resilience, to
           | deployment, to maintainability...
        
             | malnourish wrote:
             | Until LLMs are hooked directly into business and market
             | data and making decisions without, or with nominal, human
             | intervention.
        
               | ZephyrBlu wrote:
               | I believe this is utter fantasy. That kind of data is
               | usually super messy. LLMs are terrible at disambiguating
               | if something is useful vs harmful information.
               | 
               | It's also unlikely that context windows will become
               | unbounded to the point where all that data can fit in
               | context, and even if it can it's another question
               | entirely whether the model can actually utilize all the
               | information.
               | 
               | Many, many unknown unknowns would need to be overcome for
               | this to even be in the realm of possibility. Right now
               | it's difficult enough to get simple agents with
               | relatively small context to be reliable and perform well,
               | let alone something like what you're suggesting.
        
               | SketchySeaBeast wrote:
               | Well, I look forward to watching the first meeting where
                | the CEO needs to explain their business plan to an LLM.
        
               | bee_rider wrote:
               | I'd definitely be interested in at least giving a shot at
                | working for a company CEO'd by an LLM... maybe 3 years
               | from now.
               | 
               | I don't know if I really believe that it would be better
               | than a human in every domain. But it definitely won't
               | have a cousin on the board of a competitor, reveal our
               | plans to golfing buddies, make promotions based on
               | handshake strength, or get canceled for hitting on
               | employees.
        
               | SketchySeaBeast wrote:
               | But it will change its business plan the first time
               | someone says "No, that doesn't make sense", and then
               | it'll forget what either plan was after a half hour.
               | 
               | To be CEO is to have opinions and convictions, even if
               | they are incorrect. That's beyond LLMs.
        
               | bee_rider wrote:
               | Minor tangential quibble: I think it is more accurate to
               | say that to be human is to have opinions and convictions.
               | But, maybe being CEO is a job that really requires
               | turning certain types of opinions and convictions into
               | actions.
               | 
               | More to the point, I was under the impression that
               | current super-subservient LLMs were just a result of the
               | fine-tuning process. Of course, the LLM doesn't have an
               | internal mental state so we can't say it has an opinion.
               | But, it could be fine-tuned to act like it does, right?
        
               | SketchySeaBeast wrote:
               | That was my point - to be CEO is to have convictions that
               | you're willing to bet a whole company upon.
               | 
               | Who is fine-tuning the LLM? If you're having someone
                | turn the dials and set core concepts and policies so
               | that they persist outside the context window it seems to
               | me that they're the actual leader.
        
               | bee_rider wrote:
               | Generally the companies that sell these LLMs as a service
               | do things like fine-tuning and designing built-in parts
               | of the prompt. If we want to say we consider the
               | employees of those companies to be the ones actually
               | doing <the thing>, I could be convinced, I think. But, I
               | think it is an unusual interpretation, usually we
               | consider the one doing <the thing> to be the person using
               | the LLM.
               | 
               | I'm speculating about a company run by an LLM (which
               | doesn't exist yet), so it seems plausible enough that all
               | of the employees of the company could use it together
               | (why not?).
        
               | SketchySeaBeast wrote:
                | An LLM that takes in everyone's ideas and decides a course
               | semi-democratically? Throw in profit sharing and you may
               | be on to something.
        
               | bee_rider wrote:
               | Yeah, or maybe even a structure that is like a collection
               | of co-ops, guilds, and/or franchises somehow coordinated
               | by an LLM. The mechanism for actually running the thing
               | semi-democratically would definitely need to be worked
               | out!
        
               | dakiol wrote:
               | That's not the goal of LLMs. CEOs and high-level
               | executives need people beneath them to handle ambiguous
               | or non-explicit commands and take ownership of their
               | actions from conception to release. Sure, LLMs can be
               | configured to handle vague instructions and even say,
               | "sure, boss, I take responsibility for my actions," but
               | no real boss would be comfortable with that.
               | 
               | Think about it: if, in 10 years, I create a company and
               | my only employee is a highly capable LLM that can execute
               | any command I give, who's going to be liable if something
               | goes wrong? The LLM or me? It's gonna be me, so I better
                | give the damn LLM explicit and non-ambiguous commands...
               | but hey I'm only the CEO of my own company, I don't know
               | how to do that (otherwise, I would be an engineer).
        
           | psychoslave wrote:
            | I want peace and thriving for all members of humanity[1] to
            | the largest extent, starting where it creates reciprocal
            | flourishing, and without excluding anyone by favoring someone
            | else.
            | 
            | See, "AI" doesn't even have to guess it; I make full public
            | disclosure of it. If anything can help with such a goal,
            | including automated inference (AI) devices, there is no major
            | concern with such a tool per se.
            | 
            | The leviathan monopolizing the tool for its own benefit, in a
            | way detrimental to human beings, is an orthogonal issue.
            | 
            | [1] This is a bit of an anthropocentric statement, but it's a
            | good way to favor human agreement, and I believe it still
            | implicitly requires living in harmony with the rest of our
            | fellow earth inhabitants.
        
           | jstummbillig wrote:
           | Take a look at your life and the signals you use to operate.
           | If you are anything like me, summarizing them in a somewhat
           | reasonable fashion feels basically impossible.
           | 
           | For example, my mother calls and asks if I want to come over.
           | 
           | How is an AI ever going to have the context to decide that
           | for me? Given the right amount and quality of sensors
           | starting from birth or soon after - sure, it's not
           | theoretically impossible.
           | 
            | But as a grown-up person, I have knowledge about the things
            | we share and don't share, the conflicts in our present and
            | past, the things I never talked about to anyone and that I
            | would find hard to verbalize if I wanted to, or to admit to
            | myself that I don't.
           | 
           | It can check my calendar. But it can't understand that I have
           | been thinking about doing something for a while, and I just
            | heard someone randomly talking about something else that
            | resurfaced the idea, and now I would really rather do that.
           | How would the AI know? (Again, not theoretically impossible
           | given the right sensors, but it seems fairly far away.)
           | 
           | I could try and explain of course. But where to start? And
           | how would I explain how to explain this to mum? It's really
            | fucking complicated. I am not saying that LLMs, generalization
            | monsters that they are, would not be helpful here; actually
            | it's both insane and sobering how helpful they can be given
            | the amount of context that they do _not_ have about us.
        
             | prmph wrote:
             | Exactly, even AGI would not be able to answer that question
             | on my behalf.
             | 
             | Which means it cannot architect a software solution just by
             | itself, unless it could read people's minds and know what
             | they might want.
        
           | mnky9800n wrote:
           | I think this is already what happens in social media
           | advertising. It's not hard to develop a pattern of behaviours
            | for a subset of people that leads to conversion and then build
           | a model that delivers information to people that leads them
           | on those paths. And conversion doesn't mean they need to buy
           | a product it could also be accept an idea, vote for a
           | candidate, etc. The scary thing, as you point out, is that
           | this could happen in the real world given the massive amount
           | of data that is passively collected about everything and
           | everybody.
        
         | squidbeak wrote:
         | Well put. It's routinely spoken about as if there's no
         | timescale on which AI could ever advance to match elite human
         | capabilities, which seems delusionally pessimistic.
        
           | paulddraper wrote:
           | Exactly.
           | 
            | AIs can -- on some time horizon -- do anything that Is can
            | do.
           | 
           | Just because one is organic-based doesn't necessitate
           | superior talent.
        
         | suyash wrote:
         | The article is mostly wrong, companies are already not
         | recruiting as many junior/fresh college graduates as before. If
         | AI is doing everything but architecting (which is a false
         | argument but let's roll with it), naturally companies will need
         | fewer engineers to architect and supervise AI systems.
        
           | uludag wrote:
            | There's the software factory hypothesis, though, which states
            | that LLMs will bring down the level of skill required to
            | produce the same software (i.e. automation leads to SWE being
            | like working on a factory line). In this scenario, unskilled
            | cheap labor would be desired, making juniors preferable.
           | 
           | My guess though is that the lack of hiring is simply a result
            | of the oversaturation of the market. Just looking at the
           | growth of CS degrees awarded you have to conclude that we'd
           | be in such a situation eventually.
        
             | roenxi wrote:
              | The equilibria wouldn't quite work out that way. The
             | companies would still hire the most capable software
             | engineers (why not?), but the threat of being replaced by
             | cheap juniors means that they don't have much leverage and
             | their wages drop. It'll still be grizzled veterans and
             | competitive hiring processes looking for people with lots
             | of experience.
             | 
             | These things don't happen overnight though, it'll probably
             | take a few years yet for the shock of whatever is going on
             | right now to really play out.
        
           | VBprogrammer wrote:
           | I suspect that any reduction in hiring is more a function of
           | market sentiment than jobs being replaced by AI. Many
           | companies are cutting costs rather than expanding as rapidly
           | as possible during the capture the flag years.
        
             | bradlys wrote:
             | People keep forgetting that the hiring cuts were happening
             | before AI was hyped up. AI is merely the justification
             | right now because it helps stock price.
             | 
             | We've been seeing layoffs for over 3 years...
        
               | whstl wrote:
               | A company I worked for got half a billion euros funding
               | and the command from Masayoshi Son was to "hire as many
               | developers as you can".
               | 
               | That was before the pandemic and AI.
               | 
               | It was predictable that some layoffs would eventually
               | happen, we just didn't know it would be so fast.
        
               | bradlys wrote:
               | I don't think people imagined this would be lasting for
               | over 3 years. People were ready for a bumpy 12-18 months
               | and not for this trend to be the new normal.
        
               | whstl wrote:
               | Good point, I agree.
               | 
               | I wonder if it's AI, the market, both, or some other
               | cause... :/
        
               | bradlys wrote:
               | It's the market, lol.
               | 
               | You think executives are gonna be saying, "yeah we're
               | laying off people because our revenue stinks and we have
               | too high of costs!" They're gonna tell people, "yeah, we
               | definitely got AI. It's AI, that's our competitive edge
               | and why we had to do layoffs. We have AI and our
               | competitors don't. That's why we're better. (Oh my god, I
               | hope this works. Please get the stock back up, daddy
               | needs a new yacht.)"
        
               | marcosdumay wrote:
               | The huge Silicon Valley corps are mostly zombies right
               | now. The bad hiring market will last for as long as they
               | dominate the software market.
        
               | marcosdumay wrote:
               | > we just didn't know it would be so fast
               | 
               | The layoffs started exactly as soon as the US government
               | decided to stop giving free money to investment funds. A
               | few days before they announced it.
               | 
               | A bit before it, it was clear it was going to happen. But
               | I do agree that years earlier nobody could predict when
                | it would stop.
        
             | ghaff wrote:
             | And correcting for a hiring bubble.
             | 
             | Depends on where too. Was just talking to a friend
             | yesterday who works for a military sub (so not just
             | software) and they said their projects are basically
             | bottlenecked by hiring engineers.
        
           | ncruces wrote:
           | People apparently can't decide if AI is killing juniors, or
           | if it's lowering the bar of what laymen can achieve.
        
             | whstl wrote:
             | Anecdotal but:
             | 
             | Fintech unicorn that has AI in its name, but still forbids
             | usage of LLMs for coding (my previous job) --> no hiring of
             | juniors since 2023.
             | 
             | YC startup funded in 2024 heavily invested in AI (my
             | current job) --> half the staff is junior.
        
             | ghaff wrote:
             | There's definitely this broader argument and you can even
             | find it in academic papers. Is AI best at complementing
             | expertise or just replacing base-level skills? Probably a
             | bit of both but an open question.
        
           | worldsayshi wrote:
           | The amount and sophistication of sailing ships increased
           | considerably as steam ships entered the market. Only once
           | steam ships were considerably better in almost every regard
           | that mattered to the market did the sailing ships truly get
           | phased out to become a mere curiosity.
           | 
           | I think the demand for developers will similarly fluctuate
            | wildly while LLMs are still being improved towards the point
           | of being better programmers than most programmers. Then
           | programmers will go and do other stuff.
           | 
           | Being able to make important decisions about what to build
            | is one of those things that should increase in demand
           | as the price of building stuff goes down. Then again, making
            | important technical decisions and understanding their
            | consequences has always been part of what developers do. So
           | we should be good at that.
        
             | skydhash wrote:
              | The advantages of steam over sails were clear to everyone.
              | The only issues left were engineering ones: solving each
              | mini problem as they went and making the engine more
              | efficient. Since the advent of ChatGPT, hallucinations have
              | been pointed out as a problem. Today we're nowhere close to
              | even a hint of how to correct them.
        
               | ethbr1 wrote:
               | > _The advantages of steam over sails were clear to
               | everyone_
               | 
               | They were most certainly not! Which is why you had a
               | solid 60+ years of sail+steam ships. And even longer for
               | cargo! [0]
               | 
               | Parent picked a great metaphor for AI adoption: superior
               | in some areas, inferior in others, with the balance
               | changing with technological advancement.
               | 
               | [0] https://en.m.wikipedia.org/wiki/SS_Great_Western
               | https://en.m.wikipedia.org/wiki/SS_Sirius_(1837) https://
               | www.reddit.com/r/AskHistorians/comments/4ap0dn/when_...
        
               | skydhash wrote:
               | > _superior in some areas, inferior in others, with the
               | balance changing with technological advancement._
               | 
                | So what are the areas where AI is superior to traditional
               | programming? If your answer is suggestion, then
               | refinement with traditional tooling, then it's just a
               | workflow addon like contextual help, google search, and
               | github code search. And I prefer the others because they
               | are more reliable.
               | 
               | We have six major phases in the software development
               | lifecycle: 1) Planning, 2) Analysis, 3) Design, 4)
               | Implementation, 5) Testing, 6) Maintenance. I failed to
               | see how LLM assistance is objectively better even in part
               | than not having it at all. Everything I've read is mostly
               | anecdote where the root cause is inexperience and lack of
               | knowledge.
        
               | ethbr1 wrote:
               | Fuzzy interpretation
        
               | skydhash wrote:
                | Then take a die and let it decide each token according
                | to some rules... aka LLMs.
        
           | JohnMakin wrote:
           | They're not hiring juniors and now my roles consist of 10x as
           | much busywork as they used to. People are expanding to fill
           | these gaps; I'm not seeing much evidence that AI is
           | "replacing" these people as much as businesses think they now
           | don't need to hire junior developers. The thing is though, in
              | 5 years there are not going to be as many seniors, and if AI
           | doesn't close that gap, businesses are going to feel it a lot
           | more than whatever they think they're gaining by not hiring
           | now.
        
             | burningChrome wrote:
             | >> The thing is though, in 5 years there is not going to be
             | as many seniors.
             | 
             | This is already happening. Over the past 4-5 years I've
             | known more than 30 senior devs either transition into areas
              | other than development, or in many cases, completely leave
              | development altogether. Most have left because they're
             | getting stuck in situations like you describe. Having to
             | pick up more managerial stuff and AI isn't capable of even
             | doing junior level work so many just gave up and left.
             | 
             | Yes, AI is helping in a lot of different ways to reduce
             | development times, but the offloading of specific knowledge
             | to these tools is hampering actual skill development.
             | 
             | We're in for a real bumpy ride over the next decade as the
              | industry comes to grips with how to deal with a lot of bad
             | things all happening at the same time.
        
         | austin-cheney wrote:
         | No, AI is not yet able to architect. The confusion here is the
         | inability to discern architecture from planning.
         | 
         | Planning is the ability to map concerns to solutions and
         | project solution delivery according to resources available. I
         | am not convinced AI is anywhere near getting that right. It's
         | not straightforward even when your human assets are
         | commodities.
         | 
         | Acting on plans is called task execution.
         | 
         | Architecture is the design and art of interrelated systems.
         | This involves layers of competing and/or cooperative plans. AI
         | absolutely cannot do this. A gross hallucination at one layer
         | potentially destroys or displaces other layers and that is
         | catastrophically expensive. That is why real people do this
         | work and why they are constantly audited.
        
           | Dumblydorr wrote:
            | It can't actively become a coding agent and make the changes
            | itself, but it doesn't do that for individual scripts either,
            | and yet we say it can code.
            | 
            | And I can ask it how to architect my database in a logical
            | way, and it clearly has solid ideas, even though, again, it
            | doesn't script them itself.
            | 
            | So really it teaches or instructs us in one way to do things;
            | it's not executing in any realm... yet.
        
         | conartist6 wrote:
         | But that's what it's sold as. The decide-for-you bot.
         | 
         | If people still had to think for themselves, what would be the
         | point?
        
           | bonoboTP wrote:
           | What's the point of a lever if I still have to move it?
        
         | tonyhart7 wrote:
          | Yeah, we just need Amazon to release their AWS SDK MCP, then
          | wait a few years until the rough parts get smoothed out, and
          | then it would be possible.
          | 
          | I mean, we literally have an industry that does just that
          | (Vercel, Netlify, etc.)
        
         | elzbardico wrote:
         | AI can't architect. AI can simulate architecting. A lot of
         | times AI can't even code.
        
         | GuB-42 wrote:
         | > Being a developer will mean something else
         | 
          | Not really. Programming means explaining to the machine what to
          | do. How you do it has changed over the years, from writing
          | machine language and punching cards to gluing frameworks and
          | drawing boxes. But the core is always the same: take
          | approximate and ambiguous requirements from someone who doesn't
          | really know what they want, and turn them into something
          | precise the machine can execute reliably, without supervision.
         | 
         | Over the years, programmers have figured out that the best way
         | to do it is with code. GUIs are usually not expressive enough,
          | and English is too ambiguous and/or too verbose; that's why we
         | have programming languages. There are fields that had
         | specialized languages before electronic computers existed, like
         | maths, and for the same reason.
         | 
         | LLMs are just the current step in the evolution of programming,
         | but the role of the programmer is still the same: getting the
         | machine to do what people want, be it by prompting, drawing, or
         | writing code, and I suspect code will still prevail. LLMs are
         | quite good at repeating what has been done before, but having
         | them write something original using natural language
         | descriptions is quite a frustrating experience, and if you are
         | programming, there is a good chance there is at least something
         | original to it, otherwise, why not use an off-the-shelf
         | product?
         | 
         | We are at the peak of the hype cycle now, but things will
         | settle down. Some things will change for sure, as always when
         | some new technology emerges.
        
           | aaronblohowiak wrote:
           | +100.
           | 
            | I feel like a lot of people need to go re-read The Moon Is a
            | Harsh Mistress.
        
           | lubujackson wrote:
           | I agree, I see AI as just a level of abstraction. Make a
           | function to do X, Y, Z? Works great. Even architect a DAG,
           | pretty good. Integrate everything smoothly? Call in the devs.
           | 
           | On the bright side, the element of development that is LEAST
           | represented in teaching and interviewing (how to structure
           | large codebases) will be the new frontier and differentiator.
            | But much as scripting languages removed the focus on pointers
           | and memory management, AI will abstract away discrete blocks
           | of code.
           | 
           | It is kind of the dream of open source software, but advanced
           | - don't rebuild standard functions. But also, don't bother
           | searching for them or work out how to integrate them. Just
           | request what you need and keep going.
        
             | bluefirebrand wrote:
             | > I agree, I see AI as just a level of abstraction. Make a
             | function to do X, Y, Z? Works great. Even architect a DAG,
             | pretty good. Integrate everything smoothly? Call in the
             | devs.
             | 
             | "Your job is now to integrate all of this AI generated slop
             | together smoothly" is a thought that is going to keep me up
             | at night and probably remove years from my life from stress
             | 
             | I don't mean to sound flippant. What you are describing
             | sounds like a nightmare. Plumbing libraries together is
             | just such a boring, miserable chore. Have AI solve all the
             | fun challenging parts and then personally do the gruntwork
             | of wiring it all together?
             | 
             | I wish I were closer to retirement. Or death
        
           | catigula wrote:
           | The problem with this idea is that the current systems have
           | gone from being completely incapable of taking the developer
           | role in this equation to somewhat capable of taking the
           | developer role (i.e. newer agents).
           | 
           | At this clip it isn't very hard to imagine the developer
           | layer becoming obsolete or reduced down to one architect
           | directing many agents.
           | 
           | In fact, this is probably already somewhat possible. I don't
           | really write code anymore, I direct claude code to make the
           | edits. This is a much faster workflow than the old one.
        
           | yesco wrote:
           | I like to joke with people that us programmers automated our
           | jobs away decades ago, we just tell our fancy compilers what
           | we want and they magically generate all the code for us!
           | 
           | I don't see LLMs as much different really, our jobs becoming
           | easier just means there's more things we can do now and with
           | more capabilities comes more demand. Not right away of
           | course.
        
             | dehrmann wrote:
             | What's different is compilers do deterministic, repetitive
             | work that's correct practically every time. AI takes the
             | hard part, the ambiguity, and gets it sorta ok some of the
             | time.
        
               | datadrivenangel wrote:
               | I have bad news about compilers.
        
               | skydhash wrote:
                | The hard part is not the ambiguous part, and it never
                | was. You just need to talk with the stakeholders to sort
               | it out. That's the requirement phase and all it requires
               | is good communication skills.
               | 
               | The hard part is to have a consistent system that can
               | evolve without costing too much. And the bigger the
               | system, the harder it is to get this right. We have
               | principles like modularity, cohesion, information
               | hiding,... to help us on that front, but not a clear
               | guideline on how to achieve it. That's the design phase.
               | 
               | Once you have the two above done, coding is often quite
               | easy. And if you have a good programming ecosystem and
               | people that know it, it can be done quite fast.
        
             | ethbr1 wrote:
             | 100% agree with this thread, because it's the discussion
              | about _why_ no code (and cloud/SaaS to a lesser degree)
             | failed to deliver on their utopian promises.
             | 
             | Largely, because there were still upstream blockers that
             | constrained throughput.
             | 
             | Typically imprecise business requirements (because someone
             | hadn't thought sufficiently about the problem) or operation
             | at scale issues (poorly generalizing architecture).
             | 
              | > _our jobs becoming easier just means there's more things
             | we can do now and with more capabilities comes more demand_
             | 
             | This is the repeatedly forgotten lesson from the computing
             | / digitization revolution!
             | 
             | The reason they changed the world wasn't because they were
             | more _capable_ (versus their manual precursors) but because
             | they were economically _cheaper_.
             | 
             | Consequently, they enabled an entire class of problems to
             | be worked on that were previously uneconomical.
             | 
             | E.g. there's no company on the planet that wouldn't be
             | interested in more realtime detail of its financial
             | operations... but that wasn't worth enough to pay bodies to
             | continually tabulate it.
             | 
              | >> _The NoCode movement didn't eliminate developers; it
             | created NoCode specialists and backend integrators. The
             | cloud didn't eliminate system administrators; it
             | transformed them into DevOps engineers at double the
             | salary._
             | 
             | Similarly, the article feels around the issue here but
             | loses two important takeaways:
             | 
             | 1) Technologies that revolutionize the world decrease total
             | cost to deliver preexisting value.
             | 
             | 2) Salary ~= value, for as many positions as demand
             | supports.
             | 
              | Whether there are more or fewer backend integrators, devops
             | engineers, etc. post-transformation isn't foretold.
             | 
             | In recent history, those who upskill their productivity
             | reap larger salaries, while others' positions disappear.
             | I.e. the cloud engineer supporting millions of users,
              | instead of the many bodies it used to take to deliver
             | less efficiently.
             | 
             | It remains to be seen whether AI coding will stimulate more
             | demand or simply increase the value of the same / fewer
             | positions.
             | 
             | PS: If I were career plotting today, there's no way in hell
             | I'd be aiming for anything that didn't have a customer-
             | interactive component. Those business solution formulation
             | skills are going to be a key differentiator any way it
             | goes. The "locked in a closet" coder, no matter how good,
             | is going to be a valuable addition for fewer and fewer
             | positions.
        
           | zamalek wrote:
           | My way of thinking about this has been: code is to a
           | developer as bricks are to a builder. Writing a line of code
           | is merely the final 10% of the work, there's a whole bunch of
           | cognitive effort that precedes it. Just like a builder has
           | already established a blueprint, set up straight lines, mixed
            | cement, and what-have-you, prior to laying a brick.
        
           | surgical_fire wrote:
           | Those are very good points.
           | 
           | I am finding LLMs useful for coding in that it can do a lot
           | of heavy lifting for me, and then I jump in and do some
           | finishing touches.
           | 
           | It is also sort of decent at reviewing my code and suggesting
           | improvements, writing unit tests etc.
           | 
            | Hidden in all that is that I have to describe all of those things,
           | in detail, for the LLM to do a decent job. I can of course do
           | a "Write unit tests for me", but I notice it does a much
           | better job if I describe what are the test cases, and even
           | how I want things tested.
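            | 
            | For example, instead of "write unit tests", I'd spell the
            | cases out myself (a hypothetical Rust function and test
            | names, purely for illustration):
            | 
            |   fn clamp_percent(x: i32) -> i32 {
            |       x.clamp(0, 100)
            |   }
            | 
            |   #[cfg(test)]
            |   mod tests {
            |       use super::*;
            | 
            |       // Each test corresponds to a case I described to the
            |       // LLM explicitly, rather than letting it guess.
            |       #[test]
            |       fn in_range_values_pass_through() {
            |           assert_eq!(clamp_percent(42), 42);
            |       }
            | 
            |       #[test]
            |       fn negatives_clamp_to_zero() {
            |           assert_eq!(clamp_percent(-5), 0);
            |       }
            | 
            |       #[test]
            |       fn overflow_clamps_to_one_hundred() {
            |           assert_eq!(clamp_percent(250), 100);
            |       }
            |   }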
        
         | jerf wrote:
         | If you read their use of "AI" as "LLM", then, yes, LLMs can't
         | architect and I don't expect they ever will. You could power
          | them up by a factor of 10, past all the scaling limits we have
          | now, and LLMs would still be fundamentally unsuited for
          | architecture.
         | It's a technology fundamentally unsuited for even medium-small
         | scale code coherence, let alone architectural-level coherence,
         | just by its nature. It is simply constitutionally too likely to
         | say "I need to validate user names" and slam out a fresh copy
         | of a "username validation routine" because that autocompletes
         | nicely, but now you've got a seventh "username validation
         | routine" because the LLM has previously already done this
         | several times before, and none of the seven are the same, and
         | that's just one particularly easy-to-grasp example of their
         | current pathologies.
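          | 
          | A made-up Rust illustration of the kind of drift I mean -- two
          | "fresh copies" of the same idea that silently disagree:
          | 
          |   // Generated in one session: 3 to 20 chars, alphanumeric only.
          |   fn validate_username(name: &str) -> bool {
          |       (3..=20).contains(&name.len())
          |           && name.chars().all(|c| c.is_ascii_alphanumeric())
          |   }
          | 
          |   // Generated later for "the same" purpose: underscores now
          |   // allowed, minimum length now 4. Neither is wrong in
          |   // isolation; together they make the system incoherent.
          |   fn is_valid_username(name: &str) -> bool {
          |       name.len() >= 4
          |           && name.len() <= 20
          |           && name.chars().all(|c| c.is_ascii_alphanumeric() || c == '_')
          |   }
          | 
          | Nothing here is broken locally, which is exactly why it slips
          | through review.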
         | 
         | If anyone's moving the "architecture" goal posts it would be
         | anyone who thinks that "architecture" so much as fits into the
         | context window of a modern LLM, let alone that they are
         | successfully doing it. They're _terrible_ architects right now,
          | like, worse than useless, worse than I'd expect from an
         | intern. An intern may cargo cult design methodologies they
         | don't understand yet but even that is better than what LLMs are
         | producing.
         | 
          | Whatever the next generation of AI is, though, who can tell.
          | What an AI that could actually construct symbolic maps of a
          | system, manipulate that map directly, and then manifest it in
          | code could accomplish is difficult to say. However, nobody
         | knows how to do that right now. It's not for lack of trying,
         | either.
        
       | ddtaylor wrote:
       | I think we should look at one even earlier: COBOL.
       | 
        | This was the response by non-developers: make it unnecessary to
        | spell out your business details to an expensive programmer who,
        | we presume, will just change them anyhow and make up their own
        | numbers!
       | 
        | That didn't work for shit either, although to the author's point
       | it did create a ton of jobs!
        
       | plainOldText wrote:
        | I think these are key thoughts worth considering going forward:
        | 
        | > Code is not an asset, it's a liability.
        | 
        | > Every line must be maintained, debugged, secured, and
        | eventually replaced. The real asset is the business capability
        | that code enables.
        | 
        | > The skill that survives and thrives isn't writing code. It's
        | architecting systems. And that's the one thing AI can't do.
        
         | squidbeak wrote:
         | The thing is, as compute prices fall and AI advances in its
         | capability, swarms of coding agents can sweat on every line of
         | code, looking for cleaner implementations, testing them, 24hrs
         | a day. Current models often produce spaghetti crap. But it's a
         | bold assumption that will always be the case.
         | 
         | There's a pretty big herd of sacred cows in programming and
         | this debate always surfaces them. But I remember similar
         | arguments being made about Go once and its version of human
         | beings' special something. We saw then that sacred cows don't
         | live long when AI really arrives.
        
           | piva00 wrote:
            | I don't think comparing Go to developing systems is useful.
            | Go has a very clear goal to be achieved, and play can be
            | easily iterated on toward that singular goal. Developing
            | systems is a much more complex task involving a lot of human
            | subjectivity to even define a goal; there are whole swaths of
            | attempts in methodology, processes, and tools that try to
            | even reach some kind of definition of a software development
            | goal, and none so far have been good enough to automate this
            | process.
           | 
           | It can happen that AI will get good enough to help with the
           | human aspect of software development but using playing Go as
           | an analogy doesn't really work.
        
             | squidbeak wrote:
             | You'll notice if you reread my comment that what I was
             | comparing was the notion of the human player / developer's
             | special something.
        
           | chrz wrote:
            | Why is it bold? Bold is being so sure in predicting the future.
        
           | prmph wrote:
           | IDK, after almost 70 years of the software business, we can't
           | even yet have package repositories that are not subject to
           | supply-chain attacks.
           | 
           | SQL, with all its warts, has not been dethroned. HTML is
           | still king.
        
             | skydhash wrote:
             | We still have C/C++ at the heart of every OS and
             | programming ecosystems.
        
         | ozgrakkurt wrote:
         | Code is an asset. It costs a lot of money to create "code".
         | 
          | This is like saying that your transportation fleet as a
          | delivery company isn't an asset but a liability; it makes no
          | sense.
         | 
         | Almost all assets require varying amounts of maintenance.
        
           | Terr_ wrote:
           | I imagine a factory with giant containers of hideously
           | corrosive and toxic chemicals that might be very useful for a
           | particular chemical reaction.
           | 
            | The default ground state of those tanks is a liability. They
            | may be assets today, but maintaining that conditional status
            | requires constant work.
        
       | the__alchemist wrote:
        | The article's point about LLMs being poor at architecture aligns
        | with my primary rule for using them in code: don't have them
        | design data structures or function signatures. They can fill
        | those in when appropriate, but I will not let an LLM define them
        | (structs, enums, fn sigs, etc.)
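        | 
        | As a minimal Rust sketch of what I mean (the type and function
        | are invented for illustration), I write this part myself:
        | 
        |   /// Hand-written: the shape of the data is my decision.
        |   struct User {
        |       name: String,
        |       age: u8,
        |   }
        | 
        |   /// Hand-written signature; only the body is fair game for the
        |   /// LLM to fill in. A plausible generated implementation:
        |   fn parse_user(input: &str) -> Result<User, String> {
        |       let mut parts = input.splitn(2, ',');
        |       let name = parts.next().ok_or("missing name")?.trim().to_string();
        |       let age = parts
        |           .next()
        |           .ok_or("missing age")?
        |           .trim()
        |           .parse::<u8>()
        |           .map_err(|e| e.to_string())?;
        |       Ok(User { name, age })
        |   }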
        
       | fhd2 wrote:
        | I think a few revolutions are missing from the list that weren't
       | technical, but organisational:
       | 
       | 1. The push for "software architects" to create plans and
       | specifications for those pesky developers to simply follow. I
       | remember around 2005, there was some hype around generating code
       | from UML and having developers "just" fill in the blanks. The
        | results I've observed were insanely over-engineered systems where
       | even just adding a new field to be stored required touching like
       | 8 files across four different layers.
       | 
       | 2. The "agile transformation" era that followed shortly after,
       | where a (possibly deliberate) misunderstanding of agile
        | principles led to lots of off-the-shelf processes, roles, and
       | some degree of acceptance for micro managing developers. From
       | what I've seen, this mostly eroded trust, motivation and
       | creativity. Best case scenario, it would create a functioning
       | feature factory that efficiently builds the wrong thing. More
       | often than not, it just made entire teams unproductive real fast.
       | 
       | What I've always liked to see is non-developers showing genuine
       | interest in the work of developers, trying to participate or at
       | least support, embracing the complexity and clarifying problems
       | to solve. No matter what tools teams use and what processes they
       | follow, I've always seen this result in success. Any effort
        | around reducing the complexity inherent in software development
       | did not.
        
       | martzoukos wrote:
       | AI does a pretty good job at helping you learn architecture,
       | though.
        
       | dakiol wrote:
       | I think that until LLMs can assertively say "no" to your
       | requests, we won't be able to rely on them autonomously. The
       | greatest downside of ChatGPT, Copilot, and similar tools is that
       | they always give you something in return, they always provide
       | some sort of answer and rarely challenge your original request.
       | That's the biggest difference I've noticed so far between working
       | with humans and machines. Humans will usually push back, and
        | together you can come up with something better (perhaps with less
        | code, fewer processes, or fewer dependencies). Chatbots (as of
        | now) just throw one of thousands of potential solutions at you to
        | shut you up.
        
         | aleph_minus_one wrote:
         | > I think that until LLMs can assertively say "no" to your
         | requests, we won't be able to rely on them autonomously. The
         | greatest downside of ChatGPT, Copilot, and similar tools is
         | that they always give you something in return, they always
         | provide some sort of answer and rarely challenge your original
         | request.
         | 
         | Is this bad news? It means that managers who think too much of
         | their "great" ideas (without having a deep knowledge of the
         | respective area) and want "obedient" subordinates will be in
         | for a nasty surprise. :-)
        
           | dakiol wrote:
           | Well, yeah : )
        
       | IshKebab wrote:
       | These kinds of articles are arguing against nothing. Anybody can
       | see that AI can't really replace developers _today_ (though it
       | can certainly save you huge chunks of time in some situations).
       | But what about in 5 years? 10 years? Things are changing rapidly
        | and nobody knows what's going to happen.
       | 
       | It's entirely possible that in 5 or 10 years at least some
       | developers will be fully replaced.
       | 
       | (And probably a lot of people in HR, finance, marketing, etc.
       | too.)
        
         | bilbo0s wrote:
         | If you're gonna give AI a decade of improvements, then I'll go
         | ahead and bet on a whole lot of developers being replaced. Not
         | just _some_ developers being replaced.
         | 
         | I think you hit on something with finance as well. Give
         | Microsoft a decade of improving AI's understanding of Excel and
         | I'm thinking a whole lot of business analyst types would be
         | unnecessary. Today, in an organization of 25 or 50 thousand
         | employees, you may have dozens to hundreds depending on the
         | industry. Ten years from now? Well, let's just say no one is
          | gonna willingly carry hundreds of business analysts' salaries on
         | their books while paying the Microsoft 365AI license anyway.
         | Only the best of those analysts will remain. And not many of
         | them.
        
           | owebmaster wrote:
           | > Well, let's just say no one is gonna willingly carry
           | hundreds of business analysts salaries on their books while
           | paying the Microsoft 365AI license anyway.
           | 
           | But also thousands of companies are going to be able to
           | implement with a team of 1-10 people what before was only
           | available to organizations of 25 or 50 thousand employees.
        
         | SketchySeaBeast wrote:
         | Well, we're on year 3 of developers being out of jobs in 6
         | months.
         | 
          | Maybe in 5 or 10 years things will change, but at this point I
         | can't see myself being replaced without some sort of paradigm
         | shift, which is not what the current brand of AI improvements
         | are offering - it seems like they are offering iterations of
         | the same thing over and over, each generation slightly more
         | refined or with more ability to generate output based upon its
         | own output - so I see no reason to assume my job is in jeopardy
         | just because it might be at some later date.
         | 
         | Someone needs to tell me what exactly is going to change to
         | cause this sudden shift in what AI can do, because right now I
         | don't see it. It seems to have given people a licence to
         | suggest science fiction be treated like a business plan.
        
           | IshKebab wrote:
           | Nothing fundamental needs to change. It just needs to get
           | smarter and more reliable.
           | 
           | And don't think that, because the crazy "everything will
           | be AI in 6 months" predictions predictably haven't come
           | to pass, it won't _ever_ happen.
           | 
           | I'm old enough to remember the failure of online clothes
           | shopping in the dot-com era. Sometimes things just take a
           | while.
        
             | SketchySeaBeast wrote:
             | If you're old enough to remember dot-com you're old enough
             | to remember when low code and WYSIWYG were both supposedly
             | the death knell for developers.
             | 
             | Sure, it not yet happening doesn't mean it won't ever
             | happen, but it's also no evidence that it will. When the
             | latest apocalypse cult's predicted end of the world
             | fails to arrive, does that make you more or less
             | convinced that the world will end the next time someone
             | yells it? The longer this future developer apocalypse is
             | delayed, the less credible it seems.
        
               | IshKebab wrote:
               | > low code and WYSIWYG were both supposedly the death
               | knell for developers.
               | 
               | I mean... hopefully it is really obvious why those are
               | very different!
        
               | SketchySeaBeast wrote:
               | The tech is different, sure, but they were all attempts
               | to replace developers with business users by hiding the
               | code. Hasn't worked so far because it turns out there's
               | more to being a good developer than just code, but it's
               | still what people try over and over.
        
               | skydhash wrote:
               | Code is ossified design. The whole problem is coming
               | up with the design and dealing with both essential
               | and accidental complexity. Writing the code is
               | routine work. The only difference from other design
               | work is that we often go back and forth, since we can
               | start coding as soon as we want.
               | 
               | It's like how building a house is mostly logistics
               | and repetitive work, while designing a house is
               | complex enough that you need a degree and years of
               | experience before you're trusted to do it well.
        
               | IshKebab wrote:
               | That's not why low-code or WYSIWYG failed. I mean
               | arguably they _didn't_ fail - Squarespace and Notion
               | seem to be doing fine. But the reason they didn't fully
               | replace most developers is that people generally want to
               | do things that are outside the capabilities of the
               | platforms. They aren't generic enough for every use case.
               | 
               | AI is fundamentally different in that regard.
               | 
               | There should be a name for this fallacy - the previous
               | attempts failed therefore _all_ attempts will fail.
        
               | SketchySeaBeast wrote:
               | Sure, same with the "This time is fundamentally
               | different" fallacy. See my previous comment about
               | apocalyptic cults.
               | 
               | Important to note: I'm not saying that no technology
               | will ever fill that gap - GenAI probably would - but
               | I'm not at all convinced that LLMs are the right tech
               | to do everything that needs to be done to replace
               | devs.
        
             | ripe wrote:
             | > Nothing fundamental needs to change. It just needs to get
             | smarter and more reliable.
             | 
             | But these are the precise improvements that require a
             | fundamental change to how these systems work.
             | 
             | So far, no one has figured out how to make AI systems
             | achieve this. And yet, we're supposed to believe that
             | tinkering with LLMs will get us there Real Soon Now.
        
           | georgemcbay wrote:
           | I totally agree as a developer (who sometimes uses LLMs).
           | 
           | They can be a useful tool, but their current capabilities and
           | (I personally believe) their ability to improve indefinitely
           | are wildly overhyped. And the industry as a whole has
           | some sort of blinders on, IMO, about how lumpy progress
           | with them is, and how it kind of goes in both directions:
           | every time someone introduces their grand new model and I
           | play around with it, I'll find some things it is better
           | at than the previous version and some things it is worse
           | at. But number go up, so progress... I guess?
           | 
           | On one hand I can laugh this all off as yet another
           | management fad (and to be clear, I don't think LLM usage is a
           | fad, just the idea that this is going to be world-changing
           | technology rather than just another tool), but what scares me
           | most about the current AI hype isn't whether LLMs will take
           | all of our jobs, but rather the very real damage that is
           | likely to be caused by the cadre of rich and now politically
           | powerful people who are pushing for massive amounts of energy
           | production to power all of this "AI".
           | 
           | Some of them are practically a religious cult in that they
           | believe in human-caused climate change, but still want to
           | drastically ramp up power production to absurd levels by any
           | means necessary while handwaving away the obvious impact this
           | will have by claiming that whatever damage is caused by the
           | ramp up in power production will be solved when the
           | benevolent godlike AI that comes out on the other side will
           | fix it for us.
           | 
           | Yeah, I uh don't see it working out that way. At all.
        
             | AnimalMuppet wrote:
             | Seems to me that, if "make the decisions that will save us"
             | is handed over to AI-in-the-present-form, it would be
             | somewhere between damaging and catastrophic, with or
             | without climate damage from the power generation.
        
           | monknomo wrote:
           | I figure if it can replace devs, any job that types is pretty
           | much at risk, and we will all be in such trouble that there
           | is no point in planning for that scenario
        
             | IshKebab wrote:
             | Coding jobs are maybe one of the easier ones to replace
             | since there's so much public training material and it's
             | fundamentally language based.
             | 
             | But yeah I think I agree. By the time my job is actually
             | fully redundant, society is fucked anyway.
        
         | owebmaster wrote:
         | > It's entirely possible that in 5 or 10 years at least some
         | developers will be fully replaced.
         | 
         | Some were entirely replaced already, like landing page
         | developers. But the number of AI/nocode developers is much
         | bigger and growing fast, so no dev roles were eliminated.
         | That's just more of the same in tech: keeping up with it.
        
           | Izkata wrote:
           | Landing page developers were still a thing? I thought they
           | were replaced decades ago with FrontPage and Dreamweaver.
           | 
           | (only a bit /s)
        
       | analog31 wrote:
       | >>> Here's what the "AI will replace developers" crowd
       | fundamentally misunderstands: code is not an asset--it's a
       | liability. Every line must be maintained, debugged, secured, and
       | eventually replaced. The real asset is the business capability
       | that code enables.
       | 
       | This could explain the cycle by itself. Dynamic equations often
       | tend to oscillate. Anything that temporarily accelerates the
       | production of code imposes a maintenance cost later on.
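       | 
       | For concreteness, a toy sketch of that delayed feedback (my
       | own constants, not from the article): code shipped now adds
       | maintenance load a few cycles later, so the capacity left
       | for new work swings instead of settling smoothly.
       | 
       |   # Python toy model, illustrative only.
       |   R, K = 10.0, 0.5        # raw capacity; upkeep per unit of code
       |   KEEP, DELAY = 0.95, 4   # kept fraction; lag in steps
       |   history = [0.0] * DELAY
       |   code = 0.0
       |   for t in range(30):
       |       upkeep = K * history[0]      # driven by older code
       |       new = max(R - upkeep, 0.0)   # what's left for features
       |       code = KEEP * code + new
       |       history = history[1:] + [code]
       |       print(t, round(code, 1), round(new, 1))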
        
       | JimDabell wrote:
       | You can go back further than the article describes as well. Back
       | in the 90s the same sorts of articles were written about how
       | WYSIWYG editors like FrontPage, Dreamweaver, etc. were going to
       | make web developers obsolete.
        
         | 0x445442 wrote:
         | Next hot YC startup... Training AI on UML diagrams :)
        
         | ohadron wrote:
         | Took a while but Wix / Webflow / SquareSpace / Wordpress did
         | end up automating a bunch of work.
        
           | JimDabell wrote:
           | They did, but do you think there are more or fewer web
           | development jobs now compared with the 90s?
        
             | tonyedgecombe wrote:
             | There is a whole lot of brochure type web work that has
             | disappeared, either to these site builders or Facebook. I
             | don't know what happened to the people doing that sort of
             | work but I would assume most weren't ready to write large
             | React apps.
        
               | JimDabell wrote:
               | Why are you assuming that? How do you think all the new
               | React jobs were filled? React developers don't magically
               | spring into existence with a full understanding of React
               | out of nowhere, they grow into the job.
        
             | ohadron wrote:
             | The web developer / web page ratio in 2025 is for sure way
             | lower than it was in 1998.
        
               | JimDabell wrote:
               | Why should anybody care about that metric? People care
               | about jobs.
        
             | usersouzana wrote:
             | More, but that doesn't say anything about the future.
        
         | dnpls wrote:
         | Same thing for designers, "anyone can create a website now with
         | squarespace/webflow/framer" - cue the template markets, where
         | the templates are made by - you guessed it - designers...
        
         | commandlinefan wrote:
         | In the 70's, articles were written about how business
         | people could use high-level languages like SQL to make
         | developers obsolete. In the 60's, it was COBOL.
        
       | kookamamie wrote:
       | > The most valuable skill in software isn't writing code, it's
       | architecting systems.
       | 
       | And the most valuable skill in defending a stance is moving goal
       | posts.
        
       | joshuakelly wrote:
       | Read this, and then compare it to Daniel Kokotajlo's "What 2026
       | Looks Like" published 4 years ago.
       | 
       | This time it really _is_ different, and we're looking at a world
       | totally saturated with an abundance of bits. This will not be a
       | simple restructuring of labor markets but something very
       | significant and potentially quite severe.
       | 
       | https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
        
       | elzbardico wrote:
       | One thing that I observed is that my company now strongly leans
       | "build" in all the "build vs buy" decisions. And it is not a tech
       | company. And yes, AI is not magical, I am working 10 hours a day
       | because of that, even with the non-negligible help from AI.
        
         | tonyedgecombe wrote:
         | >One thing that I observed is that my company now strongly
         | leans "build" in all the "build vs buy" decisions.
         | 
         | That's interesting, I remember talking to the CTO of a big
         | American bank in the nineties who told me the opposite. They
         | wanted to buy rather than build.
        
           | elzbardico wrote:
           | Note the qualifier: now. Because of AI my company is less
           | inclined to buy and less risk averse to build.
        
       | gwbas1c wrote:
       | > "Why hire expensive developers when anyone can build an app?"
       | 
       | > The result wasn't fewer developers
       | 
       | Makes me wonder if the right thing to do is to get rid of the
       | non-developers instead?
        
         | cromulent wrote:
         | Yes. LLMs are good at producing plausible statements and
         | responses that radiate awareness, consideration, balance, and
         | at least superficial knowledge of the technology in question.
         | Even if they are non-committal, indecisive, or even inaccurate.
         | 
         | In other words, they are very economical replacements for many
         | non-developers.
        
           | ok123456 wrote:
           | "Rewrite this as a 'fully-loaded' use case as defined by
           | Cockburn: ..."
        
         | booleandilemma wrote:
         | It's been my experience that the people running the show at
         | these companies aren't developers themselves, but non-
         | developers. Those people are never going to want to get rid of
         | themselves, no matter how little they bring to the table.
        
       | catigula wrote:
       | I love how I instantly know if something is written with GPT-4o
       | now.
       | 
       | >What actually happens isn't replacement, it's transformation.
       | 
       | "Statement --/, negation" pattern is the clearest indicator of
       | ChatGPT I currently know.
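       | 
       | If you want to grep for it, a rough toy heuristic (mine, not
       | a serious detector):
       | 
       |   import re
       |   # Flags the "isn't X, it's Y" construction described above.
       |   p = re.compile(r"\b(isn't|not)\s+\w+[^.;]*,\s*it's\s+", re.I)
       |   s = ("What actually happens isn't replacement,"
       |        " it's transformation.")
       |   print(bool(p.search(s)))  # True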
        
         | bitwize wrote:
         | I think ChatGPT learned this construction _because_ a lot of
         | human pundits /"thought leaders" write like that. Of course
         | whether it comes from one of these or from ChatGPT, it's of
         | about equal value either way...
        
           | catigula wrote:
           | It's too trite for me, too much effort to construct for
           | little pay-off. I feel like it was rewarded for "cleverness"
           | in this regard but it's cleverness perceived by RLHFers.
        
       | sunegg wrote:
       | The issue with these AI systems is how incredibly well they
       | function in isolated circumstances, and how much they crash and
       | burn when they have to be integrated into a full tech stack (even
       | if the tech stack is also written by the same model).
       | 
       | The current generation of generative AI based on LLMs simply
       | won't be able to properly learn to code large code bases, and
       | won't make correct evaluative choices about products. Without
       | being able to reason and evaluate objectively, you won't be a
       | good "developer" replacement. It's similar to asking LLMs
       | about (complex) integrals: they will often end their answer
       | with "solution proved by derivation", not because they have
       | actually done the derivation (they will end with this on
       | incorrect integrals too), but because that's what their
       | training data does.
        
       | readthenotes1 wrote:
       | The author seems to miss the point that code being a liability
       | has not affected the amount that is written by people who don't
       | care.
       | 
       | The same day that a tutorial from a Capgemini consultant on
       | how to write code using AI appeared here, I heard from a
       | project manager who has AI write code that is then reviewed
       | by the human project team--because that is far easier.
       | 
       | I expect most offshoring to go the way of the horse and
       | buggy, because it may be easier to explain the requirements
       | to Cursor, and the turnaround time is much faster.
        
       | hcfman wrote:
       | Man! Us people in Europe would love this double-the-salary
       | scenario to apply here.
        
         | ActionHank wrote:
         | Have you tried owning guns and not having health care?
        
           | mdaniel wrote:
           | I know what you're getting at, but the way you phrased it
           | makes it sound like a more terminal outcome
        
       | neallindsay wrote:
       | The picture at the top of the article seems to be a bad (AI-
       | generated, I assume) illustration of the Gartner Hype cycle.
       | There are supposed to be five stages, but the text at the bottom
       | doesn't line up with the graph because it is missing the "peak of
       | inflated expectations" while the graph seems to be missing the
       | "plateau (of) productivity" stage.
        
         | yellow_lead wrote:
         | And it's like 1.7MB
        
       | hermitcrab wrote:
       | >The NoCode/LowCode Revolution
       | 
       | Visual programming (NoCode/LowCode) tools have been very
       | successful in quite a few domains. Animation, signal processing,
       | data wrangling etc. But they have not been successful for general
       | purpose programming, and I don't think they ever will be. More on
       | this perennial HN topic at:
       | 
       | https://successfulsoftware.net/2024/01/16/visual-vs-text-bas...
        
       | eduction wrote:
       | Kind of funny that the things that time and again save
       | developers, especially expensive US/California based ones, are
       | those they tend to hate - meetings, writing prose, and customer
       | service.
       | 
       | Writing code should almost be an afterthought to understanding
       | the problem deeply and iteratively.
        
       | gherkinnn wrote:
       | > The most valuable skill in software isn't writing code, it's
       | architecting systems.
       | 
       | I don't quite agree. I see the skill in translating the real
       | world with all its inconsistencies into something a computer
       | understands.
       | 
       | And this is where all the no/lo-code platforms fall apart. At
       | some point that translation step needs to happen and most people
       | absolutely hate it. And now you hire a dev anyway. As helpful as
       | they may be, I haven't seen LLMs do this translation step any
       | better.
       | 
       | Maybe there is a possibility that LLMs/AI take the moron out
       | of the "extremely fast moron" that computers are, in ways I
       | haven't yet seen.
        
         | cheema33 wrote:
         | If you look at the history of programming languages, we have
         | been moving in the direction of "natural language" over time.
         | We started at 1s and 0s. And then moved up to assembly
         | language. Which I imagine was considered a higher level
         | language back then.
         | 
         | I suspect that if current trends continue, today's higher level
         | languages will eventually become lower level languages in the
         | not so distant future. It will be less important to know them,
         | just like it is not critical to know assembly language to write
         | a useful application today.
         | 
         | System architecture will remain critical.
        
           | skydhash wrote:
           | We have moved towards higher abstractions, not natural
           | languages. It looks like natural language because we name
           | those abstractions from natural languages, but their
           | semantics can be quite different.
           | 
           | Building software was always about using those abstractions
           | to solve a problem. But what clients give us are mostly
           | wishes and wants. We turn those into a problem, then we solve
           | that problem. It goes from "I want $this" (requirement) to
           | "How can $this be done?" (analysis), then to "$this can be
           | done that way" (design). We translate the last part into
           | code. But there's still "Is $this done correctly?" (answered
           | by testing) and "$this is no longer working" (maintenance).
           | 
           | So we're not moving to natural language, because the whole
           | point of code is to ossify design. We're moving towards
           | better representation of common design elements.
        
       | nailer wrote:
       | > The sysadmins weren't eliminated; they were reborn as DevOps
       | engineers with fancy new job titles and substantially higher
       | compensation packages.
       | 
       | God I felt like I was the only one that noticed. People would say
       | 'DevOps can code' as if that made DevOps a new thing, but being
       | able to automate anything was a core principle of the SAGE-style
       | systems admin in the 90s / early 2000s.
        
       | crakhamster01 wrote:
       | I'm increasingly certain that companies leaning too far into the
       | AI hype are opening themselves up to disruption.
       | 
       | The author of this post is right, code is a liability, but AI
       | leaders have somehow convinced the market that code generation on
       | demand is a massive win. They're selling the industry on a future
       | where companies can maintain "productivity" with a fraction of
       | the headcount.
       | 
       | Surprisingly, no one seems to ask (or care) about how product
       | quality fares in the vibe code era. Last month Satya Nadella
       | famously claimed that 30% of Microsoft's code was written by AI.
       | Is it a coincidence that GitHub has been averaging 20 incidents a
       | month this year?[1] That's basically once a work day...
       | 
       | Nothing comes for free. My prediction is that companies over-
       | prioritizing efficiency through LLMs will pay for it with
       | quality. I'm not going to bet that this will bring down any
       | giants, but not every company buying this snake oil is Microsoft.
       | There are plenty of hungry entrepreneurs out there that will
       | swarm if businesses fumble their core value prop.
       | 
       | [1] https://www.githubstatus.com/history
        
         | cheema33 wrote:
         | > I'm increasingly certain that companies leaning too far into
         | the AI hype are opening themselves up to disruption.
         | 
         | I am in the other camp. Companies ignoring AI are in for a bad
         | time.
        
           | crakhamster01 wrote:
           | Haha, I tried to couch this by adding "too far", but I agree.
           | Companies should let their teams try out relevant tools in
           | their workflows.
           | 
           | My point was more of a response to the inflated expectations
           | that people have about AI. The current generation of AI tech
           | is rife with gotchas and pitfalls. Many companies seem to be
           | making decisions with the hope that they will out-innovate
           | any consequences.
        
           | DaSHacka wrote:
           | How so? Not enough art slop logos so they don't have to pay
           | an artist? Other than in maximizing shareholder return I fail
           | to see how foregoing AI is putting them "behind".
           | 
           | AI, especially for programming, is essentially no better than
           | your typical foreign offshore programming firm, with
           | nonsensical comments and sprawling conflicting code styles.
           | 
           | If it eventually becomes everything the proponents say it
           | will, they could always just _start_ using it more.
        
         | sltr wrote:
         | I agree with this. "Companies which overuse AI now will inherit
         | a long tail of costs" [1]
         | 
         | [1] AI: Accelerated Incompetence.
         | https://www.slater.dev/accelerated-incompetence/
        
       | overflow897 wrote:
       | I think articles like this rest on the big assumption that
       | progress is going to plateau. If that assumption is true,
       | then sure.
       | 
       | But if it's false, there's no saying you can't eventually have
       | an AI model that can read your entire AWS/infra account, look at
       | logs, financials, look at docs and have a coherent picture of an
       | entire business. At that point the idea that it might be able to
       | handle architecture and long term planning seems plausible.
       | 
       | Usually when I read about developer replacement, it's with the
       | underlying assumption that the agents/models will just keep
       | getting bigger, better and cheaper, not that today's models will
       | do it.
        
         | layer8 wrote:
         | There is a high risk that the systems that AIs build, and their
         | reasoning, will become inscrutable with time, as if built by
         | aliens. There is a huge social aspect to software development
         | and the tech stack and practices we have, that ensures that
         | (despite all disagreements) we as developers are roughly on the
         | same page as to how to go about contemporary software
         | development (which now for example is different than, say, 20
         | or 40 years ago).
         | 
         | When AIs are largely on their own, their practices will evolve
         | as well, but without there being a population of software
         | developers who participate and follow those changes in concepts
         | and practices. There will still have to be a smaller number of
         | specialists who follow and steer how AI is doing software
         | development, so that the inevitable failure cases can be
         | analyzed and fixed, and to keep the AI way of doing things on a
         | track that is still intelligible to humans.
         | 
         | Assuming that AI will become that capable, this will be a long
         | and complex transition.
        
       | mediumsmart wrote:
       | >The most valuable skill in software isn't writing code, it's
       | architecting systems.
       | 
       | I keep saying that - _AI is the brick maker_ - you build the
       | house. And it's your decision to build a house that _only_
       | needs bricks in the right place ...
        
       | bob1029 wrote:
       | I strongly agree with the architecture piece.
       | 
       | Seeing the difference in complexity between a distributed
       | "monolith" and an actual one makes me wonder how serious some of
       | us are about serving the customer. The speed with which you can
       | build a Rails or PHP app makes everything proposed since 2016
       | seem kind of pointless from a business standpoint. Many SaaS
       | B2B products could be refactored into a single
       | PowerShell/bash script.
       | 
       | It can take a _very_ firm hand to guide a team away from the
       | shiny distractions. There is no way in hell an obsequious AI
       | contraption will be able to fill this role. I know for a fact the
       | LLMs are guiding developers towards more complexity because I
       | have to constantly prompt things like "do not use 3rd party
       | dependencies" and "demonstrate using pseudocode first" to avoid
       | getting sucked into npm Narnia.
        
       | throwawayobs wrote:
       | I'm old enough to remember when you wouldn't need to hire
       | expensive developers anymore because object-oriented programming
       | would make it possible to have semi-skilled employees assemble
       | software from standardized parts. They even talked about the
       | impending "software factory".
        
         | cheema33 wrote:
         | The Javascript ecosystem is quite large. And does provide a
         | massive library of "standardized parts". To some degree that
         | dream has been realized. A semi-skilled person can build simple
         | applications in a very short amount of time. This was not
         | possible in the early days.
         | 
         | However, if the tooling has improved 10x, then the product
         | complexity has gone up 100x. Nowadays, you can one-shot a
         | Tetris game using an LLM. Back in the day this would take
         | weeks, if not months. But now, nobody is impressed by a
         | Tetris-level game.
        
           | prmph wrote:
           | And yet no LLM can one-shot a simple markdown viewer with
           | automatic indentation based on section levels. I tried.
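           | 
           | To be concrete about the spec (a minimal Python sketch
           | of the behavior I mean, mine and simplified - this toy
           | is easy; the full task is what LLMs fumbled):
           | 
           |   import re, sys
           | 
           |   def render(md):
           |       depth = 0
           |       for line in md.splitlines():
           |           m = re.match(r"(#+)\s+(.*)", line)
           |           if m:  # a heading sets the current depth
           |               depth = len(m.group(1))
           |               print("  " * (depth - 1) + m.group(2))
           |           else:  # body indents under its section
           |               print("  " * depth + line)
           | 
           |   render(sys.stdin.read())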
        
       | vinceguidry wrote:
       | This article makes a fundamental mistake where the author thinks
       | that business values quality. Business has never valued quality.
       | Customers can value quality, but business only values profit
       | margins. If customers will only buy quality, then that's what
       | business will deliver. But customers don't value quality either,
       | most of the time. They value bang-for-buck. They'll buy the
       | cheapest tools on Amazon and happily vibe code their way into a
       | hole, then throw the broken code out and vibe code some more.
       | 
       | The only people that value quality are engineers. Any predictions
       | of the future by engineers that rely on other people suddenly
       | valuing quality can safely be ignored.
        
         | philosopher1234 wrote:
         | Businesses can value quality in cost centers because it can
         | bring costs down
        
           | vinceguidry wrote:
           | Individual managers can. The business as a whole is looking
           | at a much bigger picture. But what managers value is
           | throughput, not necessarily unit economics. They'll accept
           | faster delivery at worse unit economics.
        
         | prmph wrote:
         | I don't know where you get the impression that customers don't
         | value quality. They value quality, a lot.
         | 
         | If customers didn't value quality, then every startup would
         | have succeeded, just by providing the most barely functioning
         | product at the cheapest prices, and making enough revenue by
         | volume.
        
           | vinceguidry wrote:
           | > startup would have succeeded, just by providing the most
           | barely functioning product at the cheapest prices, and making
           | enough revenue by volume.
           | 
           | You've just described hustle culture. And yes, it does lead
           | to business success. Engineers don't like hustle.
        
             | prmph wrote:
             | Yep, but most hustles fail, the number of startups that
             | succeed is like, what, 5 or 10%?
        
               | vinceguidry wrote:
               | Hustles don't fail, why would you think they would?
               | Customers love hiring hustlers. Startups fail because
               | they want software economics, not hustle economics. If
               | you're willing to accept hustle economics, you'll never
               | run out of work.
        
               | prmph wrote:
               | Weird arguments you are making. I'm talking about startup
               | business.
               | 
               | > Hustles don't fail
               | 
               | Then by definition there should be few engineers with
               | financial problems, right? Almost every engineer wants to
               | succeed with their side hustle
        
               | vinceguidry wrote:
               | No engineer wants to hustle.
               | 
               | You're still only thinking in terms of startups. I'm
               | thinking about landscapers and ticket scalpers. No
               | engineer is doing that. But if you were willing to, you'd
               | make money.
        
               | kerkeslager wrote:
               | You're not communicating well--it's unclear what you mean
               | by "hustle". It's also pretty unlikely that speaking in
               | absolutes ("no engineer") is correct here.
               | 
               | It sounds a lot like you're saying "all engineers are
               | lazy" and that's just obviously wrong.
        
               | vinceguidry wrote:
               | Using the words of the person I was discussing with.
               | 
               | > providing the most barely functioning product at the
               | cheapest prices, and making enough revenue by volume
               | 
               | This is hustling.
        
               | kerkeslager wrote:
               | And if that's what you're saying, you're unambiguously
               | wrong on your overall point.
               | 
               | Plenty of engineers are building barely-functional
               | products as fast (cheap, because time is money) as can be
               | and doing a ton of volume. The entire Bangalore
               | contractor scene was built this way, as well as a ton of
               | small Western contractor shops. You honestly think _no_
               | engineers understand undercutting competition? Really?
        
               | vinceguidry wrote:
               | Some engineers are alive to the hustle game. But if
               | you're focused on quality you're not hustling.
               | 
               | I'm not sure though I'd call business folks with software
               | products that are hustling engineers. Different mindset.
        
               | sanderjd wrote:
               | I honestly don't understand what point this comment is
               | trying to make.
        
         | ayrtondesozzla wrote:
         | I'd change that line near the end to:
         | 
         | The only people that _often_ value quality are engineers.
         | 
         | I might even add that the overwhelming majority of engineers
         | are happy to sacrifice quality - and ethics generally - when
         | the price is right. Not all, maybe.
         | 
         | It's a strange culture we have, one which readily produces
         | engineer types capable of complex logic in their work, yet
         | for whom "the overarching concern of business is always
         | profit" seems to sometimes cause difficulty.
        
         | ndiddy wrote:
         | I think quality can be a differentiator in some cases. When the
         | iPhone came out, there were other phones running Windows Phone
         | and Symbian that had more features and cost less. However, the
         | iPhone was successful anyway because it ran smoother and had a
         | more polished UI than its competitors.
        
         | whatnow37373 wrote:
         | People will start caring when their devices start bricking,
         | loading websites takes 12sec and registering for medicaid is
         | only possible between 9 and 11AM and then only if lucky.
         | 
         | We are in this weird twilight zone where everything is still
         | relatively high quality and stuff sort of works, but in a few
         | decades shit will start degrading faster than you can say
         | "OpenAI".
         | 
         | Weird things will start happening: government tax systems
         | that can't be upgraded while consuming billions,
         | infrastructure failing for unknown reasons, simple
         | non-powered or low-power devices that are now ubiquitous
         | becoming rare. Everything will require subscriptions and
         | internet access
         | and nothing will work right. You will have to talk to LLMs all
         | day.
        
           | yoyohello13 wrote:
           | I'm convinced the Microsoft Teams team has gone all in on
           | vibe coding. I have never seen so many broken features
           | released in such a short time frame as the last couple
           | months. This is the future as more companies go all in on AI
           | coding.
        
             | herpdyderp wrote:
             | Nothing really new then, just faster enshittification
             | timelines.
        
           | chairhairair wrote:
           | If the current tech plateaus (but continues to come down in
           | price, as expected) then this is a good prediction.
           | 
           | But, then there will be a demand for "all-in-one" reliable
           | mega apps to replace everything else. These apps will usher
           | in the megacorp reality William Gibson described.
        
         | sanderjd wrote:
         | This doesn't resonate with me at all.
         | 
         | First of all, all the most successful software products have
         | had very high quality. Google search won because it was good
         | and fast. All the successful web browsers work incredibly well.
         | Ditto the big operating systems. The iPhone is an amazing
         | product. Facebook, Instagram, TikTok; whatever else you think,
         | these are not buggy or sluggish products, (especially in their
         | prime). Stripe grew by making a great product. The successful
         | B2B products are also very high quality. I don't have much love
         | for Databricks, but it works well. I have found Okta to be
         | extremely impressive. Notion works really well. (There are some
         | counterexamples: I'm not too impressed by Rippling, for
         | instance.)
         | 
         | Where are all these examples of products that have succeeded
         | despite not valuing quality?
        
           | oldandboring wrote:
           | Thanks for calling out Rippling. Pretty poor experience for
           | me as well.
        
           | jajko wrote:
           | Sorry, but Facebook a "high quality product"? It has been
           | a bug-infested shitshow from the beginning to this day,
           | across multiple computers, spanning more than a decade
           | and a half. Not just for me. Literally their only value
           | is the social graph, which they have by the luck of being
           | first, nothing more.
           | 
           | These days when the site crashes I welcome it as a gentle
           | reminder not to spend there even the 1 minute I sometimes
           | do. Anyway, it's now mostly fake AI-generated ads for
           | obscure groups I have zero interest in. I keep reporting
           | them to FB, but even for outright fraud or scams FB comes
           | back to me with a resolution in maybe 2% of the cases. EU
           | on you, you cheap scammers.
           | 
           | But in the past I used it for e.g. photo sharing with
           | family and friends, since I was super active in
           | adventuring and travelling around the world. Up to 10k
           | photos over a decade.
           | 
           | Photo album uploads randomly failed, or uploaded some
           | subset, or some photos twice. On stable fiber optic,
           | while Flickr or Google Photos never ever had such issues.
           | Cannot comment, some internal gibberish error. Comment
           | posted twice. Page reloads to an error. Links to profiles
           | or photos go to an empty page. Sometimes even the main
           | page is just an empty feed or some internal error. I saw
           | the sentence "Something went wrong" hundreds or maybe
           | even thousands of times; it became such a classic 500
           | variant. And so on and on, I don't keep a list around.
           | Always on Firefox with uBlock Origin.
           | 
           | I would be properly ashamed to ever be professionally
           | linked with what is, by a huge margin, the worst
           | technical product I ever came across. That is, if I could
           | somehow ignore what a cancer to society I would be
           | helping to build, but that would require advanced
           | sociopathic mental tricks on myself that I am simply
           | neither capable of nor willing to do.
           | 
           | Nah, FB doesn't deserve to be mentioned in the same
           | category as the rest, on any reasonable basis.
        
           | enraged_camel wrote:
           | >> Where are all these examples of products that have
           | succeeded despite not valuing quality?
           | 
           | Salesforce. Quickbooks. Any Oracle product.
        
             | simoncion wrote:
             | Blackboard's software and systems.
             | 
             | Fucking _Windows_.
        
         | caseysoftware wrote:
         | > _Business has never valued quality. Customers can value
         | quality, but business only values profit margins._
         | 
         | I think you're really close, with one nuance.
         | 
         | Business does not value _CODE quality_. Their primary goal is
         | to ship product quickly enough that they can close customers.
         | If you're in a fast-moving or competitive space, quality
         | matters more because you need to ship differentiating features.
         | If the space is slow moving, not prone to migration, etc, then
         | the shipping schedule can be slower and quality is less
         | important.
         | 
         | That said, customers care about "quality", but they likely
         | define it _very_ differently... primarily as "usability".
         | 
         | They don't care about the code behind the scenes, what
         | framework you used, etc as long as the software a) does what
         | they want and b) does it "quick enough" in their opinion.
        
           | kerkeslager wrote:
           | > They don't care about the code behind the scenes, what
           | framework you used, etc as long as the software a) does what
           | they want and b) does it "quick enough" in their opinion.
           | 
           | Business folks love to say this, but a lot of this time this
           | is glossing over a pretty inherent coupling between code
           | quality and doing what users want quick enough. I've worked
           | on a lot of projects with messy code, and that mess _always_
           | translated into problems which users cared about. There isn't
           | a magical case where the code is bad and the software
           | is great for the users--that's not a thing that exists, at
           | least not for very long.
        
         | EFreethought wrote:
         | Customers value quality when quality is available.
         | 
         | If someone in the supply chain before you cares more about
         | something being cheap, then that is all you get.
        
         | mattgreenrocks wrote:
         | A few facts about my favorite quality-centric company:
         | https://www.fractalaudio.com/
         | 
         | They build hardware-based amp/pedal modelers (e.g. virtual
         | pedalboards + amp) for guitars that get a very steady stream of
         | updates. From a feel and accuracy perspective, they outcompete
         | pretty much everyone else, even much bigger companies such as
         | Line 6 (part of Yamaha). Pretty small company AFAIK, maybe less
         | than 20 people or so. Most of the improvements stem from the
         | CEO's ever-improving understanding of how to model what are
         | very analog systems accurately.
         | 
         | They do almost everything you shouldn't do as a startup:
         | 
         | * mostly a hardware company
         | 
         | * direct sales instead of going through somewhere like
         | Sweetwater
         | 
         | * they don't pay artists to endorse them
         | 
         | * no subscriptions
         | 
         | * lots of free, sometimes substantial updates to the modeling
         | algorithms
         | 
         | * didn't use AI to build their product quickly
         | 
         | Quality is how they differentiate themselves in a crowded
         | market.
         | 
         | This isn't an edge case, either. This is how parts of the
         | market function. Not every part of every market is trapped in a
         | race to the bottom.
        
           | vinceguidry wrote:
           | You love to see it. Nothing beats a labor of love.
        
       | octo888 wrote:
       | At my company they're doubling down: forcing us to use AI,
       | with product people and managers suddenly cosplaying as
       | architects and senior developers, attempting to muscle in on
       | developers'/architects' roles - i.e. trying to take away the
       | very thing the developers would have more time for if the AI
       | tools achieved their aims. And to triple down, they're
       | offshoring.
       | 
       | Which makes it really obvious their aim is to get rid of
       | (expensive) developers, not to unlock our time to enable us
       | to work on higher things.
        
         | cheema33 wrote:
         | If things are going south as fast as you say they are, then you
         | don't have time to complain. You need to get ahead of this AI
         | beast. Ignore it at your own peril. It takes a while to get
         | good at it. If you are let go from your current job, you will
         | need strong AI skills to land your next gig.
        
       | 1vuio0pswjnm7 wrote:
       | The Myth of Developer Relevance
       | 
       | Can it persist in times when borrowing money is not free
       | (nonzero interest rates)?
        
       | janalsncm wrote:
       | I think for the most part the layoffs in software are layoffs
       | because of uncertainty, not because of technology. They are being
       | justified after the fact with technobabble. If there wasn't
       | economic uncertainty companies would gladly accept the extra
       | productivity.
       | 
       | Think about it this way: five years ago plenty of companies hired
       | more SWEs to increase productivity, gladly accepting additional
       | cost. So it's not about cost imo.
       | 
       | I might be wrong, but perhaps a useful way to look at all of this
       | is to ignore stated reasons for layoffs and look at the companies
       | themselves.
        
       | ogogmad wrote:
       | I think that as programmer productivity increases, demand for
       | programmers also increases, but only INITIALLY. However, if
       | productivity improves too much, and programming gets automated
       | too much, then demand for programmers will begin to drop very
       | rapidly. It's non-linear.
        
       | hintymad wrote:
       | > The NoCode/LowCode Revolution
       | 
       | I think this time there is a key difference: AI coding is fully
       | embedded into a software dev's workflow, and it indeed cuts loads
       | of work for at least some of the projects and engineers. In
       | contrast, few, if any, engineers would go to a No-Code/Low-Code
       | tool and then maintain its output in their repo.
       | 
       | The impact would be that we will need fewer engineers as our
       | productivity increases. That alone may not be enough to
       | change the curve of supply and demand. However, when this is
       | combined with the current market condition of weak business
       | growth, the curve will change: the fewer new problems we
       | have, the more repetitive solutions we will get, the more
       | repetitive solutions we will work on, the more accurate the code
       | generated by AI will be, and therefore the less code we will need
       | a human to write.
       | 
       | So, this time it will not be about AI replacing engineers, but
       | about AI replacing enough repetitive work that we will need fewer
       | engineers.
        
       | brunoborges wrote:
       | The amount of em dashes in this article is quite telling...
       | 
       | I agree with the core of the idea though, and I have written
       | about it as well (https://www.linkedin.com/posts/brunocborges_ai-
       | wont-eliminat...).
        
         | fellowniusmonk wrote:
         | It reads very much like the positive -- bloviation -- of AI.
         | 
         | Where the author just asked gpt to "tell me why AI can't
         | replace my job".
         | 
         | So -- little -- content. This -- is -- slop.
         | 
         | *I mean the whole first section is the most AI sounding thing
         | I've ever seen, even the attempt at a punchy opening is
         | obvious:
         | 
         | The executives get excited. The consultants circle like sharks.
         | PowerPoint decks multiply. Budgets shift.
         | 
         | And then reality sets in.
        
       | holtkam2 wrote:
       | I loved this article and it is the strongest argument I've ever
       | heard for "why I shouldn't be freaking out about the future of my
       | engineering career".
       | 
       | Now just for the heck of it I'll attempt to craft the strongest
       | rebuttal I can:
       | 
       | This blog misses the key difference between AI and all other
       | technologies in software development. AI isn't merely good at
       | writing code. It's good at thinking. It's not going to merely
       | automate software development, it's going to automate knowledge
       | work. You as a human have no place in a world where your brain is
       | strictly less capable in all realms of decision-making compared to
       | machines.
        
       | legulere wrote:
       | I don't think the higher pay is true. There are simply fewer
       | people proficient in a new technology at the beginning. You're
       | simply seeing classic supply and demand play out. After a
       | while things will calm down again.
       | 
       | I think a better comparison is to Jevons Paradox. New
       | technologies make developers more efficient and thus cheaper.
       | This increases demand more than what is gained by the efficiency
       | increases.
       | 
       | I don't see us anytime soon running out of things that are worth
       | automating, especially if the cost for that continues to drop.
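       | 
       | Back-of-envelope version (my own toy numbers, assuming
       | constant-elasticity demand): if tooling halves the cost of a
       | feature and elasticity is above 1, total developer-hours
       | demanded go up, not down.
       | 
       |   e = 1.4                # assumed price elasticity of demand
       |   cost = 0.5             # a feature now costs half as much
       |   demand = cost ** -e    # ~2.64x more features wanted
       |   hours = demand * cost  # ~1.32x more dev-hours overall
       |   print(round(demand, 2), round(hours, 2))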
        
       | moralestapia wrote:
       | >It's architecting systems. And that's the one thing AI can't do.
       | 
       | Weak conclusion as AI already does that quite well.
        
       | FrameworkFred wrote:
       | I agree with some of the article. I agree that code is a
       | liability that's distinct from the asset that the code is part
       | of. It's like tires on a car, they're liability-like whereas the
       | car can be thought of as an asset.
       | 
       | But AI can do some architecting. It's just not really the sort of
       | thing where an unskilled person with a highly proficient LLM is
       | going to be producing a distributed system that does anything
       | useful.
       | 
       | It seems to me that the net effect of AI will be to increase the
       | output of developers without increasing the cost per developer.
       | Effectively, this will make software development cheaper. I
       | suppose it's possible that there is some sort of peak demand for
       | software that will require fewer developers over time to meet,
       | but, generally, when something becomes cheaper, the demand for
       | that thing will tend to increase.
       | 
       | I think the rumors of our demise are overblown.
        
       | bahmboo wrote:
       | Funny how no one has commented on the graphic being wrong. The
       | enlightenment and disillusionment labels are swapped.
        
       | mpweiher wrote:
       | "Since FORTRAN should virtually eliminate coding and
       | debugging..." -- FORTRAN Preliminary report, 1954
       | 
       | http://www.softwarepreservation.org/projects/FORTRAN/BackusE...
        
         | ogogmad wrote:
         | You can be wrong a lot and then suddenly be right. Look up
         | "being a Turkey": https://en.wikipedia.org/wiki/Turkey_illusion
        
         | cauliflower99 wrote:
         | This is brilliant.
        
       | wayeq wrote:
       | > code is not an asset--it's a liability.
       | 
       | Tell that to Coca-Cola... whose most valuable asset is literally
       | an algorithm
        
       | chasing wrote:
       | For vibe coding to replace software engineering vibe coding will
       | have to become... software engineering.
        
       | protocolture wrote:
       | AI is an unlimited line of credit at the bank of technical debt.
        
       ___________________________________________________________________
       (page generated 2025-05-27 23:01 UTC)