[HN Gopher] Ask HN: SWEs how do you future-proof your career in ...
       ___________________________________________________________________
        
       Ask HN: SWEs how do you future-proof your career in light of LLMs?
        
       LLMs are becoming a part of the software engineering career.
       The more I speak with fellow engineers, the more I hear that
       some of them are either using AI to help them code, or feeding
       entire projects to AI and letting the AI code, while they do
       code review and adjustments.  I didn't want to believe it, but
       I think it's here.  And even arguments like "feeding
       proprietary code" will eventually be solved by companies
       hosting their own isolated LLMs as they become better and
       hardware becomes more available.  My prediction is that junior
       to mid level software engineering will disappear mostly, while
       senior engineers will transition to be more of a guiding hand
       to LLMs output, until eventually LLMs will become so good, that
       senior people won't be needed any more.  So, fellow software
       engineers, how do you future-proof your career in light of the
       inevitable LLM takeover?  --- EDIT ---  I want to clarify
       something, because there seems to be a slight misunderstanding.
       A lot of people have been talking about SWE being not only
       about code, and I agree with that.  But it's also easier to
       sell this idea to a young person who is just starting out in
       this career.  And while I want this Ask HN to be helpful to
       young/fresh engineers as well, I'm more interested in getting
       help for myself, and for the many others who are in a similar
       position.  I have almost two decades of SWE experience.  But
       despite that, I seem to have missed the party where they told
       us that "coding is just a means to an end", and only realized
       it in the past few years.  I bet there are people out there who
       are in a similar situation.  How can we future-proof our
       careers?
        
       Author : throwaway_43793
       Score  : 155 points
       Date   : 2024-12-16 14:11 UTC (8 hours ago)
        
       | ldjkfkdsjnv wrote:
       | I'm working as if in 2-3 years the max comp I will be able to get
       | as a senior engineer will be 150k. And it will be hard to get
       | that. It's not that it will disappear, it's that the bar to
       | produce working software will go way down. Most knowledge and
       | skill sets will be somewhat commoditized.
       | 
       | Also pretty sure this will make outsourcing easier since foreign
       | engineers will be able to pick up technical skills more easily
        
         | code_for_monkey wrote:
         | yeah I think you're correct, I see a ceiling coming quickly
         | for senior software engineers. On the other hand I think a
         | lot of junior positions are going to get removed, and for a
         | while having the experience to be at a senior level will be
         | rarer. So, there's that.
        
         | allan_s wrote:
         | > Also pretty sure this will make outsourcing easier since
         | foreign engineers will be able to pick up technical skills
         | more easily
         | 
         | Most importantly, it will be easier to have your code
         | comments, classes etc. translated into English.
         | 
         | e.g. I used to work in a country where the native language is
         | not related to English (i.e. not Spanish, German, French
         | etc.) and it was incredibly hard for students and developers
         | to name things in English; instead it was more natural to
         | name things in their language.
         | 
         | So even an LLM that takes the code and "translates it" (which
         | no translation tool was able to do before) is opening up a
         | huge chunk of developers to the world.
        
       | busterarm wrote:
       | Most organizations don't move that fast. Certainly not fast
       | enough to need this kind of velocity.
       | 
       | As it is I spend 95% of my time working out what needs to be done
       | with all of the stakeholders and 5% of my time writing code. So
       | the impact of AI on that is negligible.
        
         | marpstar wrote:
         | This is consistent with my experience. We're far from a
         | business analyst or product engineer being able to prompt an
         | LLM to write the software themselves. It's their job to know
         | the domain, not the technical details.
         | 
         | Maybe we all end up being prompt engineers, but I think that
         | companies will continue to have experts on the business side as
         | well as the tech side for the foreseeable future.
        
       | abdljasser2 wrote:
       | My plan is to become a people person / ideas guy.
        
       | allendoerfer wrote:
       | > My prediction is that junior to mid level software engineering
       | will disappear mostly, while senior engineers will transition to
       | be more of a guiding hand to LLMs output, until eventually LLMs
       | will become so good, that senior people won't be needed any more.
       | 
       | A steeper learning curve in a professional field generally
       | translates into higher earnings. The longer you have to be
       | trained to be helpful, the more a job generally earns.
       | 
       | I am already trained.
        
       | okasaki wrote:
       | Make lots of incompatible changes to libraries. No way LLMs keep
       | up with that since their grasp on time is weak at best.
        
       | sweetheart wrote:
       | Learning woodworking in order to make fine furniture. This is
       | mostly a joke, but the kind that I nervously laugh at.
        
         | torlok wrote:
         | You'll go from competing with Google to competing with IKEA.
        
           | anticensor wrote:
           | IKEA is a hybrid venture.
        
       | rvz wrote:
       | > My prediction is that junior to mid level software engineering
       | will disappear mostly, while senior engineers will transition to
       | be more of a guiding hand to LLMs output, until eventually LLMs
       | will become so good, that senior people won't be needed any more.
       | 
       | It is more like this will happen across the board, beyond
       | engineers, including both junior and senior roles. We have
       | heard first hand from Sam Altman that in the future Agents will
       | be more advanced and will work like a "senior colleague" (for
       | cheap).
       | 
       | Devin is already going after everyone. Juniors were already
       | replaced with GPT-4o and mid-seniors are already worried that
       | they are next. To executives and management, you are just a
       | "cost".
       | 
       | So frankly, I'm afraid that the belief that software engineers of
       | any level are safe in the intelligence age is 100% cope. In 2025,
       | I predict that there will be more layoffs because of this.
       | 
       | Then (mid-senior or higher) engineers here will go back to these
       | comments a year later and ask themselves:
       | 
       |  _" How did we not see this coming?"_
        
         | angoragoats wrote:
         | > So frankly, I'm afraid that the belief that software
         | engineers of any level are safe in the intelligence age is 100%
         | cope. In 2025, I predict that there will be more layoffs
         | because of this.
         | 
         | If this point could be clarified into a proposal that was
         | easily testable with a yes/no answer, I would probably be
         | willing to bet real money against it. Especially if the time
         | frame is only until the end of 2025.
        
           | tinthedev wrote:
           | I'd gladly double up on your bet.
           | 
           | Frankly, I think it's ridiculous that anyone who has done any
           | kind of real software work would predict this.
           | 
           | Layoffs? Probably. Layoffs of capable senior developers, due
           | to AI replacing them? Inconceivable, with the currently
           | visible/predictable technology.
        
             | angoragoats wrote:
             | Yeah, I agree. Let me take a stab at a statement that I'd
             | bet against:
             | 
             | There will be publicly-announced layoffs of 10 or more senior
             | software engineers at a tech company sometime between now
             | and December 31st, 2025. As part of the announcement of
             | these layoffs, the company will state that the reason for
             | the layoffs is the increasing use of LLMs replacing the
             | work of these engineers.
             | 
             | I would bet 5k USD of my own money, maybe more, against the
             | above occurring.
             | 
             | I hesitate to jump to the "I'm old and I've seen this all
             | before" trope, but some of the points here feel a lot to me
             | like "the blockchain will revolutionize everything" takes
             | of the mid-2010s.
        
               | cookingrobot wrote:
               | https://www.msn.com/en-us/money/companies/klarna-ceo-
               | says-th...
        
               | angoragoats wrote:
               | This article:
               | 
               | 1) Does not describe a layoff, which is an active action
               | the company has to take to release some number of current
               | employees, and instead describes a recent policy of "not
               | hiring." This is a passive action that could be
               | undertaken for any number of reasons, including those
               | that might not sound so great for the CEO to say (e.g.
               | poor performance of the company);
               | 
               | 2) Cites no sources other than the CEO himself, who has a
               | history of questionable actions when talking to the press
               | [0];
               | 
               | 3) Specifically mentions at the end of the article that
               | they are still hiring for engineering positions, which,
               | you know, kind of refutes any sort of claim that AI is
               | replacing engineers.
               | 
               | Though, this does make me realize a flaw in the language
               | of my proposed bet, which is that any CEO who claims to
               | be laying off engineers due to advancement of LLMs could
               | be lying, and CEOs are in fact incentivized to scapegoat
               | LLMs if the real reason would make the company look worse
               | in the eyes of investors.
               | 
               | [0] https://fortune.com/2022/06/01/klarna-ceo-sebastian-
               | siemiatk...
        
         | paulyy_y wrote:
         | Have you checked out the reception to Devin last week? The only
         | thing it's going after is being another notch on the
         | leaderboard of scams, right next to the Rabbit R1.
        
       | finebalance wrote:
       | Not a clue.
       | 
       | I'm a decent engineer working as a DS in a consulting firm. In my
       | last two projects, I checked in (or corrected) so much more code
       | than the other two junior DS's in my team, that at the end some
       | 80%-90% of the ML-related stuff had been directly built,
       | corrected or optimized by me. And most of the rest that wasn't
       | was because it was boilerplate. LLMs were pivotal in this.
       | 
       | And I am only a moderately skilled engineer. I can easily see
       | somebody with more experience and skills doing this to me, and
       | making me nearly redundant.
        
         | busterarm wrote:
         | You're making the mistake of overvaluing volume of work output.
         | In engineering, difference of perspective is valuable. You want
         | more skilled eyeballs on the problem. You won't be redundant
         | just as your slower coworkers aren't now.
         | 
         | It's not a sprint, it's a marathon.
        
           | markus_zhang wrote:
           | For most businesses, they don't really need exceptionally
           | good solutions. Something that works is fine. I'm pretty
           | sure AI can already do at least 50% of my coding work.
           | It's not going to replace me right now, but it's there in
           | the foreseeable future, especially when companies realize
           | they can have a setup like 1 good PM + a couple of seniors
           | + a bunch of AI agents instead of 1 good PM + a few
           | seniors + a bunch of juniors.
        
         | TechDebtDevin wrote:
         | Once again, this seems to only apply to Python / ML SWEs. Try
         | to get any of these models to write decent Rust, Go or C
         | boilerplate.
        
           | finebalance wrote:
           | I can't speak to Rust, Go or C, but for me LLMs have greatly
           | accelerated the process of learning and building projects in
           | Julia.
        
             | selimthegrim wrote:
             | Can you give some more specific examples? I am currently
             | learning Julia...
        
       | nerder92 wrote:
       | As with every job done well, the most important thing is to
       | truly understand the essence of your job: why it exists in the
       | first place and which problem it truly solves when done well.
       | 
       | A good designer is not going to be replaced by
       | Dall-e/Midjourney, because the essence of design is to understand
       | the true meaning/purpose of something and be able to express it
       | graphically, not align pixels with the correct HEX colour
       | combination one next to the other.
       | 
       | A good software engineer is not going to be replaced by
       | Cursor/Co-pilot, because the essence of programming is to
       | translate the business requirements of a real world problem that
       | other humans are facing into an ergonomic tool that can be used
       | to solve such problems at scale, not writing characters in an IDE.
       | 
       | Neither Junior nor Senior Devs will go anywhere; what will for
       | sure go away is all the "code-producing" human-machines such as
       | Fiverr freelancers/consultants which completely
       | misunderstand/neglect the true essence of their work. Because
       | code (as in a set of meaningful 8-bit symbols) was never the
       | goal, but always the means to an end.
       | 
       | Code is an abstraction, arguably our best abstraction to date,
       | but it's hard to believe that it is the last iteration of it.
       | 
       | I'll argue that software itself will be a completely different
       | concept in 100 years from now, so it's obvious that the way of
       | producing it will change too.
       | 
       | There is a famous quote attributed to Hemingway that goes like
       | this:
       | 
       | "Slowly at first, then all at once"
       | 
       | This is exactly what is happening, and what always happens.
        
         | hnthrowaway6543 wrote:
         | this is the correct answer
         | 
         | i can only assume software developers afraid of LLMs taking
         | their jobs have not been doing this for long. being a software
         | developer is about writing code in the same way that being a
         | CEO is about sending emails. and i haven't seen any CEOs get
         | replaced even though chatgpt can write better emails than most
         | of them
        
           | throwaway_43793 wrote:
           | But the problem is that the majority of SWEs are like that.
           | You can blame them, or the industry, but most engineers are
           | writing code most of the time. For every Tech Lead who does
           | "people stuff", there are 5-20 engineers who, mostly, write
           | code and barely know the entire scope/context of the product
           | they are working on.
        
             | hnthrowaway6543 wrote:
             | > but most engineers are writing code most of the time.
             | 
             | the _physical act of writing code_ is different than the
             | process of developing software. 80%+ of the time working on
             | a feature is designing, reading existing code, thinking
             | about the best way to implement your feature in the
             | existing codebase, etc. not to mention debugging, resolving
             | oncall issues, and other software-related tasks which are
             | not _writing_ code
             | 
             | GPT is awesome at spitting out unit tests, writing one-off
             | standalone helper functions, and scaffolding brand new
             | projects, but this is realistically 1-2% of a software
             | developer's time
        
               | throwaway_43793 wrote:
               | Everything you have described, apart from on-call, I
               | think LLMs can/will be able to do. Explaining code,
               | reviewing code, writing code, writing tests, writing tech
               | docs. I think we are approaching a point where all these
               | will be done by LLMs.
               | 
               | You could argue about architecture/thinking about the
               | correct/proper implementations, but I'd argue that for
               | the past 7 decades of software engineering, we are not
               | getting close to a perfect architecture singularity where
               | code is maintainable and there is no more tech debt left.
               | Therefore, arguments such as "but LLMs produce spaghetti
               | code" can be easily thrown away by saying that humans do
               | as well, except humans waste time by thinking about ways
               | to avoid spaghetti code, but eventually end up writing it
               | anyways.
        
               | hnthrowaway6543 wrote:
               | > Explaining code, reviewing code, writing code, writing
               | tests, writing tech docs.
               | 
               | people using GPT to write tech docs at real software
               | companies get fired, full stop lol. good companies
               | understand the value of concise & precise communication
               | and slinging GPT-generated design docs at people is
               | massively disrespectful to people's time, the same way
               | that GPT-generated HN comments get downvoted to oblivion.
               | if you're at a company where GPT-generated communication
               | is the norm you're working for/with morons
               | 
               | as for everything else, no. GPT can explain a few
               | thousand lines of code, sure, but it can't explain how
               | every component in a 25-year-old legacy system with
               | millions of lines and dozens/scores of services works
               | together. "more context" doesn't help here
        
         | throwaway_43793 wrote:
         | It's a good point, and I keep hearing it often, but it has one
         | flaw.
         | 
         | It assumes that most engineers are in contact with the end
         | customer, while in reality they are not. Most engineers are
         | going through a PM whose role is to do what you described:
         | speak with customers, understand what they want and somehow
         | translate it to a language that the engineers will understand
         | and in turn translate it into code. (Edit) The other part is
         | "IC" roles like tech-lead/staff/etc., but the ratio between ICs
         | and engineers is, by my estimate, around 1:10-20. So the majority
         | of engineers are purely writing code, and engage in supporting
         | actions around code (tech documentation, code reviews, pair
         | programming, etc).
         | 
         | Now, my question is as follows -- who has a better chance of
         | employability in a post-LLM-superiority world: (1) a good
         | technical software engineer with poor people/communication
         | skills or (2) a good communicator (such as a PM) with poor
         | software engineering skills?
         | 
         | I bet on 2, and as one of the comments says, if I had to future
         | proof my career, I would move as fast as possible to a position
         | that requires me to speak with people, be it other people in
         | the org or customers.
        
           | nerder92 wrote:
           | (1) is exactly the misunderstanding I'm talking about: most
           | creative jobs are not defined by their output (which is
           | cheap) but by the way they reach that output. Software
           | engineers who thought they could write their special
           | characters in their dark room without the need to actually
           | understand anything will go away in a breeze (for good).
           | 
           | This entire field was full of hackers, deeply passionate and
           | curious individuals who wanted to understand every little
           | detail of the problem they were solving and why; then
           | software became professionalized and a lot of amateurs
           | looking for a quick buck came in, commoditizing the
           | industry. With LLMs we will go full circle and push out a
           | lot of amateurs to give space back to the hackers.
           | 
           | Code was never the goal, solving problems was.
        
         | rvz wrote:
         | > A good software engineer is not going to be replaced by
         | Cursor/Co-pilot, because the essence of programming is to
         | translate the business requirements of a real world problem
         | that other humans are facing into an ergonomic tool that can be
         | used to solve such problems at scale, not writing characters in
         | an IDE.
         | 
         | Managers and executives only see engineers and customer service
         | as an additional cost and will find an opportunity to trim down
         | roles and they do not care.
         | 
         | This year's excuse is now anything that uses AI, GPTs or Agents
         | and they will try to do it anyway. Companies such as Devin and
         | Klarna are not hiding this fact.
         | 
         | There will just be fewer engineers and customer service roles in
         | 2025.
        
           | nerder92 wrote:
           | From a financial point of view, engineers are considered
           | assets, not costs, because they contribute to growing the
           | valuation of the company's assets.
           | 
           | The right thing to do economically (in capitalism) is to do
           | more of the same, but faster. So if you as a software
           | engineer or customer service rep can't do more of the same
           | faster, you will be replaced by someone (or something) that
           | allegedly can.
        
             | timr wrote:
             | > From a financial point of view, engineers are considered
             | assets not costs
             | 
             | At Google? Perhaps. At most companies? No. At most places,
             | software engineering is a pure cost center. The _software
             | itself_ may be an asset, but the engineers who are churning
             | it out are not. That's part of the reason that it's almost
             | always better to buy than build -- externalizing shared
             | costs.
             | 
             | Just for an extreme example, I worked at a place that made
             | us break down our time on new code vs. maintenance of
             | existing code, because a big chunk of our time was
             | accounted for _literally as a cost_, and could not be
             | depreciated.
        
           | AnimalMuppet wrote:
           | Some will. Some won't. The ones that cut engineering will be
           | hurting by 2027, though, maybe 2026.
           | 
           | It's almost darwinian. The companies whose managers are less
           | fit for running an organization that produces what matters
           | will be less likely to survive.
        
           | samatman wrote:
           | So what you're saying is that some of us should be gearing up
           | to pull in ludicrous amounts of consultant money in 2026,
           | when the chickens come home to roost, and the managers
           | foolish enough to farm out software development to LLMs need
           | to hire actual humans at rate to exorcize their demon-haunted
           | computers?
           | 
           | Yeah that will be a lucrative niche if you have the stomach
           | for it...
        
         | dmortin wrote:
         | > A good designer is not going to be replaced by
         | Dall-e/Midjourney, because the essence of design is to
         | understand the true meaning/purpose of something and be able to
         | express it graphically, not align pixels with the correct HEX
         | colour combination one next to the other.
         | 
         | Yes, but Dall-e, etc. output will be good enough for most
         | people and small companies if it's cheap or even free.
         | 
         | Big companies with deep pockets will still employ talented
         | designers, because they can afford it and for prestige, but in
         | general many average designer jobs are going to disappear and
         | get replaced with AI output instead, because it's good enough
         | for the less demanding customers.
        
       | cooljacob204 wrote:
       | Imo LLMs are dumb and our field is far away from having LLMs
       | smart enough to automate it. Even at a junior level. I feel like
       | the gap is so big personally that I'm not worried at all for the
       | next 10 years.
        
       | hnthrow90348765 wrote:
       | The simple answer is to use LLMs so you can put it on your
       | resume. Another simple answer is to transition to a job where
       | it's mostly about people.
       | 
       | The complex answer is we don't really know how good things will
       | get and we could be at the peak for the next 10-20 years, or
       | there could be some serious advancements that make the current
       | generation look like finger-painting toddlers by comparison.
       | 
       | I would say the fear of no junior/mid positions is unfounded
       | though since in a generation or two, you'd have no senior
       | engineers.
        
       | chirau wrote:
       | With every new technology comes new challenges. The role will
       | evolve to tackle those new challenges as long as they are
       | software/programming/engineering specific
        
       | code_for_monkey wrote:
       | I'm hoping I can transition to some kind of product or management
       | role since frankly I'm not that good at coding anyway (I don't
       | feel like I can pass a technical interview anymore, tbh.)
       | 
       | I think a lot of engineers are in for some level of rude
       | awakening. I think a lot of engineers haven't applied much
       | business/humanities thinking to this, and I think a lot of
       | corporations care less about code quality than even our most
       | pessimistic estimates. It wouldn't surprise me if cuts over the
       | next few years get even deeper, and I think a lot of high
       | performing (read: high paying) jobs are going to get cut. I've
       | seen so many comments like "this will improve engineering
       | overall, as bad engineers get laid off" and I don't think it's
       | going to work like that.
       | 
       | Anecdotal, but no one from my network actually recovered from the
       | post-covid layoffs and they haven't even stopped. I know loads of
       | people who don't feel like they'll ever get a job as good as they
       | had in 2021.
        
         | throwaway_43793 wrote:
         | What's your plan to transition into product/management?
        
           | code_for_monkey wrote:
           | right now? Keeping my eyes very peeled for what people in my
           | network post about needing. Unfortunately I don't have much
           | of a plan right now, sorry.
        
       | wolvesechoes wrote:
       | Writing code and making commits is only a part of my work. I also
       | have to know ODEs/DAEs, numerical solvers, symbolic
       | transformations, thermodynamics, fluid dynamics, dynamic systems,
       | controls theory etc. So basically math and physics.
       | 
       | LLMs are rather bad at those right now if you go further than
       | trivialities, and I believe they are not particularly good at
       | code either, so I am not concerned. But overall I think this is
       | somewhat good advice, regardless of the current hype train: do
       | not be just a "programmer", and know something else besides main
       | Docker CLI commands and APIs of your favorite framework. They
       | come and go, but knowledge and understanding stays for much
       | longer.
        
       | ThrowawayR2 wrote:
       | LLMs are most capable where they have a lot of good data in their
       | training corpus and not much reasoning is required. Migrate to a
       | part of the software industry where that isn't true, e.g. systems
       | programming.
       | 
       | The day LLMs get smart enough to read a chip datasheet and then
       | realize the hardware doesn't behave the way the datasheet claims
       | it does is the day they're smart enough to send a Terminator to
       | remonstrate with whoever is selling the chip anyway so it's a
       | win-win either way, dohohoho.
        
       | angoragoats wrote:
       | I think there's been a lot of fear-mongering on this topic and
       | "the inevitable LLM take over" is not as inevitable as it might
       | seem, perhaps depending on your definition of "take over."
       | 
       | I have personally used LLMs in my job to write boilerplate code,
       | write tests, make mass renaming changes that were previously
       | tedious to do without a lot of grep/sed-fu, etc. For these types
       | of tasks, LLMs are already miles ahead of what I was doing before
       | (do it myself by hand, or have a junior engineer do it and get
       | annoyed/burnt out).
       | 
       | However, I have yet to see an LLM that can understand an already
       | established large codebase and reliably make well-designed
       | additions to it, in the way that an experienced team of engineers
       | would. I suppose this ability could develop over time with large
       | increases in memory/compute, but even state-of-the-art models
       | today are so far away from being able to act like an actual
       | senior engineer that I'm not worried.
       | 
       | Don't get me wrong, LLMs are incredibly useful in my day-to-day
       | work, but I think of them more as a leap forward in developer
       | tooling, not as an eventual replacement for me.
        
         | m_ke wrote:
         | Those models will be here within a year.
         | 
         | Long context is practically a solved problem and there's a ton
         | of work now on test time reasoning motivated by o1 showing that
         | it's not that hard to RL a model into superhuman performance as
         | long as the task is easy / cheap to validate (and there's work
         | showing that if you can define the problem you can use an LLM
         | to validate against your criteria).
        
           | angoragoats wrote:
           | I intentionally glossed over a lot in my first comment, but I
           | should clarify that I don't believe that increased context
           | size or RL is sufficient to solve the problem I'm talking
           | about.
           | 
           | Also "as long as the task is easy / cheap to validate" is a
           | problematic statement if we're talking about the replacement
           | of senior software engineers, because problem definition and
           | development of validation criteria are core to the duties of
           | a senior software engineer.
           | 
           | All of this is to say: I could be completely wrong, but I'll
           | believe it when I see it. As I said elsewhere in the comments
           | to another poster, if your points could be expressed in
           | easily testable yes/no propositions with a timeframe
           | attached, I'd likely be willing to bet real money against
           | them.
        
             | m_ke wrote:
             | Sorry I wasn't clear enough, the cheap to validate part is
             | only needed to train a large base model that can handle
             | writing individual functions / fix bugs. Planning a whole
             | project, breaking it down into steps and executing each one
             | is not something that current LLMs struggle at.
             | 
             | Here's a recipe for a human level LLM software engineer:
             | 
             | 1. Pretrain an LLM on as much code and text as you can
             | (done already)
             | 
             | 2. Fine tune it on synthetic code specific tasks like: (a)
             | given a function, hide the body, make the model implement
             | it and validate that it's functionally equivalent to the
             | target function (output matching), can also have an
             | objective to optimize the runtime of the implementation (b)
             | introduce bugs in existing code and make the LLM fix it,
             | (c) make LLM make up problems, write tests / spec for it,
             | then have it attempt to implement it many times until it
             | comes up with a method that passes the tests, (d-z) a lot
             | of other similar tasks that use linters, parsers, AST
             | modifications, compilers, unit tests, specs validated by
             | LLMs, profilers to check that the produced code is valid
             | 
             | 3. Distill this success / failure criteria validator to a
             | value function that can predict probability of success at
             | each token to give immediate reward instead of requiring
             | full roll out, then optimize the LLM on that.
             | 
             | 4. At test time use this final LLM to produce multiple
             | versions until one passes the criteria, for the cost of an
             | hour of a software engineer you can have an LLM produce
             | millions of different implementations.
             | 
             | See papers like: https://arxiv.org/abs/2409.15254 or slides
             | from NeurIPS that I mentioned here
             | https://news.ycombinator.com/item?id=42431382
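             | 
             | A rough sketch of what step 4 could look like, assuming a
             | generate callable that wraps whatever model you use, and
             | pytest as the cheap pass/fail criterion (names here are
             | illustrative, not any particular tool's API):
             | 
             |     import pathlib
             |     import subprocess
             |     import tempfile
             | 
             |     def passes_tests(candidate: str, tests: str) -> bool:
             |         # Drop the candidate and its spec tests into a
             |         # temp dir and run pytest; exit code 0 = passed.
             |         with tempfile.TemporaryDirectory() as d:
             |             pathlib.Path(d, "impl.py").write_text(candidate)
             |             pathlib.Path(d, "test_impl.py").write_text(tests)
             |             r = subprocess.run(["pytest", "-q", d],
             |                                capture_output=True)
             |             return r.returncode == 0
             | 
             |     def best_of_n(generate, spec, tests, n=100):
             |         # Sample implementations until one meets the
             |         # criteria; give up when the budget runs out.
             |         for _ in range(n):
             |             candidate = generate(spec)
             |             if passes_tests(candidate, tests):
             |                 return candidate
             |         return None
             | 
             | The same loop also works with a learned value function
             | ranking candidates instead of a hard pass/fail check.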
        
               | angoragoats wrote:
               | > At test time use this final LLM to produce multiple
               | versions until one passes the criteria, for the cost of
               | an hour of a software engineer you can have an LLM
               | produce millions of different implementations.
               | 
               | If you're saying that it takes one software engineer one
               | hour to produce comprehensive criteria that would allow
               | this whole pipeline to work for a non-trivial software
               | engineering task, this is where we violently disagree.
               | 
               | For this reason, I don't believe I'll be convinced by any
               | additional citations or research, only by an actual
               | demonstration of this working end-to-end with minimal
               | human involvement (or at least, meaningfully less human
               | involvement than it would take to just have engineers do
               | the work).
               | 
               | edit: Put another way, what you describe here looks to me
               | to be throwing a huge number of "virtual" low-skilled
               | junior developers at the task and optimizing until you
               | can be confident that one of them will produce a good-
               | enough result. My contention is that this is not a valid
               | methodology for reproducing/replacing the work of senior
               | software engineers.
        
               | m_ke wrote:
               | That's not what I'm saying at all. I'm saying that
               | there's a trend showing that you can improve LLM
               | performance significantly by having it generate multiple
               | responses until it produces one that meets some criteria.
               | 
               | As an example, huggingface just posted an article showing
               | this for math, where with some sampling you can get a 3B
               | model to outperform a 70B one:
               | https://huggingface.co/spaces/HuggingFaceH4/blogpost-
               | scaling...
               | 
               | Formalizing the criteria is not as hard as you're making
               | it out to be. You can have an LLM listen to a
               | conversation with the "customer", ask follow up questions
               | and define a clear spec just like a normal engineer. If
               | you doubt it open up chatGPT, tell it you're working on X
               | and ask it to ask you clarifying questions, then come up
               | with a few proposal plans and then tell it which plan to
               | follow.
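               | 
               | A minimal sketch of that clarifying-questions flow,
               | using the OpenAI Python client as one example (the
               | model name, prompts and DONE convention are just
               | placeholders I picked, not a fixed recipe):
               | 
               |     from openai import OpenAI
               | 
               |     client = OpenAI()  # needs OPENAI_API_KEY set
               |     messages = [
               |         {"role": "system", "content":
               |          "You are gathering requirements. Ask one "
               |          "clarifying question at a time. When you "
               |          "have enough detail, reply with DONE "
               |          "followed by a short written spec."},
               |         {"role": "user", "content":
               |          "I'm working on X."},
               |     ]
               | 
               |     while True:
               |         resp = client.chat.completions.create(
               |             model="gpt-4o", messages=messages)
               |         text = resp.choices[0].message.content
               |         if text.startswith("DONE"):
               |             print(text)  # the drafted spec
               |             break
               |         # Answer the question and continue the loop.
               |         answer = input(text + "\n> ")
               |         messages += [
               |             {"role": "assistant", "content": text},
               |             {"role": "user", "content": answer},
               |         ]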
        
               | angoragoats wrote:
               | > That's not what I'm saying at all. I'm saying that
               | there's a trend showing that you can improve LLM
               | performance significantly by having it generate multiple
               | responses until it produces one that meets some criteria.
               | 
               | I apologize for misinterpreting what you were saying -- I
               | was clearly taking "for the cost of an hour of a software
               | engineer" to mean something that you didn't intend.
               | 
               | > As an example, huggigface just posted an article
               | showing this for math, where with some sampling you can
               | get a 3B model to outperform a 70B one
               | 
               | This is not relevant to our discussion. Again, I'm
               | reasonably sure that I'm not going to be convinced by any
               | research demonstrating that X new tech can increase Y
               | metric by Z%.
               | 
               | > Formalizing the criteria is not as hard as you're
               | making it out to be. You can have an LLM listen to a
               | conversation with the "customer", ask follow up questions
               | and define a clear spec just like a normal engineer. If
               | you doubt it open up chatGPT, tell it you're working on X
               | and ask it to ask you clarifying questions, then come up
               | with a few proposal plans and then tell it which plan to
               | follow.
               | 
               | This is much more relevant to our discussion. Do you
               | honestly feel this is an accurate representation of how
               | you'd define the requirements for the pipeline you
               | outlined in your post above? Keep in mind that we're
               | talking about having LLMs work on already-existing large
               | codebases, and I conceded earlier that writing
               | boilerplate/base code for a brand new project is
               | something that LLMs are already quite good at.
               | 
               | Have you worked as a software engineer for a long time? I
               | don't want to assume anything, but all of your points
               | thus far read to me like they're coming from a place of
               | not having worked in software much.
        
               | m_ke wrote:
               | > Have you worked as a software engineer for a long time?
               | I don't want to assume anything, but all of your points
               | thus far read to me like they're coming from a place of
               | not having worked in software much.
               | 
               | Yes I've been a software engineer working in deep
               | learning for over 10 years, including as an early
               | employee at a leading computer vision company and a
               | founder / CTO of another startup that built multiple
               | large products that ended up getting acquired.
               | 
               | > I apologize for misinterpreting what you were saying --
               | I was clearly taking "for the cost of an hour of a
               | software engineer" to mean something that you didn't
               | intend.
               | 
               | I meant that unlike a software engineer, the LLM can do a
               | lot more iterations on the problem given the same budget.
               | So if your boss comes and says build me a new dashboard
               | page, it can generate 1000s of iterations and use a human
               | aligned reward model to rank them based on which one your
               | boss might like best. (that's what the test time compute
               | / sampling at inference does).
               | 
               | > This is not relevant to our discussion. Again, I'm
               | reasonably sure that I'm not going to be convinced by any
               | research demonstrating that X new tech can increase Y
               | metric by Z%.
               | 
               | These are not just research papers, people are
               | reproducing these results all over the place. Another
               | example from a few minutes ago:
               | https://x.com/DimitrisPapail/status/1868710703793873144
               | 
               | > This is much more relevant to our discussion. Do you
               | honestly feel this is an accurate representation of how
               | you'd define the requirements for the pipeline you
               | outlined in your post above? Keep in mind that we're
               | talking about having LLMs work on already-existing large
               | codebases,
               | 
               | I'm saying this will be solved pretty soon, working with
               | large codebases doesn't work well right now because last
               | year's models had shorter context and were not trained to
               | deal with anything longer than a few thousand tokens.
               | Training these models is expensive so all of the coding
               | assistant tools like cursor / devin are sitting around
               | and waiting for the next iteration of models from
               | Anthropic / OpenAI / Google to fix this issue. We will
               | most likely have announcements of new long context LLMs
               | in the next 1-2 weeks from Google / OpenAI / Deepseek /
               | Qwen that will make major improvements on large code
               | bases.
               | 
               | I'd also add that we probably don't want huge sprawling
               | code bases; when the cost of a small custom app that
               | solves just your problem goes to 0, we'll have way more
               | tiny apps / microservices that are much easier to
               | maintain and replace when needed.
        
               | angoragoats wrote:
               | > These are not just research papers, people are
               | reproducing these results all over the place. Another
               | example from a few minutes ago:
               | https://x.com/DimitrisPapail/status/1868710703793873144
               | 
               | Maybe I'm not making myself clear, but when I said
               | "demonstrating that X new tech can increase Y metric by
               | Z%" that of course included reproduction of results.
               | Again, this is not relevant to what I'm saying.
               | 
               | I'll repeat some of what I've said in several posts
               | above, but hopefully I can be clearer about my position:
               | while LLMs can generate code, I don't believe they can
               | satisfactorily replace the work of a senior software
               | engineer. I believe this because I don't think there's
               | any viable path from (A) an LLM generates some code to
               | (B) a well-designed, complete, maintainable system is
               | produced that can be arbitrarily improved and extended,
               | with meaningfully lower human time required. I believe
               | this holds true no matter how powerful the LLM in (A)
               | gets, how much it's trained, how long its context is,
               | etc, which is why showing me research or coding
               | benchmarks or huggingface links or some random twitter
               | post is likely not going to change my mind.
               | 
               | > I'd also add that we probably don't want huge sprawling
               | code bases
               | 
               | That's nice, but the reality is that there are lots of
               | monoliths out there, including new ones being built every
               | day. Microservices, while solving some of the problems
               | that monoliths introduce, also have their own problems.
               | Again, your claims reek of inexperience.
               | 
               | Edit: forgot the most important point, which is that you
               | sort of dodged my question about whether you really think
               | that "ask ChatGPT" is sufficient to generate requirements
               | or validation criteria.
        
       | tuyiown wrote:
       | > the more I hear that some of them are either using AI to help
       | them code, or feeding entire projects to AI and letting the AI
       | code, while they do code review and adjustments.
       | 
       | It's not enough to make generalizations yet. What kind of
       | projects? What tuning does it need? What kind of end users?
       | What kind of engineers?
       | 
       | In the field I work with, I can't see how LLMs can help with a
       | clear path to convergence to a reliable product. If anything, I
       | suspect we will need more manual analysis to fix the insanity we
       | receive from our providers if they start working with LLMs.
       | 
       | Some jobs will disappear, but I've yet to see signs of anything
       | serious emerging. You're right about juniors though, but I
       | suspect those who stop training will lose their life insurance
       | and will starve under LLMs, either from competition or from the
       | amount of operational instability they will bring.
        
       | m_ke wrote:
       | I've been thinking about this a bunch and here's what I think
       | will happen as the cost of writing software approaches 0:
       | 
       | 1. There will be way more software
       | 
       | 2. Most people / companies will be able to opt out of predatory
       | VC funded software and just spin up their own custom versions
       | that do exactly what they want without having to worry about
       | being spied on or rug pulled. I already do this with chrome
       | extensions; with the help of claude I've been able to throw
       | together things like a time-based website blocker in a few
       | minutes.
       | 
       | 3. The best software will be open source, since it's easier for
       | LLMs to edit and is way more trustworthy than a random SaaS tool.
       | It will also be way easier to customize to your liking
       | 
       | 4. Companies will hire way less, and probably mostly engineers to
       | automate routine tasks that would have previously been done by
       | humans (ex: bookkeeping, recruiting, sales outreach, HR,
       | copywriting / design). I've heard this is already happening with
       | a lot of new startups.
       | 
       | EDIT: for people who are not convinced that these models will be
       | better than them soon, look over these sets of slides from
       | NeurIPS:
       | 
       | - https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...
       | 
       | - https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...
       | 
       | - https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...
        
         | brodouevencode wrote:
         | Good points - my company has already committed to #2
        
         | ThrowawayR2 wrote:
         | What's the equivalent of @justsayinmice for NeurIPS papers? A
         | lot of things in papers don't pan out in the real world.
        
           | m_ke wrote:
           | There's a lot of work showing that we can reliably get to or
           | above human level performance on tasks where it's easy to
           | sample at scale and the solution is cheap to verify.
        
         | from-nibly wrote:
         | > that do exactly what they want
         | 
         | This presumes that they know exactly what they want.
         | 
         | My brother works for a company and they just ran into this
         | issue. They target customer retention as a metric. The result
         | is that all of their customers are the WORST, don't make them
         | any money, but they stay around a long time.
         | 
         | The company is about to run out of money and crash into the
         | ground.
         | 
         | If people knew exactly what they wanted 99% of all problems in
         | the world wouldn't exist. This is one of the jobs of a
         | developer, to explore what people actually want with them and
         | then implement it.
         | 
         | The first bit is WAY harder than the second bit, and LLMs only
         | do the second bit.
        
       | taylodl wrote:
       | Back in the late 80s and early 90s there was a craze called CASE
       | - Computer-Aided Software Engineering. The idea was humans really
       | _suck_ at writing code, but we 're really good at modeling and
       | creating specifications. Tools like Rational Rose arose during
       | this era, as did Booch notation which eventually became part of
       | UML.
       | 
       | The problem was it never worked. When generating the code, the
       | best the tools could do was create all the classes for you and
       | maybe define the methods for the class. The tools could not
       | provide an implementation unless it provided the means to manage
       | the implementation within the tool itself - which was awful.
       | 
       | Why have you likely not heard of any of this? Because the fad
       | died out in the early 2000s. The juice simply wasn't worth the
       | squeeze.
       | 
       | Fast-forward 20 years and I'm working in a new organization where
       | we're using ArchiMate extensively and are starting to use more
       | and more UML. Just this past weekend I started wondering given
       | the state of business modeling, system architecture modeling, and
       | software modeling, could an LLM (or some other AI tool) take
       | those models and produce code like we could never dream of back
       | in the 80s, 90s, and early 00s? Could we use AI to help create
       | the models from which we'd generate the code?
       | 
       | At the end of the day, I see software architects and software
       | engineers still being engaged, but in a different way than they
       | are today. I suppose to answer your question, if I wanted to
       | future-proof my career I'd learn modeling languages and start
       | "moving to the left" as they say. I see being a code slinger as
       | being less and less valuable over the coming years.
       | 
       | Bottom line, you don't see too many assembly language developers
       | anymore. We largely abandoned that back in the 80s and let the
       | computer produce the actual code that runs. I see us doing the
       | same thing again but at a higher and more abstract level.
        
         | neilv wrote:
         | I worked on CASE, and generally agree with this.
         | 
         | I think it's important to note that there were a couple
         | distinct markets for CASE:
         | 
         | 1. Military/aerospace/datacomm/medical type technical
         | development. Where you were building very complex things, that
         | integrated into larger systems, that had to work, with teams,
         | and you used higher-level formalisms when appropriate.
         | 
         | 2. "MIS" (Management Information Systems) in-house/intranet
         | business applications. Modeling business processes and flows,
         | and a whole lot of data entry forms, queries, and printed
         | reports. (Much of the coding parts already had decades of work
         | on automating them, such as with WYSIWYG form painters and
         | report languages.)
         | 
         | Today, most Web CRUD and mobile apps are the descendant of #2,
         | albeit with branches for in-house vs. polished graphic design
         | consumer appeal.
         | 
         | My teams had some successes with #1 technical software, but UML
         | under IBM seemed to head towards #2 enterprise development. I
         | don't have much visibility into where it went from there.
         | 
         | I did find a few years ago (as a bit of a methodology expert
         | familiar with the influences that went into UML, as well as
         | familiar with those metamodels as a CASE developer) that the
         | UML specs were scary and huge, and mostly full of stuff I
         | didn't want. So I did the business process modeling for a
         | customer logistics integration using a very small subset, with
         | very high value. (Maybe it's a little like knowing hypertext,
         | and then being teleported 20 years into the future, where the
         | hypertext technology has been taken over by evil advertising
         | brochures and surveillance capitalism, so you have to work to
         | dig out the 1% hypertext bits that you can see are there.)
         | 
         | Post-ZIRP, if more people start caring about complex systems
         | that really have to work (and fewer people care about lots of
         | hiring and churning code to make it look like they have
         | "growth"), people will rediscover some of the better modeling
         | methods, and be, like, whoa, this ancient DeMarco-Yourdon thing
         | is most of what we need to get this process straight in a way
         | everyone can understand, or this Harel thing makes our crazy
         | event loop with concurrent activities tractable to implement
         | correctly without a QA nightmare, or this Rumbaugh/Booch/etc.
         | thing really helps us understand this nontrivial schema, and
         | keep it documented as a visual for bringing people onboard and
         | evolving it sanely, and this Jacobson thing helps us integrate
         | that with some of the better parts of our evolving Agile
         | process.
        
           | taylodl wrote:
           | As I recall, the biggest problem from the last go-around was
           | the models and implementation were two different sets of
           | artifacts and therefore were guaranteed to diverge. If we
           | move to a modern incarnation where the AI is generating the
           | implementation from the models and humans are no longer doing
           | that task, then it may work as the models will now be the
           | only existing set of artifacts.
           | 
           | But I was definitely in camp #2 - the in-house business
           | applications. I'd love to hear the experiences from those in
           | camp #1. To your point, once IBM got involved it all went
           | south. There was a place I was working for in the early 90s
           | that really turned me off against anything "enterprise" from
           | IBM. I had yet to learn that would apply to pretty much every
           | vendor! :)
        
       | Terretta wrote:
       | For thousands of years, the existence of low cost or even free
       | apprentices for skilled trades meant there was no work left for
       | experts with mastery of the trade.
       | 
       | Except, of course, that isn't true.
        
       | johanam wrote:
       | I think in some sense the opposite could occur, where it
       | democratizes access to becoming a sort of pseudo-junior-software
       | engineer. In the sense that a lot more people are going to be
       | generating code and bespoke little software systems for their own
       | ends and purposes. I could imagine this resulting in a Cambrian
       | Explosion of small software systems. Like @m_ke says, there will
       | be way more software.
       | 
       | Who maintains these systems? Who brings them to the last mile and
       | deploys them? Who gets paid to troubleshoot and debug them when
       | they reach a threshold of complexity that the script-kiddie LLM
       | programmer cannot manage any longer? I think this type of person
       | will definitely have a place in the new LLM-enabled economy.
       | Perhaps this is a niche role, but figuring out how one can take
       | experience as a software engineer and deploy it to help people
       | getting started with LLM code (for pay, ofc) might be an
       | interesting avenue to explore.
        
         | askonomm wrote:
         | I tend to agree. I also think that the vast majority of code
         | out there is quite frankly pretty bad, and all that LLMs do is
         | learn from it, so while I agree that LLMs will help make a lot
         | more software, I doubt it would increase the general quality in
         | any significant way, and thus there will always be a need for
         | people who can do actual programming, as opposed to just
         | prompting, to fix complex problems. That said, not sure if I
         | want my future career to be swimming in endless piles of LLM-
         | copy-paste-spaghetti. Maybe it's high time to get a new
         | degree. Hmm.
        
       | yehosef wrote:
       | use them
        
       | brodouevencode wrote:
       | LLMs will just write code without you having to go copy-pasta
       | from SO.
       | 
       | The real secret is talent stacks: have a combination of talents
       | and knowledge that is desirable and unique. Be multi-faceted. And
       | don't be afraid to learn things that are way outside of your
       | domain. And no, you wouldn't be pigeon-holing yourself either.
       | 
       | For example there aren't many SWEs that have good SRE knowledge
       | in the vehicle retail domain. You don't have to be an expert SRE,
       | just be good enough, and understand the business in which you're
       | operating and how those practices can be applied to auto sales
       | (knowing the laws and best practices of the industry).
        
       | indigoabstract wrote:
       | I remember John Carmack talking about this last year. Seems like
       | it's still pretty good advice more than a year later:
       | 
       | "From a DM, just in case anyone else needs to hear this."
       | 
       | https://x.com/ID_AA_Carmack/status/1637087219591659520
        
         | randall wrote:
         | This is by far the best advice I've seen.
        
           | archagon wrote:
           | Except I suspect that Carmack would not be where he is today
           | without a burning intellectual draw to programming in
           | particular.
        
             | yodsanklai wrote:
             | Exactly... I read "Masters of Doom" and Carmack didn't
             | strike me as the product guy who cares about people's
             | needs. He was more like a coding machine.
        
               | watt wrote:
               | In "Rocket Jump: Quake and the Golden Age of First-Person
               | Shooters" id guys figure out that their product is the
               | cutting-edge graphics, and being first, and are able to
               | pull that off for a while. Their games were not great,
               | but the product was idTech engines. With Rage however (id
               | Tech 5) the streak ran cold.
        
               | indigoabstract wrote:
               | Yet, they were able to find a market for their products.
               | He knew both how to code and what to code.
               | 
               | Ultima Underworld was technologically superior to
               | Wolfenstein 3D.
               | 
               | System Shock was technologically superior to Doom and a
               | much better game for my taste. I also think it has aged
               | better.
               | 
               | Doom, Wolf 3D and Quake were less sophisticated, but
               | kicked ass. They captured the spirit of the times and
               | people loved it.
               | 
               | They're still pretty good games too, 30 years later.
        
         | throwaway_43793 wrote:
         | It's good advice indeed. But there is a slight problem with
         | it.
         | 
         | Young people can learn and fight for their place in the
         | workforce, but what is left for older people like myself? I'm
         | in this industry already; I might have missed the train of
         | "learn to talk with people" and been sold on the "coding is a
         | means to an end" koolaid.
         | 
         | My employability is already damaged due to my age and
         | experience. What is left for people like myself? How can I
         | compete with a 20-something who has a sharper memory and more
         | free time (due to fewer obligations like family/relationships),
         | and who got the right advice from Carmack at the beginning of
         | his career?
        
           | Rotundo wrote:
           | The 20-year-old is, maybe, just like you at that age: eager
           | and smart, but lacking experience. Making bad decisions, bad
           | designs, bad implementations left and right. Just like you
           | did, way back when.
           | 
           | But you have made all those mistakes already. You've learned,
           | you've earned your experience. You are much more valuable
           | than you think.
           | 
           | Source: Me, I'm almost 60, been programming since I was 12.
        
             | throwaway_43793 wrote:
             | I think the idea of meritocracy has died in me. I wish I
             | could be rewarded for my knowledge and expertise, but it
             | seems that capitalism, as in maximizing profit, has won
             | out over everything else.
        
           | indigoabstract wrote:
           | It's good advice, but not easy to follow, since knowing what
           | to do and doing it are very different things.
           | 
           | I think that what he means is that how successful we are in
           | work is closely related to our contributions, or to the
           | perceived "value" we bring to other people.
           | 
           | The current gen AI isn't the end of programmers. What matters
           | is still what people want and are willing to pay for, and how
           | we can contribute to fulfilling that need.
           | 
           | You are right that young folks have the time and energy to
           | work more than older ones and for less money. And they can
           | soak up knowledge like a sponge. That's their strong point
           | and older folks cannot really compete with that.
           | 
           | You (and everyone else) have to find your own strong point,
           | your "niche" so to speak. We're all different, so I'm pretty
           | sure that what you like and are good at is not what I like
           | and I'm good at and vice-versa.
           | 
           | All the greats, like Steve Jobs and so on said that you've
           | got to love what you do. Follow your intuition. That may even
           | be something that you dreamed about in your childhood.
           | Anything that you really want to do and makes you feel
           | fulfilled.
           | 
           | I don't think you can get to any good place while disliking
           | what you do for a living.
           | 
           | That said, all this advice can seem daunting and unfeasible
           | when you're not in a good place in life. But worrying only
           | makes it worse.
           | 
           | If you can see yourself in a better light and as having
           | something valuable to contribute, things would start looking
           | better.
           | 
           | This is solvable. Have faith!
        
           | extr wrote:
           | ?? Not sure what you mean. Carmack's advice is not specific
           | to any particular point in your career. You can enact the
           | principle he's talking about just as much with 30 YOE as you
           | can with 2. It's actually easier advice to follow for older
           | people than younger, since they have seen more of the world
           | and probably have a better sense of where the "rough edges"
           | are. Despite what you see on twitter and HN and YC batches,
           | most successful companies are started by people in their 40s.
        
       | slavapestov wrote:
       | Find an area to specialize in that has more depth to it than just
       | gluing APIs together.
        
       | xinu2020 wrote:
       | >junior to mid level software engineering will disappear mostly,
       | while senior engineers will transition
       | 
       | It's more likely the number of jobs at all levels of seniority
       | will decrease, but none will disappear.
       | 
       | What I'm interested to see is how the general availability of
       | LLMs will impact the "willingness" of people to learn coding.
       | Will people still "value" coding as an activity worth their time?
       | 
       | For me, as an already "senior" engineer, using LLMs feels like a
       | superpower: when I think of a solution to a problem, I can test
       | and explore some of my ideas faster by interacting with one.
       | 
       | For a beginner, I feel that having all of this available can be
       | super powerful too, but also truly demotivating. Why bother to
       | learn coding when the LLM can already do better than you? It
       | takes years to become "good" at coding, and motivation is key.
       | 
       | As a low-dan Go player, I remember feeling a bit that way when
       | AlphaGo was released. I'm still playing Go, but I've lost the
       | willingness to play competitively; now it's just for fun.
        
         | throwaway_43793 wrote:
         | I think coding will stay as a hobby. You know, like there are
         | people who still build physical stuff with wires and diodes.
         | None of them are doing it for commercial reasons, but the
         | ability to produce billions of transistors on a silicon die
         | did not stop people from taking up electrical engineering as a
         | hobby.
        
       | blablabla123 wrote:
       | I've been quite worried about it at this point. However, I see
       | that "this is not going to happen" is likely not going to help
       | me. So I'd rather go with the flow and use it where reasonable,
       | even if it's not clear to me whether AI is truly ever leaving
       | the hype stage.
       | 
       | FWIW, I've been allowed to use AI at work since ChatGPT appeared,
       | and usually it wasn't a big help for coding. However, for
       | education and trying to "debug" funny team interactions, I've
       | surely seen some value.
       | 
       | My guess is though that some sort of T-shaped skillset is going
       | to be more important while maintaining a generalist perspective.
        
       | tinthedev wrote:
       | Real software engineering is as far from "only writing code" as
       | construction workers are from civil engineering.
       | 
       | > So, fellow software engineers, how do you future-proof your
       | career in light of, the inevitable, LLM take over?
       | 
       | I feel that software engineering being taken over by LLMs is a
       | pipe dream. Some other, higher form of AI? Inevitably. LLMs, as
       | current models exist and expand? They're facing a fair few
       | hurdles that they cannot easily bypass.
       | 
       | To name a few: requirement gathering, scoping, distinguishing
       | between different toolsets, comparing solutions objectively,
       | keeping up with changes in software/libraries... etc. etc.
       | 
       | Personally? I see LLMs tapering off in new developments over the
       | following few years, and I see salesmen trying to get a lot of
       | early adopters to hold the bag. They're overpromising, and the
       | eventual under-delivery will hurt. Much like the AI winter did.
       | 
       | But I also see a new paradigm coming down the road, once we've
       | got a stateful "intelligent" model that can learn and adapt
       | faster, and can perceive time more innately... but that might
       | take decades (or a few years, you never know with these things).
       | I genuinely don't think it'll be a direct evolution of LLMs we're
       | working on now. It'll be a pivot.
       | 
       | So, I future-proof my career simply: I keep up with the tools and
       | learn how to work around them. When planning my teams, I don't
       | intend to hire 5 juniors to grind code, but 2 who'll utilize LLMs
       | to teach them more.
       | 
       | I ask my junior peers for their LLM queries more often before I
       | go and explain things directly. I also teach them to prompt
       | better. A lot of stuff we've had to explain manually in the past
       | can now be prompted well, and stuff that can't, I explain.
       | 
       | I also spend A LOT of time teaching people to take EVERYTHING AI-
       | generated with generous skepticism. Unless you're writing toys
       | and tiny scripts, hallucinations _will_ waste your time. Often
       | the juice won't be worth the squeeze.
       | 
       | More than a few times I've spent a tedious hour navigating 4o's
       | or Claude's hallucinated confident failures, instead of a
       | pleasant and productive 45 minutes writing the code myself... and
       | from peer discussions, I'm not alone.
        
       | zerop wrote:
       | I fear that in the push from "manual coding" to "fully automated
       | coding", we might end up in the middle, doing "semi-manual
       | coding" assisted by AI, which would need a different software
       | engineering skill set.
        
       | polotics wrote:
       | This question is so super weird, because:
       | 
       | Ask an LLM to generate you 100 more lines of code, no problem you
       | will get something. Ask the same LLM to look at 10000 lines of
       | code and intelligently remove 100... good luck with that!
       | 
       | Seriously, I tried uploading some (but not all) source code of my
       | company to our private Azure OpenAI GPT-4o for analysis, as a 48
       | MB cora-generated context file, and really the usefulness is not
       | that great. And don't get me started on Copilot's suggestions.
       | 
       | Someone really has to know their way around the beast, and LLMs
       | cover a very, very small part of the story.
       | 
       | I fear that the main effect of LLMs will be that developers that
       | have already for so long responded to their job-security fears
       | with obfuscation and monstrosity... will be empowered to produce
       | even more of that.
        
         | jerjerjer wrote:
         | > Ask an LLM to generate you 100 more lines of code, no problem
         | you will get something. Ask the same LLM to look at 10000 lines
         | of code and intelligently remove 100... good luck with that!
         | 
         | These two tasks have very different difficulty levels, though.
         | It would be the same with a human coder. If you give me a new
         | 10k SLOC codebase and ask me to add a method to cover some new
         | case, I can probably do it in an hour to a day, depending on my
         | familiarity with the language, subject matter, overall codebase
         | state, documentation, etc.
         | 
         | New 10k codebase and a task of removing 100 lines? That's
         | probably at least half a week to understand how it all works
         | (disregarding simple cases like a hundred-line comment block
         | with old code), before I can make such a change safely.
        
       | archagon wrote:
       | I have as much interest in the art of programming as in building
       | products, and becoming some sort of AI whisperer sounds
       | tremendously tedious to me. I opted out of the managerial track
       | for the same reason. Fortunately, I have enough money saved that
       | I can probably just work on independent projects for the rest of
       | my career, and I'm sure they'll attract customers whether or not
       | they were built using AI.
       | 
       | With that said, looking back on my FAANG career in OS framework
       | development, I'm not sure how much of my work could have actually
       | been augmented by AI. For the most part, I was designing and
       | building brand new systems, not gluing existing parts together.
       | There would not be a lot of precedent in the training data.
        
       | ilaksh wrote:
       | It's not going to be about careers anymore. It's going to be
       | about leveraging AI and robotics as very cheap labor to provide
       | goods and services.
        
       | johnea wrote:
       | Become a plumber or electrician...
        
         | Clubber wrote:
         | I'm going the Onlyfans route, or perhaps record myself playing
         | video games saying witty quips.
        
       | ojr wrote:
       | Create a SaaS and charge people $20/month; time-consuming, but
       | more feasible with LLMs. Subscriptions are such a good business
       | model, for the very reasons people hate subscriptions.
        
         | throwaway_43793 wrote:
         | Are you doing it? What business are you running? How do you
         | find customers?
        
       | markus_zhang wrote:
       | I try to go to the lowest level I can. During my recent research
       | into PowerPC 32-bit assembly language I found 1) not much
       | material online, and what is available is usually PDFs with
       | pictures, which could be difficult for LLMs to pick up, and 2)
       | ChatGPT indeed didn't give a good answer even for a Hello, World
       | example.
       | 
       | I think hardware manufacturers, including ones that produce
       | chips, are way less encouraged to put things online and thus
       | have a wide moat. "Classic" ones such as the 6502 or 8086
       | definitely have way more material. "Modern" popular ones such as
       | x86/64 also have a lot of material online. But "obscure" ones
       | don't.
       | 
       | On the software side, I believe LLMs or other AI can easily
       | replace juniors who only know how to "fill in" the code designed
       | by someone else, in a popular language (Python, Java, JavaScript,
       | etc.), in under 10 years. In fact it has greatly supported my
       | data engineering work in Python and Scala -- does it always
       | produce the most efficient solution? No. Does it greatly reduce
       | the time I need to get to a solution? Yes, definitely!
        
         | tetha wrote:
         | I've been noticing similar patterns as well.
         | 
         | One instructive example was when I was implementing a terraform
         | provider for an in-house application. This thing can template
         | the boilerplate for a terraform resource implementation in
         | about 3-4 auto completes and only gets confused a bit by the
         | plugin-sdk vs the older implementation way. But once it deals
         | with our in-house application, it can guess some things, but
         | it's not good. Here it's ok.
         | 
         | In my private gaming projects in Godot... I tried using CoPilot
         | and it's just terrible, to the point of turning it off. There
         | is Godot code out there for how an entity handles a collision
         | with another entity, and there are hundreds of variations of
         | it, and it wildly hallucinates between all of them. It's just
         | so distracting and bad. ChatGPT is OK at navigating the
         | documentation, but that's about it.
         | 
         | If I'm thinking about my last job, which -- don't ask why --
         | was writing Java code with low-level concurrency primitives
         | like thread pools, raw synchronized statements and atomic
         | primitives... if I think about my experience with CoPilot on
         | code like this, I honestly feel strength leaving my body,
         | because that would be so horrible. I once spent literal months
         | chasing a once-in-a-billion concurrency bug in that code.
         | 
         | IMO, the simplest framework fill-in code segments will suffer
         | from LLMs. But a well-coached junior can move past that stage
         | quite quickly.
        
           | markus_zhang wrote:
           | Yeah, I basically treat an LLM as a better Google search. It
           | is indeed a lot better than Google if I want to find some
           | public information, but I need to be careful and double-check.
           | 
           | Other than that it completely depends on luck I guess. I'm
           | pretty sure if companies feed in-house information to it that
           | will make it much more useful, but those agents would be
           | privately owned and maintained.
        
       | thegrim33 wrote:
       | I don't worry about it, because:
       | 
       | 1) I believe we need true AGI to replace developers.
       | 
       | 2) I don't believe LLMs are currently AGI or that if we just feed
       | them more compute during training that they'll magically become
       | AGI.
       | 
       | 3) Even if we did invent AGI soon and replace developers, I
       | wouldn't even really care, because the invention of AGI would be
       | such an insanely impactful, world-changing event that who knows
       | what the world would even look like afterwards. It would be
       | massively changed. Having a development job is the absolute least
       | of my worries in that scenario; it pales in comparison to the
       | transformation the entire world would go through.
        
         | akira2501 wrote:
         | Even if AGI suddenly appears we will most likely have an energy
         | feed and efficiency problem with it. These scaling problems are
         | just not on the common roadmap at all and people forget how
         | much effort typically has to be spent here before a new
         | technology can take over.
        
         | janalsncm wrote:
         | To replace all developers, we need AGI, yes. To replace many
         | developers? No. If one developer can do the same work as 5
         | could previously, then unless the amount of work expands, 4
         | developers are going to be looking for a job.
         | 
         | Therefore, unless you for some reason believe you will be in
         | the shrinking portion that cannot be replaced, I think the
         | question deserves more attention than "nothing".
        
           | j45 wrote:
           | I think counting the number of devs might not be the best way
           | to go, considering not all teams are equally capable or
           | skilled person-for-person, and in enterprises some people are
           | inevitably hiding in a project or team.
           | 
           | Comparing only the amount of forward progress in a codebase,
           | and AI's ability to participate in or cover it, might be
           | better.
        
       | nyrikki wrote:
       | 1) Have books like "The Art of Computer Programming" on my
       | shelf, as AI seems to propagate solutions that are related to
       | code golf more than robustness, due to coverage in the corpus.
       | 
       | 2) Force myself to look at existing code as abstract data types,
       | etc., to help reduce the cost of LLMs' failure mode (confident,
       | often competent, and inevitably wrong).
       | 
       | 3) Curry whenever possible to support the use of coding
       | assistants and to limit their blast radius (a small sketch at
       | the end of this list).
       | 
       | 4) Dig deep into complexity theory to understand what LLMs can't
       | do, either for defensive or offensive reasons.
       | 
       | 5) Realize that SWE is more about correctness and context than
       | code.
       | 
       | 6) Realize what many people are already discovering, that LLM
       | output is more like clip art than creation.
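       | 
       | To illustrate 3), here is a minimal sketch in Python, using
       | partial application to stand in for currying; the names are
       | hypothetical and only meant to show the shape:
       | 
       |     from functools import partial
       | 
       |     def csv_formatter(rows):
       |         # Toy formatter: one comma-separated line per row.
       |         lines = [",".join(str(v) for v in row) for row in rows]
       |         return "\n".join(lines)
       | 
       |     def render_report(formatter, fetch, report_id):
       |         # Core logic receives every collaborator explicitly.
       |         return formatter(fetch(report_id))
       | 
       |     # Curry the fixed context in at the edge; an assistant asked
       |     # to touch render_csv_report only ever sees a one-argument
       |     # function, which limits its blast radius.
       |     render_csv_report = partial(render_report, csv_formatter,
       |                                 lambda rid: [[rid, "ok"]])
       | 
       |     print(render_csv_report(42))  # -> 42,ok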
        
       | Xophmeister wrote:
       | My anecdata shows people who have no/limited experience in
       | software engineering are suddenly able to produce "software".
       | That is, code of limited engineering value. It technically works,
       | but is a ultimately an unmaintainable, intractable Heath Robinson
       | monstrosity.
       | 
       | Coding LLMs will likely improve, but what will happen first: a
       | good-at-engineering LLM; or a negative feedback cycle of training
       | data being polluted with a deluge of crap?
       | 
       | I'm not too worried at the moment.
        
         | sokoloff wrote:
         | I can imagine a world, not far from today, where business
         | experts can create working programs similar in complexity to
         | what they do with Excel today, but in domains outside of "just
         | spreadsheets". Excel is the most used no-code/low-code
         | environment by far and I think we could easily see that same
         | level of complexity [mostly low] be accessible to a lot more
         | people.
        
           | layer8 wrote:
           | I don't quite buy the Excel analogy, because the business
           | experts do understand the Excel formulas that they write, and
           | thus can maintain them and reason about them. The same
           | wouldn't be the case with programs written by LLMs.
        
         | bhaak wrote:
         | Something similar happened when Rails showed up. Lots more
         | people were able to build somewhat complex websites than ever
         | before.
         | 
         | But there are still specialized people being paid for doing
         | websites today.
        
       | michaelmrose wrote:
       | > ...or feed entire projects to AI and let the AI code, while
       | they do code review and adjustments.
       | 
       | Is there some secret AI available that isn't by OpenAI or
       | Microsoft? Because this sounds like complete hogwash.
        
       | simianparrot wrote:
       | Nothing, because I'm a senior and LLMs never provide code that
       | passes my sniff test, and it remains a waste of time.
       | 
       | I have a job at a place I love, and I get more people in my
       | direct and extended network contacting me about work than ever
       | before in my 20-year career.
       | 
       | And finally, I keep myself sharp by always making sure I
       | challenge myself creatively. I'm not afraid to delve into areas
       | that might look "solved" to others in order to understand them.
       | For example I
       | have a CPU-only custom 2D pixel blitter engine I wrote to make 2D
       | games in styles practically impossible with modern GPU-based
       | texture rendering engines, and I recently did 3D in it from
       | scratch as well.
       | 
       | All the while re-evaluating all my assumptions and that of
       | others.
       | 
       | If there's ever a day where there's an AI that can do these
       | things, then I'll gladly retire. But I think that's generations
       | away at best.
       | 
       | Honestly, this fear that there will soon be no need for human
       | programmers stems either from people who themselves don't
       | understand how LLMs work, or from people who do but have a
       | business interest in convincing others that it's more than it is
       | as a technology. I say that with confidence.
        
         | rybosworld wrote:
         | > Nothing, because I'm a senior and LLMs never provide code
         | that passes my sniff test, and it remains a waste of time.
         | 
         | I am constantly surprised at how prevalent this attitude is.
         | ChatGPT was only just released in 2022. Is there some
         | expectation that these things won't improve?
         | 
         | > LLMs never provide code that passes my sniff test
         | 
         | This is ego speaking.
        
           | eschaton wrote:
           | They shouldn't be expected to improve in accuracy because of
           | what they are and how they work. Contrary to what the average
           | HackerNews seems to believe, LLMs don't "think," they just
           | predict. And there's nothing in them that will constrain
           | their token prediction in a way that improves accuracy.
        
             | rybosworld wrote:
             | > Contrary to what the average HackerNews seems to believe,
             | LLMs don't "think," they just predict.
             | 
             | Anecdotally, I can't recall ever seeing someone on
             | HackerNews accuse LLM's of thinking. This site is probably
             | one of the most educated corners of the internet on the
             | topic.
             | 
             | > They shouldn't be expected to improve in accuracy because
             | of what they are and how they work.
             | 
             | > And there's nothing in them that will constrain their
             | token prediction in a way that improves accuracy.
             | 
             | These are both incorrect. LLMs are already quite a bit
             | better today than they were in 2022.
        
             | cozzyd wrote:
             | If anything, they may regress due to being trained on
             | lower-quality input.
        
           | sitzkrieg wrote:
           | ego? LLMs goof on basic math and can't even generate code for
           | many non-public things. They're not useful to me whatsoever.
        
             | agilob wrote:
             | LLMs aren't supposed to do basic math, but be chat agents.
             | Wolfram Alpha can't do chat.
        
               | simianparrot wrote:
               | Math is a major part of programming. In fact programming
               | without math is impossible. And if you go all the way
               | down to bare metal it's all math. We are shifting bits
               | through incredibly complex abstractions.
        
               | agilob wrote:
               | No, math is a major part of writing good code, but when
               | was the last time you saw somebody put effort into
               | writing an O(n) algorithm? 99% of programming is "import
               | sort from sort; sort.sortThisReallyQuick". Programming is
               | mostly writing code that just compiles and eventually
               | gives correct results (and has bugs). You can do a lot of
               | programming just by copy-pasting results from
               | stackoverflow.
               | 
               | https://en.wikipedia.org/wiki/Npm_left-pad_incident
               | 
               | https://old.reddit.com/r/web_design/comments/35prfv/desig
               | ner...
               | 
               | https://www.youtube.com/watch?v=GC-0tCy4P1U
        
               | simianparrot wrote:
               | In any real-world application you'll sooner or later run
               | into optimization challenges where, if you don't
               | understand the fundamentals, googling "fastly do the
               | thing" won't help you ;)
               | 
               | Much like asking an LLM to solve a problem for you.
        
             | vitorsr wrote:
             | This... for my most important use case (applied numerical
             | algorithms) it is in fact beyond not useful, it is negative
             | value - even for methods whose code is widely available.
             | 
             | Sure, I can ask for it to write (wrong) boilerplate but it
             | is hardly where work ends. It is up to me to spend the time
             | doing careful due diligence at each and every step. I could
             | ask for it to patch each mistake but, again, it relies on a
             | trained, skillful, many times formally educated domain
             | expert on the other end puppeteering the generative
             | copywriter.
             | 
             | For the many cases where computer programming is similar to
             | writing boilerplate, it could indeed be quite useful, but I
             | find the long tail of domain expertise will always be
             | outside the reach of data-driven statistical learners.
        
           | nicoburns wrote:
           | > > LLMs never provide code that passes my sniff test
           | 
           | > This is ego speaking.
           | 
           | That's been my experience of LLM-generated code that people
           | have submitted to open source projects I work on. It's all
           | been crap. Some of it didn't even compile. Some of it changed
           | comments that were previously correct to say something
           | untrue. I've yet to see a single PR that implemented
           | something useful.
        
             | mh- wrote:
             | Isn't this a kind of survivorship bias? You wouldn't know
             | if you approved (undisclosed) LLM-generated code that was
             | good...
        
             | cies wrote:
             | > LLM-generated code that people have submitted to open
             | source projects I work on
             | 
             | Are you sure it was people? Maybe the AI learned how to
             | make PRs, or _is learning_ how to do so by using your
             | project as a test bed.
        
           | IshKebab wrote:
           | > Is there some expectation that these things won't improve?
           | 
           | I definitely expect them to improve. But I also think the
           | point at which they can _actually replace_ a senior
           | programmer is pretty much the exact point at which they can
           | replace any knowledge worker, at which point western society
           | (possibly all society) is in way deeper shit than just me
           | being out of a job.
           | 
           | > This is ego speaking.
           | 
           | It definitely isn't. LLMs are _useful_ for coding now, but
           | they can't really do the whole job without help - at least
           | not for anything non-trivial.
        
             | dingnuts wrote:
             | > LLMs are useful for coding now
             | 
             | *sort of, sometimes, with simple enough problems with
             | sufficiently little context, for code that can be easily
             | tested, and for which sufficient examples exist in the
             | training data.
             | 
             | I mean hey, two years after being promised AGI was
             | literally here, LLMs are almost as useful as traditional
             | static analysis tools!
             | 
             | I guess you could have them generate comments for you based
             | on the code as long as you're happy to proofread and
             | correct them when they're wrong.
             | 
             | Remember when CPUs were obsolete after three years? GPT has
             | shown zero improvement in its ability to generate novel
             | content since it was first released as GPT2 almost ten
             | years ago! I would know because I spent countless hours
             | playing with that model.
        
               | margalabargala wrote:
               | Firstly, GPT-2 was released in 2019. Five years is not
               | "almost ten years".
               | 
               | Secondly, LLMs are objectively useful for coding now.
               | That's not the same thing as saying they are replacements
               | for SWEs. They're a tool, like syntax highlighting or
               | real-time compiler error visibility or even context-aware
               | keyword autocompletion.
               | 
               | Some individuals don't find those things useful, and
               | prefer to develop in a plain text editor that does not
               | have those features, and that's fine.
               | 
               | But all of those features, and LLMs are now on that list,
               | are broadly useful in the sense that they generally
               | improve productivity across the industry. They already
               | right now save enormous amounts of developer time, and to
               | ignore that because _you_ are not one of the people whose
               | time is currently being saved, indicates that you may not
               | be keeping up with understanding the technology of your
               | field.
               | 
               | There's an important difference between a tool being
               | useful for generating novel content, and a tool being
               | useful. I can think of a lot of useful tools that are not
               | useful for generating novel content.
        
               | bawolff wrote:
               | > are broadly useful in the sense that they generally
               | improve productivity across the industry. They already
               | right now save enormous amounts of developer time,
               | 
               | But is that actually a true statement? Are there actual
               | studies to back that up?
               | 
               | AI is hyped to the moon right now. It is really difficult
               | to separate the hype from reality. There are anecdotal
               | reports of AI helping with coding, but there are also
               | anecdotal reports that they get things almost right but
               | not quite, which often leads to bugs which wouldn't
               | otherwise happen. I think it's unclear if that is a net
               | win for productivity in software engineering. It would be
               | interesting if there was a robust study about it.
        
               | margalabargala wrote:
               | > Are there actual studies to back that up?
               | 
               | I am aware of an equal number of studies about the time
               | saved overall by use of LLMs, and time saved overall by
               | use of syntax highlighting.
               | 
               | In fact, here's a study claiming syntax highlighting in
               | IDEs does not help code comprehension: https://link.sprin
               | ger.com/article/10.1007/s10664-017-9579-0
               | 
               | Shall we therefore conclude that syntax highlighting is
               | not useful, that developers who use syntax highlighting
               | are just part of the IDE hype train, and that anecdotal
               | reports of syntax highlighting being helpful are
               | counterbalanced by anecdotal reports of $IDE having
               | incorrect syntax highlighting on $Esoteric_file_format?
               | 
               | Most of the failures of LLMs with coding that I have seen
               | have been a result of asking too much of the LLM. Writing
               | a hundred context-aware unit tests is something that an
               | LLM is excellent at, and would have taken a developer a
               | long time previously. Asking an LLM to write a novel
               | algorithm to speed up image processing of the output of
               | your electron microscope will go less well.
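               | 
               | To be concrete, the kind of test boilerplate I mean looks
               | something like this (hypothetical function, purely
               | illustrative):
               | 
               |     import pytest
               | 
               |     def apply_discount(price, pct):
               |         # Hypothetical function under test.
               |         return price * (100 - pct) / 100
               | 
               |     @pytest.mark.parametrize("price, pct, expected", [
               |         (100.0, 0, 100.0),    # no discount
               |         (100.0, 25, 75.0),    # typical case
               |         (100.0, 100, 0.0),    # full discount
               |     ])
               |     def test_apply_discount(price, pct, expected):
               |         assert apply_discount(price, pct) == expected
               | 
               | Tedious to write by hand at scale, but easy to offload
               | once the model has seen the function and a couple of
               | existing tests.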
        
               | bdangubic wrote:
               | exactly. many SWEs are currently fighting this fight of
               | "oh it is not good enough bla bla..." On my team (50-ish
               | people) you would not last longer than 3 months if you
               | tried to do your work "manually" like we did before.
               | Several have tried; they are no longer around. I believe
               | SWEs fighting LLMs are doing themselves a huge disservice;
               | they should be full-on embracing them and trying to figure
               | out how to use them more effectively. Just like any other
               | tool, it is as good as the user of the tool :)
        
             | munk-a wrote:
             | Intellisense style systems were a huge feature leap when
             | they gained wider language support and reliability. LLMs
             | are yet another step forward for intellisense and the
             | effort of comprehending the code you're altering. I don't
             | think I will ever benefit from code generation in a serious
             | setting (it's excellent for prototyping) simply due to the
             | fact that it's solving the easy problem (write some code)
             | while creating a larger problem (figuring out if the code
             | that was generated is correct).
             | 
             | As another senior developer I won't say it's impossible
             | that I'll ever benefit from code generation but I just
             | think it's a terrible space to try and build a solution -
             | we don't need a solution here - I can already type faster
             | than I can think.
             | 
             | I am _keenly_ interested in seeing if someone can leverage
             | AI for query performance tuning or, within the RDBMS, query
             | planning. That feels like an excellent (if highly specific)
             | domain for an LLM.
        
               | tlarkworthy wrote:
               | > I can already type faster than I can think.
               | 
               | But can you write tickets faster than you can implement
               | them? I certainly can.
        
               | vinnymac wrote:
               | Depends on the ticket.
               | 
               | If it's "Get us to the moon", it's gonna take me years to
               | write that ticket.
               | 
               | If it was "Make the CTA on the homepage red", it is up
               | for debate whether I needed a ticket at all.
        
             | rybosworld wrote:
             | > LLMs never provide code that passes my sniff test
             | 
             | If that statement isn't coming from ego, then where is it
             | coming from? It's provably true that LLMs can generate
             | working code. They've been trained on billions of examples.
             | 
             | Developers seem to focus on the set of cases where LLMs
             | produce code that doesn't work, and use that as evidence
             | that these tools are "useless".
        
               | hmillison wrote:
               | there's a lot more involved in senior dev work beyond
               | producing code that works.
               | 
               | if the stakeholders knew how to do what they needed to
               | build and how, then they could use LLMs, but translating
               | complex requirements into code is something that these
               | tools are not even close to cracking.
        
               | rybosworld wrote:
               | > there's a lot more involved in senior dev work beyond
               | producing code that works.
               | 
               | Completely agree.
               | 
               | What I don't agree with is statements like these:
               | 
               | > LLMs never provide code that passes my sniff test
               | 
               | To me, these (false) absolutes about chatbot capabilities
               | are being rehashed so frequently that they derail every
               | conversation about using LLMs for dev work. You'll find
               | similar statements in nearly every thread about LLMs for
               | coding tasks.
               | 
               | It's provably true that LLMs can produce working code.
               | It's also true that an increasingly large portion of
               | coding is being offloaded to LLMs.
               | 
               | In my opinion, developers need to grow out of this
               | attitude that they are John Henry and they'll outpace the
               | mechanical drilling machine. It's a tired conversation.
        
               | rurp wrote:
               | > It's provably true that LLMs can produce working code.
               | 
               | You've restated this point several times but the reason
               | it's not more convincing to many people is that simply
               | producing code that works is rarely an actual goal on
               | many projects. On larger projects it's much more about
               | producing code that is consistent with the rest of the
               | project, and is easily extensible, and is readable for
               | your teammates, and is easy to debug when something goes
               | wrong, is testable, and so on.
               | 
               | The code working is a necessary condition, but is
               | insufficient to tell if it's a valuable contribution.
        
               | munk-a wrote:
               | > It's provably true that LLMs can produce working code.
               | 
               | This is correct - but it's also true that LLMs can
               | produce flawed code. To me the cost of telling whether
               | code is correct or flawed is larger than the cost of me
               | just writing correct code. This may be an AuDHD thing but
               | I can better comprehend the correctness of a solution if
               | I'm watching (and doing) the making of that solution than
               | if I'm reading it after the fact.
        
               | simianparrot wrote:
               | The code working is the bare minimum. The code being
               | right for the project and context is the basic
               | expectation. The code being _good_ at solving its
               | intended problem is the desired outcome, which is a
               | combination of tradeoffs between performance,
               | readability, ease of refactoring later, modularity, etc.
               | 
               | LLMs can sometimes provide the bare minimum. And then
               | you have to refactor and massage it all the way to the
               | good bit, but unlike looking up other people's endeavors
               | on something like Stack Overflow, with the LLM's code I
               | have no context for why it "thought" that was a good
               | idea. If
               | I ask it, it may parrot something from the relevant
               | training set, or it might be bullshitting completely. The
               | end result? This is _more_ work for a senior dev, not
               | less.
               | 
               | Hence why it has never passed my sniff test. Its code is
               | at best the quality of code even junior developers
               | wouldn't open a PR for yet. Or if they did they'd be
               | asked to explain how and why and quickly learn to not
               | open the code for review before they've properly
               | considered the implications.
        
               | lukev wrote:
               | Code is a liability, not an asset. It is a necessary evil
               | to create functional software.
               | 
               | Senior devs know this, and factor code down to the
               | minimum necessary.
               | 
               | Junior devs and LLMs think that writing code is the point
               | and will generate lots of it without worrying about
               | things like leverage, levels of abstraction, future
               | extensibility, etc.
        
               | pooper wrote:
               | > if the stakeholders knew how to do what they needed to
               | build and how, then they could use LLMs, but translating
               | complex requirements into code is something that these
               | tools are not even close to cracking.
               | 
               | They don't have to _replace_ you to reduce headcount.
               | They could increase your workload so that where they
               | needed five senior developers, they can do with maybe
               | three. That's like six one way and half a dozen the
               | other, because two developers lost a job, right?
        
               | n4r9 wrote:
               | Yeah. Code that works is a fraction of the aim. You also
               | want code that a good junior can read and debug in the
               | midst of a production issue, is robust against new or
               | updated requirements, has at least as good performance as
               | the competitors, and uses appropriate libraries in a
               | sparse manner. You also need to be able to state when a
               | requirement would loosen the conceptual cohesion of the
               | code, and to push back on requirements that can already
               | be achieved in just as easy a way.
        
               | oops wrote:
               | My experience so far has been: if I know what I want well
               | enough to explain it to an LLM then it's been easier for
               | me to just write the code. Iterating on prompts, reading
               | and understanding the LLM's code, validating that it
               | works and fixing bugs is still time consuming.
               | 
               | It has been interesting as a rubber duck, exploring a new
               | topic or language, some code golf, but so far not for
               | production code for me.
        
               | IshKebab wrote:
               | > It's provably true that LLMs can generate working
               | code.
               | 
               | Yeah for simple examples, especially in web dev. As soon
               | as you step outside those bounds they make mistakes all
               | the time.
               | 
               | As I said, they're still _useful_ because roughly correct
               | but buggy code is often quite helpful when you're
               | programming. But there's zero chance you can just say
               | "write me a driver for the nRF905 using Embassy and
               | embedded-hal" and get something working. Whereas I, a
               | human, can do that.
        
             | JKCalhoun wrote:
             | I imagine though they might replace 3 out of 4 senior
             | programmers (keep one around to sanity check the AI).
        
               | munk-a wrote:
               | That's the same figuring a lot of business folks had when
               | considering off-shoring in the early 2000s - those
               | companies ended up hiring twice as many senior
               | programmers to sanity check and correct the code they got
               | back. The same story can be heard from companies that
               | fired their expensive seniors to hire twice as many
               | juniors at a quarter the price.
               | 
               | I think that software development is just an extremely
               | poor market segment for these kinds of tools - we've
               | already got mountains of productivity tools that minimize
               | how much time we need to spend doing the silly rote
               | programming stuff - most of software development is
               | problem solving.
        
               | mrbungie wrote:
               | Oof, the times I've heard something like that with X
               | tech.
        
             | PeterisP wrote:
             | If an LLM (or any other tool) makes it so that a team of 8
             | can get the same results in the same time as it used to
             | take a team of 10, then I would count that as "replaced 2
             | programmers" - even if there's no particular person whose
             | _whole_ job has been replaced, that's not a meaningful
             | practical difference: replacing a significant fraction of
             | every programmer's job has the same outcomes and impacts as
             | replacing a significant fraction of programmers.
        
               | RangerScience wrote:
               | Fav anecdote from ages ago:
               | 
               | When hand-held power tools became a thing, the Hollywood
               | set builder's union was afraid of this exact same thing -
               | people would be replaced by the tools.
               | 
               | Instead, productions built bigger sets (the ceiling was
               | raised) and smaller productions could get in on things
               | (the floor was lowered).
               | 
               | I always took that to mean "people aren't going to spend
               | less to do the job - they'll just do a bigger job."
        
               | sigmarule wrote:
               | This could very well prove to be the case in software
               | engineering, but also could very well not; what is the
               | equivalent of "larger sets" in our domain, and is that
               | something that is even preferable to begin with? Should
               | we build larger codebases just because we _can_? I'd say
               | likely not, while it does make sense to build larger/more
               | elaborate movie sets because they could.
               | 
               | Also, a piece missing from this comparison is a set of
               | people who don't believe the new tool will actually have
               | a measurable impact on their domain. I assume few-to-none
               | could argue that power tools would have no impact on
               | their profession.
        
               | pixeltechie wrote:
               | This is a good example of what could happen to software
               | development as a whole. In my experience large companies
               | tend more often to buy software rather than make it. AI
               | could drastically change the "make or buy" decision in
               | favour of make, because you need fewer developers to
               | create a perfectly tailored solution that directly fits
               | the needs of the company. So "make" becomes affordable
               | and more attractive.
        
               | karaterobot wrote:
               | Another anecdote: when mechanical looms became a thing,
               | textile workers were afraid that the new tools would
               | replace them, and they were right.
        
               | IshKebab wrote:
               | > then I would count that as "replaced 2 programmers"
               | 
               | Well then you can count IDEs, static typing, debuggers,
               | version control etc. as replacing programmers too. But I
               | don't think any of those performance enhancers have
               | really reduced the number of programmers needed.
               | 
               | In fact it's a well-known paradox that making a job more
               | efficient can _increase_ the number of people doing that
               | job. It's called the Jevons paradox (thanks ChatGPT -
               | probably wouldn't have been able to find that with
               | Google!)
               | 
               | Making people 20% more efficient is very different to
               | entirely replacing them.
        
               | hn_throwaway_99 wrote:
               | That's actually not accurate. See Jevons paradox,
               | https://en.m.wikipedia.org/wiki/Jevons_paradox. In the
               | short term, LLMs should have the effect of making
               | programmers more productive, which means more customers
               | will end up demanding software that was previously
               | uneconomic to build (this is not theoretical - e.g. I
               | work with some non-profits who would _love_ a
               | comprehensive software solution, they simply can't
               | afford it, or the risk, at present).
        
               | hnthrowaway6543 wrote:
               | yes, this. the backlog of software that needs to be built
               | is _fucking enormous_.
               | 
               | you know what i'd do if AI made it so i could replace 10
               | devs with 8? use the 2 newly-freed up developers to work
               | on some of the other 100000 things i need done
        
           | gspencley wrote:
           | >> > LLM's never provide code that pass my sniff test
           | 
           | > This is ego speaking.
           | 
           | Consider this, 100% of AI training data is human-generated
           | content.
           | 
           | Generally speaking, we apply the 90/10 rule to human
           | generated content: 90% of (books, movies, tv shows, software
           | applications, products available on Amazon) is not very good.
           | 10% shines.
           | 
           | In software development, I would say it's more like 99 to 1
           | after working in the industry professionally for over 25
           | years.
           | 
           | How do I divorce this from my personal ego? It's easy to
           | apply objective criteria:
           | 
           | - Is the intent of code easy to understand?
           | 
           | - Are the "moving pieces" isolated, such that you can change
           | the implementation of one with minimal risk of altering the
           | others by mistake?
           | 
           | - Is the solution in code a simple one relative to
           | alternatives?
           | 
           | The majority of human produced code does not pass the above
           | sniff test. Most of my job, as a Principal on a platform
           | team, is cleaning up other people's messes and training them
           | how to make less of a mess in the future.
           | 
           | If the majority of human generated content fails to follow
           | basic engineering practices that are employed in other
           | engineering disciplines (i.e. it never ceases to amaze me how
           | much of an uphill battle it is just to get some SWEs to
           | break down their work into small, single responsibility,
           | easily testable and reusable "modules") then we can't
           | logically expect any better from LLMs because this is what
           | they're being trained on.
           | 
           | And we are VERY far off from LLMs that can weigh the merits
           | of different approaches within the context of the overall
           | business requirements and choose which one makes the most
           | sense for the problem at hand, as opposed to just "what's the
           | most common answer to this question?"
           | 
           | LLMs today are a type of magic trick. You give it a whole
           | bunch of 1s and 0s so that you can input some new 1s and 0s
           | and it can use some fancy probability maths to predict "based
           | on the previous 1s and 0s, what are the statistically most
           | likely next 1s and 0s to follow from the input?"
           | 
           | That is useful, and the result can be shockingly impressive
           | depending on what you're trying to do. But the limitations
           | are so severe that the prospect of replacing an entire high-
           | skilled profession with that magic trick is kind of a joke.
        
             | m_ke wrote:
             | Your customers don't care how your code smells, as long as
             | it solves their problem and doesn't cost an arm and a leg.
             | 
             | A ton of huge businesses full of Sr Principal Architect SCRUM
             | masters are about to get disrupted by 80 line ChatGPT
             | wrappers hacked together by a few kids in their dorm room.
        
               | gspencley wrote:
               | > Your customers don't care how your code smells, as long
               | as it solves their problem and doesn't cost an arm and a
               | leg.
               | 
               | Software is interesting because if you buy a
               | refrigerator, even an inexpensive one, you have certain
               | expectations as to its basic functions. If the compressor
               | were to cut out periodically in unexpected ways,
               | affecting your food safety, you would return it.
               | 
               | But in software customers seem to be conditioned to just
               | accept bugs and poor performance as a fact of life.
               | 
               | You're correct that customers don't care about "code
               | quality", because they don't understand code or how to
               | evaluate it.
               | 
               | But you're assuming that customers don't care about the
               | quality of the product they are paying for, and you're
               | divorcing that quality from the quality of the code as if
               | the code doesn't represent THE implementation of the
               | final product. The hardware matters too, but to assume
               | that code quality doesn't directly affect product quality
               | is to pretend that food quality is not directly impacted
               | by its ingredients.
        
               | throwaway_43793 wrote:
               | Code quality does not affect final product quality IMHO.
               | 
               | I worked in companies with terrible code, that deployed
               | on an over-engineered cloud provider using custom
               | containers hacked with a nail and a screwdriver, but the
               | product was excellent. Had bugs here and there, but
               | worked and delivered what needed to be delivered.
               | 
               | SWEs need to realize that code doesn't really matter. For
               | 70 years we have been debating the best architecture patterns,
               | and yet the biggest fear of every developer is working on
               | legacy code, as it's an unmaintainable piece of ...
               | written by humans.
        
               | gspencley wrote:
               | > Code quality does not affect final product quality
               | IMHO.
               | 
               | What we need, admittedly, is more research and study
               | around this. I know of one study which supports my
               | position, but I'm happy to admit that the data is sparse.
               | 
               | https://arxiv.org/abs/2203.04374
               | 
               | From the abstract:
               | 
               | "By analyzing activity in 30,737 files, we find that low
               | quality code contains 15 times more defects than high
               | quality code."
        
               | PeterisP wrote:
               | The parent's point isn't that shitty code doesn't have
               | defects but rather that there's usually a big gap between
               | the code (and any defects in that code) and the actual
               | service or product that's being provided.
               | 
               | Most companies have no relation between their code and
               | their products at all - a major food conglomerate will
               | have hundreds or thousands of IT personnel and no direct
               | link between defects in their business process automation
               | code (which is the #1 employment of developers) and the
               | quality of their products.
               | 
               | For companies where the product does have some tech
               | component (e.g. the refrigerators mentioned above), I'd
               | bet that most of that company's developers don't work on
               | any code that's intended to be _in_ the product; in such
               | a company there is simply far more programming work
               | outside of that product than inside of one. The companies
               | making a software-first product (like startups on
               | hackernews) where a software defect implies a product
               | defect are an exception, not the mainstream.
        
               | fzeroracer wrote:
               | Code quality absolutely does matter, because when
               | everything is on fire and your service is down and no one
               | is able to fix it customers WILL notice.
               | 
               | I've seen plenty of companies implode because they fired
               | the one guy that knew their shitty codebase.
        
               | simianparrot wrote:
               | Much like science in general, these topics are never --
               | and can never be -- considered settled. Hence why we
               | still experiment with and iterate on architectural
               | patterns, because reality is ever-changing. The real
               | world from whence we get our input to produce desired
               | output is always changing and evolving, and thus so are
               | the software requirements.
               | 
               | The day there is no need to debate systems architecture
               | anymore is the heat death of the universe. Maybe before
               | that AGI will be debating it for us, but it will be
               | debated.
        
             | rybosworld wrote:
             | > That is useful, and the result can be shockingly
             | impressive depending on what you're trying to do. But the
             | limitations are so limited that the prospect of replacing
             | an entire high-skilled profession with that magic trick is
             | kind of a joke.
             | 
             | The possible outcome space is not binary (at least in the
             | near term), i.e. either AI replaces devs, or it doesn't.
             | 
             | What I'm getting at is this: There's a pervasive attitude
             | among some developers (generally older developers, in my
             | experience) that LLM's are effectively useless. If we're
             | being objective, that is quite plainly not true.
             | 
             | These conversations tend to start out with something like:
             | "Well _my_ work in particular is so complex that LLM's
             | couldn't possibly assist."
             | 
             | As the conversation grows, the tone gradually changes to
             | admitting: "Yes there are some portions of a codebase where
             | LLM's can be helpful, but they can't do _everything_ that
             | an experienced dev does."
             | 
             | It should not even be controversial to say that AI will
             | only improve at this task. That's what technology does,
             | over the long run.
             | 
             | Fundamentally, there's ego involved whenever someone says
             | "LLMs have _never_ produced usable code." That statement
             | is provably false.
        
           | groby_b wrote:
           | > Is there some expectation that these things won't improve?
           | 
           | Sure. But the expectation is quantitative improvement -
           | qualitative improvement has not happened, and is unlikely to
           | happen without major research breakthroughs.
           | 
           | LLMs are useful. They still need a lot of supervision & hand
           | holding, and they'll continue to for a long while.
           | 
           | And no, it's not "ego speaking". It's long experience. There
           | is fundamentally no reason to believe LLMs will take a leap
           | to "works reliably in subtle circumstances, and will elicit
           | requirements as necessary". (Sure, if you think SWE work is
           | typing keys and making some code, any code, appear, then LLMs
           | are a threat.)
        
           | akira2501 wrote:
           | > Is there some expectation that these things won't improve?
           | 
           | Yes. The current technology is at a dead end. The costs for
           | training and for scaling the network are not sustainable.
           | This has been obvious since 2022 and is related to the way in
           | which OpenAI created their product. There is no path
           | described for moving from the current dead end technology to
           | anything that could remotely be described as "AGI."
           | 
           | > This is ego speaking.
           | 
           | This is ignorance manifest.
        
           | simianparrot wrote:
           | At my job I review a lot of code, and I write code as well.
           | The only type of developer an LLM's output comes close to is
           | a fresh junior usually straight out of university in their
           | first real development job, with little practical experience
           | in a professional code-shipping landscape. And the majority
           | of those juniors improve drastically within a few weeks or
           | months, with handholding only at the very start and then less
           | and less guidance. This is because I teach them to reason
           | about their choices and approaches, to question assumptions,
           | and thus they learn quickly that programming rarely has one
           | solution to a problem, and that the context matters so much
           | in determining the way forward.
           | 
           | A human junior developer can learn from this tutoring and
           | rarely regresses over time. But LLMs, by design, cannot
           | and do not rewire their understanding of the problem space
           | over time, nor do they remember examples and lessons from
           | previous iterations to build upon. I have to handhold them
           | forever, and they never learn.
           | 
           | Even when they use significant parts of the existing codebase
           | as their context window they're still blind to the whole
           | reality and history of the code.
           | 
           | Now just to be clear, I do use LLM's at my job. Just not to
           | code. I use them to parse documents and assist users with
           | otherwise repetitive manual tasks. I use their strength as
           | language models to convert visual tokens parsed by an OCR to
           | grasp the sentence structure and convert that into text
           | segments which can be used more readily by users. At that
           | they are incredible, even something smaller like llama 7b.
        
           | code_for_monkey wrote:
           | I agree with you tbh, and it also just misses something huge
           | that doesnt get brought up: its not about your sniff test,
           | its about your bosses sniff test. Are you making 300k a year?
           | Thats 300 thousand reasons to replace you for a short term
           | boost in profit, companies love doing that.
        
             | simianparrot wrote:
             | I'm in a leadership role and one of the primary parties
             | responsible for hiring. So code passing my sniff test is
             | kind of important.
        
           | gosub100 wrote:
           | The reason it's so good at "rewrite this C program in Python"
           | is because it was trained on a huge corpus of code at GitHub.
           | There is no such corpus of examples of more abstract
           | commands, thus a limited amount by which it can improve.
        
           | Bjorkbat wrote:
           | > I am constantly surprised how prevalent this attitude is.
           | ChatGPT was only just released in 2022. Is there some
           | expectation that these things won't improve?
           | 
           | I mean, in a way, yeah.
           | 
           | Last 10 years were basically one hype-cycle after another
           | filled with lofty predictions that never quite panned out.
           | Besides the fact that many of these predictions kind of fell
           | short, there's also the perception that progress on these
           | various things kind of ground to a halt once the interest
           | faded.
           | 
           | 3D printers are interesting. Sure, they have gotten
           | incrementally better after the hype cycle died out, but
           | otherwise their place in society hasn't changed, nor will it
           | likely ever change. It has its utility for prototyping and as
           | a fun hobbyist machine for making plastic toys, but otherwise
           | I remember people saying that we'd be able to just 3D print
           | whatever we needed rather than relying on factories.
           | 
           | Same story with VR. We've made a lot of progress since the
           | first Oculus came out, but otherwise their role in society
           | hasn't changed much since then. The latest VR headsets are
           | still about as useless, and as bad for gaming, as ever.
           | The metaverse
           | will probably never happen.
           | 
           | With AI, I don't want to be overly dismissive, but at the
           | same time there's a growing consensus that pre-training
           | scaling laws are plateauing, and AI "reasoning" approaches
           | always seemed kind of goofy to me. I wouldn't be surprised if
           | generative AI reaches a kind of equilibrium where it
           | keeps improving incrementally, getting continuously better
           | at being a junior developer but never quite maturing beyond
           | that. The world's smartest beginner if
           | you will.
           | 
           | Which is still pretty significant mind you, it's just that
           | I'm not sure how much this significance will be felt. It's
           | not like one's skillset needs to adjust that much in order to
           | use Cursor or Claude, especially as they get better over
           | time. Even if it made developers 50% more productive, I feel
           | like the impact of this will be balanced-out to a degree by
           | declining interest in programming as a career (feel like
           | coding bootcamp hype has been dead for a while now), a lack
           | of enough young people to replace those that are aging out,
           | the fact that a significant number of developers are,
           | frankly, bad at their job and gave up trying to learn new
           | things a long time ago, etc etc.
           | 
           | I think it really only matters in the end if we actually
           | manage to achieve AGI, once that happens though it'll
           | probably be the end of work and the economy as we know it, so
           | who cares?
           | 
           | I think the other thing to keep in mind is that the history
           | of programming is filled with attempts to basically replace
           | programmers. Prior to generative AI, I remember a lot of
           | noise over low-code / no-code tools, but they were just the
           | latest chapter in the evolution of low-code / no-code. Kind
           | of surprised that even now in Anno Domini 2024 one can make a
           | living developing small-business websites due to the
           | limitations of the latest batch of website builders.
        
           | cies wrote:
           | > ChatGPT was only just released in 2022.
           | 
           | Bitcoin was released in what year? I still cannot use it for
           | payments.
           | 
           | No-code solutions exist since when? And still programmers
           | work...
           | 
           | I don't think all hyped techs are fads. For instance: we use
           | SaaS now instead of installing software locally. This
           | transition took the world by storm.
           | 
           | But tech that needs lots of ads and lots of zealots, and
           | makes incredible promises, is usually a fad.
        
           | jcranmer wrote:
           | This is at least the third time in my life that we've seen a
           | loudly heralded, purported end-of-programming technology.
           | The previous two times both ended up being damp squibs that
           | barely merit footnotes in the history of computing.
           | 
           | Why do we expect that LLMs are going to buck this trend? It's
           | not for accuracy--the previous attempts, when demonstrating
           | their proof-of-concepts, actually _reliably_ worked, whereas
           | with  "modern LLMs", virtually every demonstration manages to
           | include "well, okay, the output has a bug here."
        
             | simianparrot wrote:
             | I do seem to vaguely remember a time when there was a fair
             | amount of noise proclaiming "visual programming is making
             | dedicated programmers obsolete." I think the implication
             | was that now everybody's boss could just make the software
             | themselves or something.
             | 
             | LLM's as a product feel practically similar, because _even
             | if_ they could write code that worked in large enough
             | quantities to constitute any decently complex application,
             | the person telling them what problem to solve has to
             | understand the problem space since the LLM's can't reason.
             | 
             | Given that neither of those things are true, it's not much
             | different from visual programming tools, practically
             | speaking.
        
           | deathanatos wrote:
           | > _This is ego speaking._
           | 
           | No, it really isn't. Repeatedly, the case is that people are
           | trying to pass off GPT's work as good without actually
           | verifying the output. I keep seeing "look at this wonderful
           | script GPT made for me to do X", _and it does not pass code
           | review_, and is generally extremely low quality.
           | 
           | In one example, a bash script was generated to count the
           | number of SLoC changed by author; it was extremely convoluted,
           | and after I simplified it, I noticed that the output of the
           | simplified version differed, _because the original was
           | omitting changes that were only a single line_.
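           | 
           | (For scale, the whole task is roughly this much logic. A
           | quick sketch, in Python rather than the original bash, with
           | illustrative names:)
           | 
           |   # Sum lines added + removed per author via `git log --numstat`.
           |   import subprocess
           |   from collections import defaultdict
           | 
           |   def sloc_by_author(repo="."):
           |       out = subprocess.run(
           |           ["git", "-C", repo, "log", "--numstat", "--format=%an"],
           |           capture_output=True, text=True, check=True).stdout
           |       totals, author = defaultdict(int), None
           |       for line in out.splitlines():
           |           if not line.strip():
           |               continue
           |           parts = line.split("\t")
           |           if len(parts) == 3:         # a numstat line
           |               if parts[0].isdigit():  # "-" marks binary files
           |                   totals[author] += int(parts[0]) + int(parts[1])
           |           else:                       # the commit's author name
           |               author = line.strip()
           |       return dict(totals)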
           | 
           | In another example it took several back & forths during a
           | review to ask "where are you getting this code? / why do you
           | think this code works, when nothing in the docs supports
           | that?" and after _several_ back and forths, it was admitted
           | that GPT wrote it. The dev who wrote it would have been far
           | better served by RTFM than by a several-cycle-long review that
           | ended up with most of GPT's hallucinations being stripped
           | from the PR.
           | 
           | Those who think LLM's output is good have not reviewed the
           | output strenuously enough.
           | 
           | > _Is there some expectation that these things won 't
           | improve?_
           | 
           | Because randomized token generation inherently lacks actual
           | reasoning about the behavior of the code. _My_ code generator
           | does not.
        
             | AlphaSite wrote:
             | I think fundamentally if all you do is glue together
             | popular OSS libraries in a well understood way, then yes. You
             | may be replaced. But really you probably could be replaced
             | by a Wordpress plugin at that point.
             | 
             | The moment you have some weird library that 4 people in the
             | world know (which happens more than you'd expect) or hell
             | even something without a lot of OSS code, what exactly is an
             | LLM going to do? How is it supposed to predict code that's
             | not derived from its training set?
             | 
             | My experience thus far is that it starts hallucinating and
             | it's not really gotten any better at it.
             | 
             | I'll continue using it to generate sed and awk commands,
             | but I've yet to find a way to make my life easier with the
             | "hard bits" I want help with.
        
           | surgical_fire wrote:
           | > I am constantly surprised how prevalent this attitude is.
           | ChatGPT was only just released in 2022. Is there some
           | expectation that these things won't improve?
           | 
           | Is there any expectation that things will? Is there more
           | untapped great quality data that LLMs can ingest? Will a
           | larger model perform meaningfully better? Will it solve the
           | pervasive issue of generating plausibly sounding bullshit?
           | 
           | I used LLMs for a while, I found them largely useless for my
           | job. They were helpful for things I don't really need help
           | with, and they were mostly damaging for things I actually
           | needed.
           | 
           | > This is ego speaking.
           | 
           | Or maybe it was an accurate assessment for his use case, and
           | your wishful thinking makes you think it was his ego
           | speaking.
        
             | rybosworld wrote:
             | > Is there any expectations that things will?
             | 
             | Seems like an odd question. The answer is obviously yes:
             | There is a very pervasive expectation that LLM's will
             | continue to improve, and it seems odd to suggest otherwise.
             | There are hundreds of billions of dollars being spent on AI
             | training and that number is increasing each year.
             | 
             | > Is there more untapped great quality data that LLMs can
             | ingest?
             | 
             | Why wouldn't there be? AI's are currently trained on the
             | internet but that's obviously not the only source of data.
             | 
             | > Will a larger model perform meaningfully better?
             | 
             | The answer to this, is also yes. It is well established
             | that, all else being equal, a bigger model is better than a
             | smaller model, assuming that the smaller model hasn't
             | already captured all of the available information.
        
               | tobias3 wrote:
               | We recently had a few submissions about this topic. Most
               | recently Ilya's talk. Further improvement will be a
               | research-type problem. This trend was clear for a while
               | already, but is reaching the mainstream now. The billions
               | of dollars spent go into scaling existing technology. If
               | it doesn't scale anymore and becomes a research problem
               | again, rational companies will not continue to invest in
               | this area (at least without the usual research
               | arrangements).
        
               | surgical_fire wrote:
               | > The answer is obviously yes: There is a very pervasive
               | expectation that LLM's will continue to improve, and it
               | seems odd to suggest otherwise. There is hundreds of
               | billions of dollars being spent on AI training and that
               | number is increasing each year.
               | 
               | That makes an assumption that throwing dollars at AI
               | training is a surefire way to solve the many shortcomings
               | of LLMs. It is a very optimistic assumption.
               | 
               | > Why wouldn't there be? AI's are currently trained on
               | the internet but that's obviously not the only source of
               | data.
               | 
               | "The Internet" basically encompasses all meaningful
               | sources of data available, especially if we are talking
               | specifically about software development. But even beyond
               | that, it is very unclear what other high quality data it
               | would consume that would improve things.
               | 
               | > The answer to this, is also yes. It is well established
               | that, all else being equal, a bigger model is better than
               | a smaller model, assuming that the smaller model hasn't
               | already captured all of the available information.
               | 
               | I love how you conveniently sidestepped the part where I
               | ask if it would improve the pervasive issue of generating
               | plausibly sounding bullshit.
               | 
               | The assumption that generative AI will improve is as
               | valid as the assumption that it will plateau. It is quite
               | possible that what we are seeing is "as good as it gets",
               | and some major breakthrough, that may or may not happen
               | on our lifetime, is needed.
        
           | deegles wrote:
           | LLMs as they currently exist will never yield a true,
           | actually-sentient AI. maybe they will get better in some
           | ways, but it's like asking if a bird will ever fly to the
           | moon. Something else is needed.
        
             | mlboss wrote:
             | A bird can fly to the moon if it keeps on improving every
             | month.
        
               | VeejayRampay wrote:
               | it literally cannot though, unless it becomes some other
               | form of life that doesn't need oxygen, that's the whole
               | thing with this analogy, it's ironically suited to the
               | discourse
        
           | bdangubic wrote:
           | no ego but incompetence :)
        
           | shakezooola wrote:
           | >This is ego speaking.
           | 
           | Very much so. These things are moving so quickly and agentic
           | systems are already writing complete codebases. Give it a few
           | years. No matter how 1337 you think you are, they are very
           | likely to surpass you in 5-10 years.
        
           | jazz9k wrote:
           | Have you ever used LLMs to generate code? It's not good
           | enough yet.
           | 
           | In addition to this, most companies aren't willing to give
           | away all of their proprietary IP and knowledge through 3rd
           | party servers.
           | 
           | It will be a while before engineering jobs are at risk.
        
         | ksdnjweusdnkl21 wrote:
         | Hard to believe anyone is getting contacted more now than in
         | 2020. But I agree with the general sentiment. I'll do nothing
         | and if I get replaced then I get replaced and switch to
         | woodworking or something. But if LLMs do not pan out then I'll
         | be ahead of all the people who wasted their time with that.
        
         | ricardobeat wrote:
         | That's short term thinking in my opinion. LLMs will not replace
         | developers by writing better code: it's the systems we work on
         | that will start disappearing.
         | 
         | Every SaaS and marketplace is at risk of extinction, superseded by
         | AI agents communicating ad-hoc. Management and business
         | software replaced by custom, one-off programs built by AI. The
         | era of large teams painstakingly building specialized software
         | for niche use cases will end. Consequently we'll have millions
         | of unemployed developers, except for the ones maintaining the
         | top level orchestration for all of this.
        
           | dimgl wrote:
           | > most of the actual systems we work on will simply start
           | disappearing.
           | 
           | What systems do you think are going to start disappearing?
           | I'm unclear how LLMs are contributing to systems becoming
           | redundant.
        
             | rqtwteye wrote:
             | I think a lot of CRUD apps will disappear. A lot of the
             | infrastructure may also be done by AI instead of some dude
             | writing tons of YAML code.
        
               | betaby wrote:
               | The infrastructure is not 'some dude writing tons of
               | YAML code'.
        
               | munk-a wrote:
               | CRUD apps should already be disappearing. You should be
               | using a framework that auto-generates the boilerplate
               | stuff.
        
             | idopmstuff wrote:
             | Recovering enterprise SaaS PM here. I don't necessarily
             | know that a lot of enterprise SaaS will disappear, but I do
             | think that a lot of the companies that build it will go out
             | of business as their customers start to build more of their
             | internal systems with LLMs vs. buy from an existing vendor.
             | This is probably more true at the SMB level for now than
             | actual enterprise, both for technical and internal politics
             | reasons, but I expect it to spread.
             | 
             | As a direct example from myself, I now acquire and run
             | small e-commerce brands. When I decided to move my
             | inventory management from Google Sheets into an actual
             | application, I looked at vendors but ultimately just
             | decided to build my own. My coding skills are pretty
             | minimal, but sufficient that I was able to produce what I
             | needed with the help of LLMs. It has the advantages of
             | being cheaper than buying and also purpose-built to my
             | needs.
             | 
             | So yeah, basically the tl;dr is that for internal tools, I
             | believe that LLMs giving non-developers sufficient coding
             | skills will shift the build vs. buy calculus squarely in
             | the direction of build, with the logical follow-on effects
             | to companies trying to sell internal tools software.
        
               | dingnuts wrote:
               | > go out of business as their customers start to build
               | more of their internal systems with LLMs vs. buy from an
               | existing vendor.
               | 
               | there is going to be so much money to make as a
               | consultant fixing these setups, I can't wait!
        
               | achrono wrote:
               | Long-time enterprise SaaS PM here, and sorry, this does
               | not make any sense. The SMB segment is likely to be the
               | least exposed to AI, and software, and the concept of DIY
               | software through AI.
               | 
               | As you visualize whole swaths of human workers getting
               | automated away, also visualize the nitty gritty of day-
               | to-day work with AI. If it gets something wrong, it will
               | say "I apologize" until you, dear user, are blue in the
               | face. If an actual person tried to do the same, the
               | blueness would instead be on their, not your, face.
               | Therein lies the value of a human worker. The big
               | question, I think, is going to be: is that value
               | commensurate with what we're making on our paycheck right
               | now?
        
               | latentsea wrote:
               | For trivial setups this might work, but for anything
               | sufficiently complex that actually hits on real
               | complexity in the domain, it's hard to see any LLM doing
               | an adequate job. Especially if the person driving it
               | doesn't know what they don't know about the domain.
        
             | Terr_ wrote:
             | Not parent poster, but I imagine it will be a bit like the
             | horror stories of companies (ab)using spreadsheets in lieu
             | of a proper program or database: They will use an LLM to
             | get half-working stuff "for free" and consider it a
             | bargain, especially if the detectable failures can be spot-
             | fixed by an intern doing data-entry.
             | 
             | I think we'll see it first in internal reporting tools,
             | where the stakeholder tries to explain something very
             | specific they want to see (logical or not) and when it's
             | visibly wrong they can work around it privately.
        
           | asdev wrote:
           | you do realize that these so-called "one-off" AI programs
           | would need to be maintained? Most people paying for SaaS are
           | paying for the support/maintenance rather than features,
           | which AI can't handle. No one will want to replace any SaaS
           | they depend on with a poorly generated variant that they then
           | have to maintain themselves.
        
             | mlinhares wrote:
             | Nah, you only write it and it runs by itself forever in the
             | AI cloud.
             | 
             | Sometimes I wonder if people saying this stuff have
             | actually worked in development at all.
        
             | m_ke wrote:
             | Most people don't want cloud hosted subscription software,
             | we do it that way because VCs love vendor lock in and
             | recurring revenue.
             | 
             | Old school desktop software takes very little maintenance.
             | Once you get rid of user tracking, AB testing, monitoring,
             | CICD pipelines, microservices, SOC, multi tenant
             | distributed databases, network calls and all the other crap
             | things get pretty simple.
        
         | idopmstuff wrote:
         | > But I think that's generations away at best.
         | 
         | I'm not sure whether you mean human generations or LLM
         | generations, but I think it's the latter. In that case, I agree
         | with you, but also that doesn't seem to put you particularly
         | far off from OP, who didn't provide specific timelines but also
         | seems to be indicating that the elimination of most engineers
         | is still a little ways away. Since we're seeing a new
         | generation of LLMs every 1-2 years, would you agree that in ~10
         | years at the outside, AI will be able to do the things that
         | would cause you to gladly retire?
        
           | simianparrot wrote:
           | I mean human generations because to do system architecture,
           | design and development well you need something that can at
           | least match an average human brain in reasoning, logic and
           | learning plasticity.
           | 
           | I don't think that's impossible but I think we're quite a few
           | human generations away from that. And scaling LLM's is not
           | the solution to that problem; an LLM is just a small but
           | important part of it.
        
             | munk-a wrote:
             | I'd be cautious about describing anything in tech as human
             | generations away because we're only about a single human
             | generation into a lot of this industry existing.
        
         | rdrsss wrote:
         | +1 to this sentiment for now, I give them a try every 6 months
         | or so to see how they advance. And for pure code generation,
         | for my workflow, I don't find them very useful yet. For parsing
         | large sets of documentation though, not bad. They haven't
         | creeped their way into my usual research loop just yet, but I
         | could see that becoming a thing.
         | 
         | I do hear some of my junior colleagues use them now and again,
         | and gain some value there. And if llm's can help get people up
         | to speed faster that'd be a good thing. Assuming we continue to
         | make the effort to understand the output.
         | 
         | But yeah, agree, I raise my eyebrow from time to time, but I
         | don't see anything jaw dropping yet. Right now they just feel
         | like surrogate googlers.
        
         | modeless wrote:
         | > If there's ever a day where there's an AI that can do these
         | things, then I'll gladly retire. But I think that's generations
         | away at best.
         | 
         | People really believe it will be generations before an AI will
         | approach human level coding abilities? I don't know how a
         | person could seriously consider that likely given the pace of
         | progress in the field. This seems like burying your head in the
         | sand. Even the whole package of translating high level ideas
         | into robust deployed systems seems possible to solve within a
         | decade.
         | 
         | I believe there will still be jobs for technical people even
         | when AI is good at coding. And I think they will be enjoyable
         | and extremely productive. But they will be different.
        
           | handzhiev wrote:
           | I've heard similar statements about human translation - and
           | look where the translators are now
        
         | TZubiri wrote:
         | "Nothing because I'm a senior and LLM's never provide code that
         | pass my sniff test, and it remains a waste of time"
         | 
         | That's why the question is future proof. Models get better with
         | time, not worse.
        
           | layer8 wrote:
           | Models don't get better just by time passing. The specific
           | reasons for why they've been getting better don't necessarily
           | look like they'll extend indefinitely into the future.
        
           | latentsea wrote:
           | If the last-mile problems of things like autonomous vehicles
           | have been anything to go by, it seems the last mile problems
           | of entrusting your entire business operations to complete
           | black box software, or software written by novices talking
           | to a complete black box, will be infinitely worse.
           | 
           | There's plenty of low-code, no-code solutions around, and yet
           | still lots of software. The slice of the pie will change, but
           | it's very hard to see it being eliminated entirely.
           | 
           | Ultimately it's going to come down to "do I feel like I can
           | trust this?" and with little to no way to be certain you can
           | completely trust it, that's going to be a harder and harder
           | sell as risk increases with the size, complexity, and value
           | of the business processes being managed.
        
             | janalsncm wrote:
             | Even if seniors still do the last mile, that's a
             | significant reduction from the full commute they were paid
             | for previously. Are you saying seniors should concede this?
        
         | luddite2309 wrote:
         | This is a fascinating comment, because it shows such a mis-
         | reading of the history and point of technology (on a tech
         | forum). Technological progress always leads to loss of skilled
         | labor like your own, usually resulting in lower quality (but
         | higher profits and often lower prices). Of COURSE an LLM won't
         | be able to do work as well as you, just as industrial textile
         | manufacturing could not, and still does not, produce the
         | quality of work of 19th century cottage industry weavers; that
         | was in fact one of their main complaints.
         | 
         | As an aside, at the top of the front page right now is a
         | sprawling essay titled "Why is it so hard to buy things that
         | work well?"...
        
           | packetlost wrote:
           | This is a take that shows a complete lack of understanding
           | of what software engineering is actually about.
        
             | munk-a wrote:
             | The truth is somewhere in the middle. Do you remember the
             | early 2000s boom of web developers that built custom
             | websites for clients ranging from e-commerce sites to pizza
             | restaurants? Those folks have found new work as the
             | pressure from one-size fits all CMS providers (like
             | Squarespace) and much stronger frameworks for simple front-
             | ends (like node) have squeezed that market down to just
             | businesses that actually need complex custom solutions and
             | reduced the number of people required to maintain those.
             | 
             | It's likely we'll see LLMs used to build a lot of the cheap
             | stuff that previously existed as arcane excel macros (I've
             | already seen less technical folks use it to analyze
             | spreadsheets) but there will remain hard problems that
             | developers are needed to solve.
        
           | brink wrote:
           | Comparing an LLM to an industrial textile machine is
           | laughable, because one is consistent and reliable while the
           | other is not.
        
         | markerdmann wrote:
         | isn't "delve" a classic tell of gpt-generated output? i'm
         | pretty sure simianparrot is just trolling us. :-)
        
         | jensensbutton wrote:
         | The question isn't about what you'll do when you're replaced by
         | an LLM, it's what you're doing to future proof your job. There
         | is a difference. The risk to hedge against is the productivity
         | boost brought by LLMs resulting in a drop in the need for new
         | software engineers. This will put pressure on jobs (we simply
         | don't need as many as we used to, so we're cutting 15%) AND
         | wages (more engineers looking for fewer jobs with a larger part
         | of their utility being commoditized).
         | 
         | Regardless of how sharp you keep yourself, you're still
         | subject to the macro environment.
        
           | simianparrot wrote:
           | I'm future proofing my job by ensuring I remain someone whose
           | brain is tuned to solving complex problems, and to do that
           | most effectively I find ways to keep being engaged in both
           | the fundamentals of programming (as already mentioned) and
           | the higher-level aspects: Teaching others (which in turn
           | teaches me new things) and being in leadership roles where I
           | can make real architectural choices in terms of what hardware
           | to run our software on.
           | 
           | I'm far more worried about mental degradation due to any
           | number of circumstances -- unlucky genetics, infections, what
           | have you. But "future proofing" myself against some of that
           | has the same answer: Remain curious, remain mentally
           | ambidextrous, and don't let other people (or objects) think
           | for me.
           | 
           | My brain is my greatest asset both for my job and my private
           | life. So I do what I can to keep it in good shape, which
           | incidentally also means replacing me with a parrot is
           | unlikely to be a good decision.
        
           | luckylion wrote:
           | Are you though? Until the AI-augmented developer provides
           | better code at lower cost, I'm not seeing it. Senior
           | developers aren't paid well because they can write code very
           | fast, it's because they can make good decision and deliver
           | projects that not only work, but can be maintained and built
           | upon for years to come.
           | 
           | I know a few people who have been primarily programming for
           | 10 years but are not seniors. 5 of them (probably 10 or more,
           | but let's not overdo it), with AI, cannot replace one senior
           | developer unless you make that senior do super basic tasks.
        
         | arisAlexis wrote:
         | So your argument is:
         | 
         | There is some tech that is getting progressively better.
         | 
         | I am high on the linear scale
         | 
         | Therefore I don't worry about it catching up to me ever
         | 
         | And this is the top voted argument.
        
         | swishman wrote:
         | The arrogance of comments like this is amazing.
         | 
         | I think it's an interesting psychological phenomenon similar to
         | virtue signalling. Here you are signalling to the programmer
         | in-group how good of a programmer you are. The more dismissive
         | you are the better you look. Anyone worried about it reveals
         | themself as a bad coder.
         | 
         | It's a luxury belief, and the better LLMs get the better you
         | look by dismissing them.
        
           | rybosworld wrote:
           | This is spot on.
           | 
           | It's essentially like saying "What I do in particular, is
           | much too difficult for an AI to ever replicate." It is always
           | in part, humble bragging.
           | 
           | I think some developers like to pretend that they are
           | exclusively solving problems that have never been solved
           | before. Which sure, the LLM architecture in particular might
           | never be better than a person for the novel class of problem.
           | 
           | But the reality is, an extremely high percentage of all
           | problems (and by reduction, the lines of code that build that
           | solution) are not novel. I would guesstimate that less than 1
           | out of 10,000 developers are solving truly novel problems
           | with any regularity. And those folks tend to work at places
           | like Google Brain.
           | 
           | That's relevant because LLM's can likely scale forever in
           | terms of solving the already solved.
        
         | sigmarule wrote:
         | My perspective is that if you are unable to find ways to
         | improve your own workflows, productivity, output quality, or
         | any other meaningful metric using the current SOTA LLM models,
         | you should consider the possibility that it is a personal
         | failure at least as much as you consider the possibility that
         | it is a failure of the models.
         | 
         | A more tangible pitfall I see people falling into is testing
         | LLM code generation using something like ChatGPT and not
         | considering more involved usage of LLMs via interfaces more
         | suited for software development. The best results I've managed
         | to realize on our codebase have not been with ChatGPT or IDEs
         | like Cursor, but a series of processes that iterate over our
         | full codebase multiple times to extract various levels of
         | reusable insights, like general development patterns, error
         | handling patterns, RBAC-related patterns, extracting example
         | tasks for common types of tasks based on git commit histories
         | (i.e. adding a new API endpoint related to XYZ), common bugs or
         | failure patterns (again by looking through git commit
         | histories), which create a sort of library of higher-level
         | context and reusable concepts. Feeding this into o1, and having
         | a pre-defined "call graph" of prompts to validate the output,
         | fix identified issues, consider past errors in similar types of
         | commits and past executions, etc has produced some very good
         | results for us so far. I've also found much more success with
         | ad-hoc questions after writing a small static analyzer to trace
         | imports, variable references->declarations, etc, to isolate the
         | portions of the codebase to use for context rather than RAG-
         | based searching that a lot of LLM-centric development tools
         | seem to use. It's also worth mentioning that performance
         | quality seems to be very much influenced by language; I
         | thankfully primarily work with Python codebases, though I've
         | had success using it against (smaller) Rust codebases as well.
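         | 
         | To give a rough idea of the import-tracing piece, here is a
         | very simplified sketch (the real version also follows variable
         | references to their declarations; the names are illustrative):
         | 
         |   # Collect the local .py files an entry point transitively
         |   # imports, to hand to the model as context instead of relying
         |   # on RAG-style retrieval.
         |   import ast
         |   from pathlib import Path
         | 
         |   def local_imports(path: Path, root: Path) -> set[Path]:
         |       found = set()
         |       for node in ast.walk(ast.parse(path.read_text())):
         |           if isinstance(node, ast.Import):
         |               names = [a.name for a in node.names]
         |           elif isinstance(node, ast.ImportFrom) and node.module:
         |               names = [node.module]   # skips bare relative imports
         |           else:
         |               continue
         |           for name in names:
         |               mod = root / (name.replace(".", "/") + ".py")
         |               pkg = root / name.replace(".", "/") / "__init__.py"
         |               if mod.exists():
         |                   found.add(mod)
         |               elif pkg.exists():
         |                   found.add(pkg)
         |       return found
         | 
         |   def context_files(entry: Path, root: Path) -> set[Path]:
         |       seen, stack = set(), [entry]
         |       while stack:
         |           f = stack.pop()
         |           if f not in seen:
         |               seen.add(f)
         |               stack.extend(local_imports(f, root))
         |       return seen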
        
           | j45 wrote:
           | Sometimes, if it's as much work to set up and keep the tech
           | running as it is to write the thing yourself, it's worth
           | thinking about the tradeoffs.
           | 
           | The interesting piece is a person with the experience to
           | push LLMs to output the perfect little function or utility
           | to solve a problem, and to collect enough of them to get
           | somewhere.
        
         | Const-me wrote:
         | > CPU-only custom 2D pixel blitter engine I wrote to make 2D
         | games in styles practically impossible with modern GPU-based
         | texture rendering engines
         | 
         | I'm curious what's so special about that blitting?
         | 
         | BTW, pixel shaders in D3D11 can receive screen-space pixel
         | coordinates in SV_Position semantic. The pixel shader can cast
         | .xy slice of that value from float2 to int2 (truncating towards
         | 0), offset the int2 vector to be relative to the top-left of
         | the sprite, then pass the integers into Texture2D.Load method.
         | 
         | Unlike the more commonly used Texture2D.Sample, Texture2D.Load
         | method delivers a single texel as stored in the texture i.e. no
         | filtering, sampling or interpolations. The texel is identified
         | by integer coordinates, as opposed to UV floats for the Sample
         | method.
        
       | ramesh31 wrote:
       | Grappling with this hard right now. Anyone who is still of the
       | "these things are stupid and will never replace me" mindset needs
       | to sober up real quick. AGI level agentic systems are coming, and
       | _fast_. A solid 90% of what we thought of as software engineering
       | for the last 30 years will be completely automated by them in the
       | next couple years. The only solution I see so far is to be the
       | one building them.
        
         | TechDebtDevin wrote:
         | As someone who's personally tried ( with lots of effort) to
         | build agentic assistants/systems 3+ times over the course of
         | the last few years I haven't seen any huge improvements in the
         | quality of output. I think you greatly underestimate the
         | plateau these models are running into.
         | 
         | Grok and o1 are great examples of how these plateaus also won't
         | be overcome with more capital and compute.
         | 
         | Agentic systems might become great search/research tools to
         | speed up the time it takes to gather (human created) info from
         | the web, but I don't see them creating anything impressive or
         | novel on their own without a completely different architecture.
        
           | ramesh31 wrote:
           | >As someone who's personally tried ( with lots of effort) to
           | build agentic assistants/systems 3+ times over the course of
           | the last few years I haven't seen any huge improvements in
           | the quality of output. I think you greatly underestimate the
           | plateau these models are running into.
           | 
           | As someone who's personally tried with great success to build
           | agentic systems over the last 6 months, you need to be aware
           | of how fast these things are improving. The latest Claude
           | Sonnet makes GPT-3.5 look like a research toy. Things are
           | trivial now in the code gen space that were impossible just
           | earlier this year. Anyone not paying attention is missing the
           | boat.
        
             | TechDebtDevin wrote:
             | >As someone who's personally tried with great success to
             | build agentic systems over the last 6 months.
             | 
              | Like what? You're the only person I've seen claim they've
              | built agentic systems with great success. I don't regard
              | improved chat-bot outputs as success; I'm talking about
              | agentic systems that can roll their own auth from scratch,
              | or gather data from the web independently and build even a
              | mediocre prediction model with that data. Or code anything
              | halfway decently in something other than Python.
        
       | byyoung3 wrote:
        | you don't become an SWE
        
       | sirwhinesalot wrote:
       | My job is not to write monospace, 80 column-wide text but to find
       | solutions to problems.
       | 
        | The solution often involves software, but what that software does
        | and how it does it can vary wildly, and it is my job to know how
        | to prioritize the right things over the wrong things and get to a
        | decent solution as quickly as possible.
       | 
       | Should we implement this using a dependency? It seems it is too
       | big / too slow, is there an alternative or do we do it ourselves?
       | If we do it ourselves how do we tackle this 1000 page PDF full of
       | diagrams?
       | 
       | LLMs cannot do what I do and I assume it will take a very long
       | time before they can. Even with top of the line ones I'm
       | routinely disappointed in their output on more niche subjects
       | where they just hallucinate whatever crap to fill in the gaps.
       | 
       | I feel bad for junior devs that just grab tickets in a treadmill,
       | however. They will likely be replaced by senior people just
       | throwing those tickets at LLMs. The issue is that seniors age and
       | without juniors you cannot have new seniors.
       | 
        | Let's hope this nonsense doesn't lead to our field falling apart.
        
       | mym1990 wrote:
       | As someone who went to a bootcamp a good while ago...I am now
       | formally pursuing a technical masters program which has an angle
        | on AI (just enough to understand where to apply it, not doing any
       | research).
        
       | isatty wrote:
        | Autocomplete and snippets have been a thing for a long time and
        | they haven't come for my job yet, and I suspect they never will.
        
       | askonomm wrote:
        | I feel the influencer crowd is overblowing the actual utility of
        | LLMs massively. Kind of feels akin to the "cryptocurrency will
        | take over the world" trope 10 years ago, and yet .. I don't see
        | any crypto in my day-to-day life to this day. Will it improve
        | general productivity and boring tasks nobody wants to do? Sure,
        | but to think any more than that, frankly I'd like some hard
        | evidence of it being actually able to "reason". And reason better
        | than most devs I've ever worked with, because quite honestly
        | humans are also pretty bad at writing software, and LLMs learn
        | from humans, so ...
        
         | asdev wrote:
          | the hype bubble is nearing its top
        
       | ravedave5 wrote:
        | So what I've seen so far is that LLMs are amazing for small,
        | self-contained problems. Anything spanning a whole project they
        | aren't quite up to the task yet. I think we're going to need a
        | lot more processing power to get to that point. So our job will
        | change, but I have a feeling it will be slow and steady.
        
       | sdybskiy wrote:
       | I just copied the html from this thread into Claude to get a
       | summary. I think being very realistic, a lot of SWE job
       | requirements will be replaced by LLMs.
       | 
        | The expertise to pick the right tool for the right job based on
        | previous experience that senior engineers possess is something
        | that can probably be taught to an LLM.
       | 
       | Having the ability to provide a business case for the technology
       | to stakeholders that aren't technologically savvy is going to be
       | a people job for a while still.
       | 
       | I think positioning yourself as an expert / bridge between
       | technology and business is what will future-proof a lot of SWE,
       | but in reality, especially at larger organizations, there will be
       | a trimming process where the workload of what was thought to need
       | 10 engineers can be done with 2 engineers + LLMs.
       | 
       | I'm excited about the future where we're able to create software
       | quicker and more contextual to each specific business need.
       | Knowing how to do that can be an advantage for software engineers
       | of different skill levels.
        
         | dayvid wrote:
         | I'd argue design and UX will be more important for engineers.
          | You need taste to direct LLMs. You can automate some things and
          | maybe have it do data-driven feedback loops, but there are so
          | many random industries/locations with odd requirements, changes
          | in trends, etc. that it will require someone to oversee and
          | make adjustments.
        
       | iepathos wrote:
       | Better tools that accelerate how fast engineers can produce
       | software? That's not a threat, just a boon. I suspect the actual
       | transition will just be people learning/focusing on somewhat
       | different higher level skills rather than lower level coding.
        | Like going from assembly to C, we're hoping we can transition
        | more towards natural language.
        | 
        | > junior to mid level software engineering will disappear mostly
        | 
        | People don't magically go to senior. Can't get seniors without
        | junior and mid to level up. We'll always need to take in and
        | train new blood.
        
       | hooverd wrote:
       | Software quality, already bad, will drop even more as juniors
       | outsource all their cognition to the SUV for the mind. Most
       | "developers" will be completely unable to function without their
       | LLMs.
        
       | rqtwteye wrote:
       | My plan is to retire in 1-2 years, take a break and then, if I
       | feel like it, go all in on AI. Right now it's at that awkward
       | spot where AI clearly shows potential but from my experience it's
       | not really improving my productivity on complex tasks.
        
       | agentultra wrote:
       | Learn to think above the code: learn how to model problems and
       | reason about them using maths. There are plenty of tools in this
       | space to help out: model checkers like TLA+ or Alloy, automated
       | theorem provers such as Lean or Agda, and plain old notebooks and
       | pencils.
       | 
       | Our jobs are not and have never been: _code generators_.
       | 
       | Take a read of Naur's essay, _Programming as Theory Building_
        | [0]. The gist is that it's the theory you build in your head
       | about the problem, the potential solution, and what you know
       | about the real world that is valuable. Source code depreciates
       | over time when left to its own devices. It loses value when the
       | system it was written for changes, dependencies get updated, and
       | it bit-rots. It loses value as the people who wrote the original
       | program, or worked with those who did, leave and the organization
       | starts to forget what it was for, how it works, and what it's
       | supposed to do.
       | 
       | You still have to figure out what to build, how to build it, how
       | it serves your users and use cases, etc.
       | 
        | LLMs, at best, generate some code. Plain language is not
        | specific enough to produce reliable, accurate results. So you'll
        | forever be trying to hunt for increasingly subtle errors. The
        | training data will run out and models degrade on synthetic
        | inputs. So... it's only going to get "so good," no matter how
        | much context they can maintain.
       | 
       | And your ability, as a human, to find those errors will be
       | quickly exhausted. There are way too few studies on the effects
       | of informal code review on error rates in production software. Of
        | those that have been conducted, any statistically significant
        | effect on error rates seems to disappear once humans have read
        | ~200 SLOC in an hour.
       | 
       | I suspect a good source of income will come from having to
       | untangle the mess of code generated by teams that rely too much
       | on these tools that introduce errors that only appear at scale or
       | introduce subtle security flaws.
       | 
       | Finally, it's not "AI," that's replacing jobs. It's humans who
       | belong to the owning class. They profit from the labour of the
       | working class. They make more profit when they can get the same,
       | or greater, amount of value while paying less for it. I think
       | these tools, "inevitably," taking over and becoming a part of our
       | jobs is a loaded argument with vested interests in that becoming
       | true so that people who own and deploy these tools can profit
       | from it.
       | 
       | As a senior developer I find that these tools are not as useful
       | as people claim they are. They're capable of fabricating test
       | data... usually of quality that requires inspection... and
       | really, who has time for that? And they can generate boilerplate
       | code for common tasks... but how often do I need boilerplate
       | code? Rarely. I find the answers it gives in summaries to contain
       | completely made-up BS. I'd rather just find out the answer
       | myself.
       | 
       | I fear for junior developers who are looking to find a footing.
       | There's no royal road. Getting your answers from an LLM for
       | everything deprives you of the experience needed to form your own
       | theories and ideas...
       | 
       | so focus on that, I'd say. Think above the code. Understand the
       | human factors, the organizational and economic factors, and the
       | technical ones. You fit in the middle of all of these moving
       | parts.
       | 
       | [0] https://pages.cs.wisc.edu/~remzi/Naur.pdf
       | 
        |  _Update_: forgot to add the link to the Naur essay
        
       | zitterbewegung wrote:
        | Learning how to use LLMs and seeing what works and what doesn't.
        | When I've used them to code, after a while I can start to figure
        | out where they hallucinate. I have made an LLM system that
        | performs natural language network scanning called
        | http://www.securday.com which I presented at DEF CON (hacker
        | conference). Even if it has no effect on your employment, it is
        | fun to experiment with things regardless.
        
       | bravetraveler wrote:
       | As a systems administrator now SRE, it's never really been about
       | _my_ code... if code _at all._
       | 
       | Where I used to be able to get by with babysitting shell scripts
       | that only lived on the server, we're now in a world with endless
       | abstraction. I don't hazard to guess; just learn what I can to
       | remain adaptable.
       | 
        | The fundamentals generally apply.
        
       | veidelis wrote:
       | I will not believe the AI takeover until there's evidence. I
       | haven't seen any examples, apart from maybe TODO list apps.
       | Needless to say, that's nowhere near the complexity that is
        | required at most jobs. Even if my career was endangered, I would
       | continue the path I've taken so far: have a basic understanding
       | of as much as possible (push out the edges of knowledge circle or
       | whatever it's called), and strive to have an expert knowledge
       | about maybe 1 or 2, or 3 subjects which pay for your daily bread.
       | Basically just be good at what you do, and that should be fine.
       | As for beginners, I advise to dive deep into a subject, start
       | with a solid foundation and be sure to have a hands-on approach,
       | while maintaining a consistent effort.
        
       | mellosouls wrote:
       | It depends on whether you think they are a paradigm change (at
       | the very least) or not. If you don't then either you will be
       | right or you will be toast.
       | 
       | For those of us who do think this is a revolution, you have two
       | options:
       | 
       | 1. Embrace it.
       | 
       | 2. Find another career, presumably in the trades or other hands-
       | on vocations where AI ingress will lag behind for a while.
       | 
       | To embrace it you need to research the LLM landscape as it
       | pertains to our craft and work out what interests you and where
        | you might best be able to surf the new wave; it is rapidly
        | moving and growing.
       | 
       | The key thing (as it ever was) is to build real world projects
       | mastering LLM tools as you would an IDE or language; keep on top
       | of the key players, concepts and changes; and use your soft
       | skills to help open-eyed others follow the same path.
        
       | nathan_anecone wrote:
       | I think fully automated LLM code generation is an inherently
       | flawed concept, unless the entire software ecosystem is automated
       | and self-generating. I think if you carry out that line of
       | thought to its extreme, you'd essentially need a single Skynet
       | like AI that controls and manages all programming languages,
       | packages, computer networks internally. And that's probably going
       | to remain a sci-fi scenario.
       | 
       | Due to a training-lag, LLMs usually don't get the memo when a
       | package gets updated. When these updates happen to patch security
       | flaws and the like, people who uncritically push LLM-generated
       | code are going to get burned. Software moves too fast for
       | history-dependent AI.
       | 
       | The conceit of fully integrating all needed information in a
       | single AI system is unrealistic. Serious SWE projects, that
       | attempt to solve a novel problem or outperform existing
       | solutions, require a sort of conjectural, visionary and
       | experimental mindset that won't find existing answers in training
       | data. So LLMs will get good at generating the billionth to-do app
       | but nothing boundary pushing. We're going to need skilled people
       | on the bleeding edge. Small comfort, because most people working
       | in the industry are not geniuses, but there is also a reflexive
       | property to the whole dynamic. LLMs open up a new space of
       | application possibilities which _are not represented in existing
       | training data_ so I feel like you could position yourself
       | comfortably by getting on board with startups that are actually
       | applying these new technologies _creatively_. Ironically, LLMs
        | are trained on last-gen code, so they obsolete yesterday's jobs.
       | But you won't find any training data for _solutions which have
       | not been invented yet_. So ironically AI will create a niche for
       | new application development which is not served by AI.
       | 
        | Already, if you try to use LLMs for help on some of the new LLM
        | frameworks that came out recently, like LangChain or Autogen,
        | they are far less helpful than on something that has a
        | long-tailed distribution in the training data. (And these
        | frameworks get updated constantly, which feeds into my earlier
        | point about training-lag.)
       | 
        | This entire deep learning paradigm of AI is not able to solve
        | problems creatively. When it tries to, it "hallucinates".
       | 
        | Finally, I still think a knowledgeable, articulate developer
        | PLUS AI will consistently outperform an AI MINUS a knowledgeable,
        | articulate developer. More emphasis may shift onto "problem
       | formulation", getting good at writing half natural language, half
       | code pseudo-code prompts and working with the models
       | conversationally.
       | 
        | There's a real problem too with model collapse: as AI-generated
        | code becomes more common, you remove the tails of the
        | distribution, resulting in more generic code without a human
        | touch. There are only so many cycles of retraining on this
        | regurgitated data before you start encountering not just
        | diminishing returns but actual damage to the model. So I think
        | LLMs will be self-limiting.
       | 
       | So all in all I think LLMs will make it harder to be a mediocre
       | programmer who can just coast by doing highly standardized
       | janitorial work, but it will create more value if you are trying
       | to do something interesting. What that means for jobs is a mixed
       | picture. Fewer boring, but still paying jobs, but maybe more work
       | to tackle new problems.
       | 
        | I think only programmers understand the nuances of their field,
        | however; people on the business side are going to just look at
        | their expense spreadsheets and charts, and will probably
        | oversimplify and overestimate. But that could self-correct, and
        | they might eventually concede they're going to have to hire
        | developers.
       | 
       | In summary, the idea that LLMs will completely take over coding
       | logically entails an AI system that completely contains the
       | entire software ecosystem within itself, and writes and maintains
       | every endpoint. This is science fiction. Training lag is a real
       | limitation since software moves too fast to constantly retrain on
       | the latest updates. AI itself creates a new class of interesting
       | applications that are not represented in the training data, which
       | means there's room for human devs at the bleeding edge.
       | 
       | If you got into programming just because it promised to be a
       | steady, well-paying job, but have no real interest in it, AI
       | might come for you. But if you are actually interested in the
       | subject and understand that not all problems have been solved,
       | there's still work to be done. And unless we get a whole new
       | paradigm of AI that is not data-dependent, and can generate new
       | knowledge whole cloth, I wouldn't be too worried. And if that
       | does happen, too, the whole economy might change and we won't
       | care about dinky little jobs.
        
       | neilv wrote:
        | I was advising this MBA student's nascent startup (with the idea
        | I might be the technical cofounder once they graduate), and they
        | asked whether LLMs would help.
       | 
       | So I listed some ways that LLMs practically would and wouldn't
        | fit into the workflow of the service they were doing. And related
       | to a bunch of other stuff, including how to make the most of the
       | precious customer real-world access they'd have, and generating a
       | success in the narrow time window they have, and the special
       | obligations of that application domain niche.
       | 
       | Later, I mentally replayed the conversation in my head (as I do),
       | and realized they were actually probably asking about _using an
        | LLM to generate the startup's prototype/MVP for the software
       | they imagined_.
       | 
       | And also, "generating the prototype" is maybe the only value that
       | an MBA student had been told a "technical" person could provide
       | at this point. :)
       | 
       | That interpretation of the LLM question didn't even occur to me
       | when I was responding. I could've easily whipped up the generic
       | Web CRUD any developer could do _and_ the bespoke scrape-y
       | /protocol-y integrations that fewer developers could do, both to
       | a correctness level necessarily higher than the norm (which was
       | required by this particular application domain). In the moment,
       | it didn't occur to me that anyone would think an LLM would help
       | at all, rather than just be an unnecessary big pile of risk for
       | the startup, and potential disaster in the application domain.
        
       | tech_ken wrote:
       | I think it's about evaluating the practical strengths and
       | weaknesses of genAI for coding tasks, and trying to pair your
       | skillset (or areas of potentially quick skill learning) with the
       | weaknesses. Try using the tools and see what you like and
       | dislike. For example I use a code copilot for autocomplete and
       | it's saving my carpals; I'm not a true SWE more a code-y DS, but
       | autocomplete on repetitive SQL or plotting cells is a godsend.
       | It's like when I first learned vi macros, except so much simpler.
       | Not sure what your domain is, but I'd wager there are areas that
       | are similar for you; short recipes or utils that get reapplied in
       | slightly different ways across lots of different areas. I would
       | try and visualize what your job could look like if you just
       | didn't have to manually type them; what types of things do you
       | like doing in your work and how can you expand them to fill the
       | open cycles?
        
       | codebolt wrote:
       | Currently starting my first project integrating with Azure OpenAI
       | using the new MS C# AI framework. I'm guessing that having
       | experience actually building systems that integrate with LLMs
       | could be a good career move over the next decade.
        
       | siliconc0w wrote:
        | I think the real-world improvements will plateau and it'll take
        | a while for current enterprise just to adopt what is possible
       | today but that is still going to cause quite a bit of change. You
       | can imagine us going from AI Chat Bots with RAG on traditional
       | datastores, to AI-enhanced but still human-engineered SaaS
       | Products, to bespoke AI-generated and maintained products, to
       | fully E2E AI Agentic products.
       | 
        | An example: do you tell the app to generate a Python application
        | to manage customer records, or do you tell it "remember this
        | customer record so other humans or agents can ask for it" and it
        | knows how to do that efficiently and securely?
       | 
       | We'll probably see more 'AI Reliability Engineer' type roles
        | which will likely be around building and maintaining evaluation
       | datasets, tracking and stomping out edge cases, figuring out
       | human intervention/escalation, model routing, model distillation,
       | Context-window vs Fine-tuning, and overall intelligence-cost
       | management.
        
       | TZubiri wrote:
        | Who's going to build, maintain and admin the LLM software?
        
       | dayvid wrote:
       | My job is determining what needs to be done, proving it should be
       | done, getting people to approve it and getting it done.
       | 
       | LLMs help more with the last part which is often considered the
       | lowest level. So if you're someone who just wants to code and not
       | have to deal with people or business, you're more at risk.
        
         | ustad wrote:
         | > My job is determining what needs to be done, proving it
         | should be done, getting people to approve it...
         | 
         | LOL - this is where LLMs are being used the most right now!
        
       | tamrix wrote:
       | LLM is just a hypervised search engine. You still need to know
       | what to ask, what you can get away with and what you can't.
        
       | haolez wrote:
       | Maybe creating your own AI agents with your own "touch". Devin,
       | for example, is very dogmatic regarding pull requests and some
       | process bureaucracy. Different tasks and companies might benefit
       | from different agent styles and workflows.
       | 
       | However, true AGI would change everything, since the AGI could
       | create specialized agents by itself :)
        
       | closeparen wrote:
       | My job is not to translate requirements into code, or even
       | particularly to create software, but to run a business process
       | for which my code forms the primary rails. It is possible that
       | advanced software-development and reasoning LLMs will erode some
       | of the advantage that my technical and analytical skills give me
       | for this role. On the other hand even basic unstructured-text-
       | understanding LLMs will dramatically reduce the size of the
       | workforce involved in this business process, so it's not clear
       | that my role would logically revert to a "people manager" either.
       | Maybe there is a new "LLM supervisor" type of role emerging in
       | the future, but I suspect that's just what software engineer
       | means in the future.
        
       | swgo wrote:
       | You are asking wrong people. Of course, people are going to say
       | it is not even close and probably they are right given current
        | chaos of LLMs. It's like asking a mailman delivering mail
        | whether he would be replaced by email. The answer was not 100%,
        | but volume went down by 95%.
       | 
        | Make no mistake. All globalists -- Musks, Altmans, Grahams,
        | A16Zs, Trump-supporting CEOs, Democrats -- have one goal. MAKE
        | MORE PROFIT.
        | 
        | The real question is -- can you make more money than using an
        | LLM?
        | 
        | Therefore, the question is not whether there will be impact.
        | There absolutely will be impact. Will it be a Doomsday scenario?
        | No, unless you are completely out of touch -- which can happen
        | to a large population.
        
       | data_block wrote:
       | I work on a pretty straightforward CRUD app in a niche domain and
       | so far they haven't talked about replacing me with some LLM
       | solution. But LLMs have certainly made it a lot faster to add new
       | features. I'd say working in a niche domain is my job security.
       | Not many scientists want to spend their time trying to figure out
       | how to get an LLM to make a tool that makes their life easier -
       | external competitors exist but can't give the same intense
       | dedication to the details required for smaller startups and their
       | specific requirements.
       | 
       | A side note - maybe my project is just really trivial, maybe I'm
       | dumber or worse at coding than I thought, or maybe a combination
       | of the above, but LLMs have seemed to produce code that is fine
       | for what we're doing especially after a few iteration loops. I'm
       | really curious what exactly all these SWEs are working on that is
        | complex enough that LLMs produce unusable code.
        
       | k__ wrote:
       | No idea.
       | 
       | Most of my code is written by AI, but it seems most of my job is
       | arranging that code.
       | 
        | Saves me 50-80% of my keystrokes, but sprinkles subtle errors
        | here and there and doesn't seem to understand the whole
        | architecture.
        
       | yodsanklai wrote:
       | > The more I speak with fellow engineers, the more I hear that
       | some of them are either using AI to help them code, or feed
       | entire projects to AI and let the AI code
       | 
        | LLMs do help but to a limited extent. Never heard of anyone in
       | the second category.
       | 
       | > how do you future-proof your career in light of, the
       | inevitable, LLM take over?
       | 
       | Generally speaking, coding has never been a future proof career.
       | Ageism, changes in technology, economic cycles, offshoring...
       | When I went into that field in early 2000s, it was kind of
       | expected that most people if they wanted to be somewhat
       | successful had to move eventually to leadership/management
       | position.
       | 
        | Things changed a bit with successful tech companies competing
        | for talent and offering great salaries and career paths for
        | engineers, especially in the US, but it could very well be
        | temporary and shouldn't be taken for granted.
       | 
        | LLMs are one factor among many that can impact our careers,
       | probably not the most important. I think there's a lot of hype
       | and we're not being replaced by machines anytime soon. I don't
       | see a world where an entrepreneur is going to command an LLM to
       | write a service or a novel app for them, or simply maintain an
       | existing complex piece of software.
        
       | starbugs wrote:
       | In a market, scarce services will always be more valuable than
       | abundant services. Assuming that AI will at some point be capable
       | of replacing an SWE, to future-proof your career, you will need
       | to learn how to provide services that AI cannot provide. Those
       | might not be what SWEs currently usually offer.
       | 
       | I believe it's actually not that hard to predict what this might
       | be:
       | 
       | 1. Real human interaction, guidance and understanding: This, by
       | definition, is impossible to replace with a system, unless the
       | "system" itself is a human.
       | 
       | 2. Programming languages will be required in the future as long
       | as humans are expected to interface with machines and work in
       | collaboration with other humans to produce products. In order to
       | not lose control, people will need to understand the full chain
       | of experience required to go from junior SWE to senior SWE - and
        | beyond. Maybe fewer people will be required to produce more
       | products but still, they will be required as long as humanity
       | doesn't decide to give up control over basically any product that
       | involves software (which will very likely be almost all
       | products).
       | 
       | 3. The market will get bigger and bigger to the point where
       | nothing really works without software anymore. Software will most
       | likely be even more important to have a unique selling point than
       | it is now.
       | 
       | 4. Moving to a higher level of understanding of how to adapt and
       | learn is beneficial for any individual and actually might be one
       | of the biggest jumps in personal development. This is worth a lot
       | for your career.
       | 
       | 5. The current state of software development in most companies
       | that I know has reached a point where I find it actually
       | desirable for change to occur. SWE should improve as a whole. It
       | can do better than Agile for sure. Maybe it's time to "grow up"
       | as a profession.
        
       | snikeris wrote:
       | A quote from SICP:
       | 
       | > First, we want to establish the idea that a computer language
       | is not just a way of getting a computer to perform operations,
       | but rather that it is a novel formal medium for expressing ideas
       | about methodology. Thus, programs must be written for people to
       | read, and only incidentally for machines to execute.
       | 
       | From this perspective, the code base isn't just an artifact left
       | over from the struggle of getting the computer to understand the
       | business's problems. Instead, it is an evolving methodological
       | documentation (for humans) of how the business operates.
       | 
       | Thought experiment: suppose that you could endlessly iterate with
       | an LLM using natural language to build a complex system to run
       | your business. However, there is no source code emitted. You just
       | get a black box executable. However, the LLM will endlessly
       | iterate on this black box for you as you desire to improve the
       | system.
       | 
       | Would you run a business with a system like this?
       | 
       | For me, it depends on the business. For example, I wouldn't start
       | Google this way.
        
       | vbezhenar wrote:
       | So far I haven't found much use for LLM code generation. I'm
       | using Copilot as a glorified autocomplete and that's about it. I
       | tried to use LLM to generate more code, but it takes more time to
       | yield what I want than to write it myself, so it's just not
       | useful.
       | 
        | Now, ChatGPT has really become an indispensable tool for me,
        | right up there with Google and StackOverflow.
       | 
       | So I don't feel threatened so far. I can see the potential, and I
       | think that it's very possible for LLM-based agents to replace me
        | eventually, probably not this generation, but a few years later
        | - who knows. But that's just hand-waving, so getting worried
        | about a possible future is not useful for mental well-being.
        
       | prerok wrote:
       | As others have stated, I don't think we have anything to worry
       | about.
       | 
       | As a SWE you are expected to neatly balance code, its
       | architecture and how it addresses the customers' problems. At
       | best, what I've seen LLMs produce is code monkey level
       | programming (like copy pasting from StackOverflow), but then a
       | human is still needed to tweak it properly.
       | 
       | What would be needed is General AI and that's still some 50 years
       | away (and has been for the past 70 years). The LLMs are a nice
       | sleight of hand and are useful but more often wrong than right,
       | as soon as you delve into details.
        
       | thor_molecules wrote:
       | After reading the comments, the themes I'm seeing are:
       | 
       | - AI will provide a big mess for wizards to clean up
       | 
       | - AI will replace juniors and then seniors within a short
       | timeframe
       | 
       | - AI will soon plateau and the bubble will burst
       | 
       | - "Pshaw I'm not paid to code; I'm a problem solver"
       | 
       | - AI is useless in the face of true coding mastery
       | 
        | It is interesting to me that this forum of expert technical
        | people is so divided on this (broad) subject.
        
         | pockmarked19 wrote:
         | As soon as you replace the subject of LLMs with nebulous "AI"
         | you have ventured into a la la land where any claim can be
         | reasonably made. That's why we should try and stick to the
         | topic at hand.
        
         | themanmaran wrote:
         | The biggest surprise to me (generally across HN) is that people
         | expect LLMs to develop on a really slow timeframe.
         | 
         | In the last two years LLM capabilities have gone from "produces
         | a plausible sentence" to "can generate a functioning web app".
         | Sure it's not as masterful as one produced by a team of senior
         | engineers, but a year ago it was impossible.
         | 
         | But everyone seems to evaluate LLMs like they're fixed at
         | today's capabilities. I keep seeing "10-20 year" estimates for
          | when "LLMs are smart enough to write code". It's a very
          | head-in-the-sand attitude given the last 2 years' trajectory.
        
           | layer8 wrote:
           | You can't extrapolate the future trajectory of progress from
           | the past. It comes in pushes and phases. We had long phases
           | of AI stagnation in the past, we might see them again. The
           | past five years or so might turn out to be a phase transition
           | from pre-LLM to post-LLM, rather than the beginning of
           | endless dramatic improvements.
        
           | unclad5968 wrote:
           | Probably because we see stuff like this every decade. Ten
           | years ago no one was ever going to drive again because self-
           | driving cars were imminent. Turns out a lot of problems can
           | be partially solved very quickly, but as anyone with
           | experience knows, solving the last 10% takes at least as much
           | time as solving the first 90.
        
       | bfrog wrote:
       | LLMs are overrated trash feeding themselves garbage and producing
       | garbage in return. AI is in a bubble, when reality comes back the
       | scales will of course rebalance and LLMs will be a tool to
       | improve human productivity but not replace them as some people
       | might think. Then again I could be wrong, most people don't
       | actually know how to create products for other humans and that's
       | the real goal... not simply coding to code. Let me know when LLMs
       | can produce products.
        
       | light_triad wrote:
       | There's a great Joel Spolsky post about developers starting
       | businesses and realising that there's a bunch of "business stuff"
       | that was abstracted away at big companies. [1]
       | 
       | One way to future proof is to look at the larger picture, the
       | same way that coding can't be reduced to algorithm puzzles:
       | 
       | "Software is a conversation, between the software developer and
       | the user. But for that conversation to happen requires a lot of
       | work beyond the software development."
       | 
       | [1] The Development Abstraction Layer
       | https://www.joelonsoftware.com/2006/04/11/the-development-ab...
        
         | dkyc wrote:
          | But conversations are exactly LLMs' strength?
        
           | elcritch wrote:
            | It looks like it, but LLMs still lack critical reasoning by
            | and large. So if a client tells them or asks for something
            | nonsensical, they won't reason their way out of that.
            | 
            | I'm not worried about software as a profession yet, as
            | clients will first need to know what they want, much less
            | what they actually need.
            | 
            | Well, I am a bit worried that many big businesses seem to
            | think they can lay off most of their software devs because
            | of "AI", causing wage suppression and overwork.
           | 
           | It'll come back to bite them IMHO. I've contemplated shorting
           | Intuit stock because they did precisely that, which will
           | almost certainly just end up with crap software, missed
           | deadlines, etc.
        
       | pockmarked19 wrote:
       | I see this sort of take from a lot of people and I always tell
       | them to do the same exercise. A cure for baseless fears.
       | 
       | Pick an LLM. Any LLM.
       | 
       | Ask it what the goat river crossing puzzle is. With luck, it will
       | tell you about the puzzle involving a boatman, a goat, some
       | vegetable, and some predator. If it doesn't, it's disqualified.
       | 
       | Now ask it to do the same puzzle but with two goats and a cabbage
       | (or whatever vegetable it has chosen).
       | 
       | It will start with the goat. Whereupon the other goat eats the
       | cabbage left with it on the shore.
       | 
       | Hopefully this exercise teaches you something important about
       | LLMs.
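        | 
        | If you'd rather script the check than paste it into a chat UI,
        | here is a minimal Python sketch using the OpenAI client (the
        | model name and prompt wording are illustrative; it assumes the
        | openai package and an API key in the environment):
        | 
        |   from openai import OpenAI
        | 
        |   client = OpenAI()  # reads OPENAI_API_KEY from the environment
        | 
        |   # Two turns: first the classic puzzle, then the trivial
        |   # variant that trips models pattern-matching the original.
        |   prompts = [
        |       "What is the goat, wolf and cabbage river crossing "
        |       "puzzle?",
        |       "Now solve it with two goats and one cabbage. The boat "
        |       "still carries the farmer plus one item, and a goat left "
        |       "alone with the cabbage eats it.",
        |   ]
        | 
        |   messages = []
        |   for p in prompts:
        |       messages.append({"role": "user", "content": p})
        |       reply = client.chat.completions.create(
        |           model="gpt-4o",  # illustrative model name
        |           messages=messages,
        |       )
        |       answer = reply.choices[0].message.content
        |       messages.append({"role": "assistant", "content": answer})
        |       print(answer, "\n---")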
        
         | cdfuller wrote:
         | o1 had no issues solving this.
         | 
         | https://chatgpt.com/share/67609bca-dd08-8004-ba27-0f010afc12...
        
         | hviniciusg wrote:
          | Emmmmm... I think your argument is not valid any more:
         | 
         | https://chatgpt.com/c/6760a0a0-fa34-800c-9ef4-78c76c71e03b
        
           | pockmarked19 wrote:
           | Seems like they caught up because I have posted this before
           | including in chatGPT. All that means is you have to change it
           | up slightly.
           | 
           | Unfortunately "change it up slightly" is not good enough for
           | people to do anything with, and anything more specific just
           | trains the LLM eventually so it stops proving the point.
           | 
           | I cannot load this link though.
        
         | FlyingLawnmower wrote:
         | https://chatgpt.com/share/6760a122-0ec4-8008-8b72-3e950f0288...
         | 
         | My first try with o1. Seems right to me...what does this teach
         | us about LLMs :)?
        
           | ktxyznvda wrote:
            | Let's ask for 3 goats then. And how much did developing o1
            | cost, and how much will another version cost? X billion
            | dollars per goat is not really good scaling when any number
            | of goats or cabbages can exist.
        
       | uludag wrote:
       | Firstly, as many commenters have mentioned, I don't see AI taking
       | jobs en masse. They simply aren't accurate enough and they tend
       | to generate more code faster which ends up needing more
       | maintenance.
       | 
       | Advice #1: do work on your own mind. Try to improve your personal
       | organization. Look into methodologies like GTD. Get into habits
       | of building discipline. Get into the habit of storing information
       | and documentation. From my observations many developers simply
       | can't process many threads at once, making their bottleneck their
       | own minds.
       | 
       | Advice #2: lean into "metis"-heavy tasks. There are many
        | programming tasks which can be easily automated: making an app
       | scaffold, translating a simple algorithm, writing tests, etc.
       | This is the tip of the iceberg when it comes to real SWE work
       | though. The intricate connections between databases and services,
       | the steps you have to go through to debug that one feature, the
       | hack you have to make in the code so the code behaves differently
       | in the testing environment, and so on. LLMs require legibility to
       | function: a clean slate, no tech-debt, low entropy, order, etc.
       | Metis is a term talked about in the book "Seeing Like a State"
       | and it encompasses knowledge and skills gained through experience
       | which is hard to transfer. Master these dark corners, hack your
       | way around the code, create personal scripts for random one-off
       | tasks. Learn how to poke and pry the systems you work on to get
       | out the information you want.
        
       | asdefghyk wrote:
        | It will probably end up like self-driving cars. They can do lots
        | of the problem, but are predicted to never be quite there .....
        
       | aristofun wrote:
       | If you fear => you're still in the beginning of your career or
       | your work has very little to do with software engineering. (the
       | engineering part in particular)
       | 
       | The only way to future-proof any creative and complex work - get
       | awesome at it.
       | 
        | It worked before LLMs, and it will work after LLMs or any new
        | shiny three-letter gimmick.
        
         | throwaway_43793 wrote:
         | Maybe I'm a lousy developer, true. But I do know now that your
         | code does not matter. Unlike any other creative profession,
         | what matters is the final output, and code is not the final
         | output.
         | 
          | If companies can get the final output with fewer people and
          | less money, why would they pass on this opportunity? And please
         | don't tell me that it's because people produce maintainable
         | code and LLMs don't.
        
       | ElevenLathe wrote:
       | I spent approximately a decade trying to get the experience and
       | schooling necessary to move out of being a "linux monkey" (read:
       | responding to shared webhosting tickets, mostly opened by people
       | who had broken their Wordpress sites) to being an SRE.
       | 
       | Along the way I was an "incident manager" at a couple different
       | places, meaning I was basically a full-time Incident Commander
       | under the Google SRE model. This work was always fun, but the
       | hours weren't great (these kind of jobs are always "coverage"
       | jobs where you need to line up a replacement when you want to
       | take leave, somebody has to work holidays, etc.). Essentially I'd
       | show up at work and paint the factory by making sure our
       | documentation was up to date, work on some light automation to
       | help us in the heat of the moment, and wait for other teams to
       | break something. Then I'd fire up a bridge and start
       | troubleshooting, bringing in other people as necessary.
       | 
       | This didn't seem like something to retire from, but I can imagine
       | it being something that comes back, and I may have to return to
       | it to keep food on the table. It is exactly the kind of thing
       | that needs a "human touch".
        
       | whateveracct wrote:
       | LLMs have not affected my day-to-day at all. I'm a senior eng
       | getting paid top percentile using a few niche technologies at a
       | high profile company.
        
       | gt0 wrote:
       | I use Copilot a bit, and it can be really, really good.
       | 
        | It helps me out, but in terms of increasing productivity, it
       | pales in comparison to simple auto-complete. In fact it pales in
       | comparison to just having a good, big screen vs. battling away on
       | a 13" laptop.
       | 
       | LLMs are useful and provide not insignificant assistance, but
       | probably less assistance than the tools we've had for a long
        | time. LLMs are not a game changer like some other things have
        | been since I've been programming (since the late 1980s). Just
        | going to
       | Operating Systems with protected memory was a game changer, I
       | could make mistakes and the whole computer didn't crash!
       | 
       | I don't see LLMs as something we have to protect our careers
       | from, I see LLMs as an increasingly useful tool that will become
       | a normal part of programming same as auto-complete, or protected
       | memory, or syntax-highlighting. Useful stuff we'll make use of,
       | but it's to help us, not replace us.
        
       | MrQuimico wrote:
       | Programming is about coding an idea into a set of instructions.
       | LLMs are the same, they just require using a higher level
       | language.
        
       | JKCalhoun wrote:
       | I no longer have skin in the game since I retired a few years
       | back.
       | 
       | But I have had over 30 years in a career that has been nothing if
       | not dynamic the whole time. And so I no doubt would keep on
       | keepin' on (as the saying goes).
       | 
       | Future-proof a SWE career though? I think you're just going to
       | have to sit tight and enjoy (or not) the ride. Honestly, I
       | enjoyed the first half of my career much more than where SWE
       | ended up in the latter half. To that end, I have declined to
       | encourage _anyone_ from going into SWE. I know a daughter of a
       | friend that is going into it -- but she 's going into it because
       | she has a passion for it. (So, 1) no one needed to convince her
       | but 2) passion for coding may be the only valid reason to go into
       | it anyway.)
       | 
       | Imagine the buggy-whip makers gathered around the pub, grousing
       | about how they are going to future-proof their trade as the new-
       | fangled automobiles begin rolling down the street. (They're not.)
        
       | bawolff wrote:
       | AI's are going to put SWE's out of a job at roughly the same time
       | as bitcoin makes visa go bankrupt.
       | 
       | Aka never, or at least far enough in the future that you can't
       | really predict or plan for it.
        
       | jonahbenton wrote:
       | I think in principle LLMs are no different from other lowercase-a
       | abstractions that have substantially boosted productivity while
       | lowering cost, from compilers to languages to libraries to
       | protocols to widespread capabilities like payments and cloud
        | services and edge compute and more and more. There is so much
        | more software that can be written (and rewritten), and so many
        | abstract machines that can be built and rebuilt across domains
        | of hardware and software, all enabled by this new
        | intelligence-as-a-service capability.
       | 
       | I think juniors are a significant audience for LLM code
       | production because they provide tremendous leverage for making
       | new things. For more experienced folk, there are lots of choices
       | that resemble prior waves of adoption of new state of the art
       | tools/techniques. And as it always goes, adoption of those in
       | legacy environments is going to go more slowly, while disruption
       | of legacy products and services that have a cost profile may
       | occur more frequently as new economics for building and then
       | operating something intelligent start to appear.
        
       | sailorganymede wrote:
       | I invest in my soft skills. I've become pretty good at handling
       | my business stakeholders now and while I do still code, I'm also
       | keeping business in the loop and helping them to be involved.
        
       | mattlondon wrote:
       | Start selling the shovels.
       | 
       | I.e., get into the LLM/AI business
        
       | nurettin wrote:
       | I have huge balls and I am not threatened by RNG.
        
       | usixk wrote:
        | LLMs are models and therefore require data, including new data.
        | When it comes to obscure tasks, niche systems and peculiar
        | integrations, LLMs seem to struggle with that nuance.
       | 
       | So should you be worried they will replace you? No. You should
       | worry about not adopting the technology in some form, otherwise
       | your peers will outpace you.
        
       | splwjs wrote:
       | Right now LLMs have a slight advantage over stackoverflow etc in
       | that they'll react to your specific question/circumstances, but
       | they also require you to doublecheck everything they spit out. I
       | don't think that will ever change, and I think most of the hype
       | comes from people whose salaries depend on it being right around
       | the corner or people who are playing a speculation game (if I
       | learn this tool I'll never have to work again/ avoid this tool
       | will doom me to poverty forever).
        
       | jsjdkdbsnsb wrote:
        | I get into management... (managing a bunch of LLMs to do my
        | bidding).
        | 
        |  _insert Mr. Burns "excellent" gif here_
        
       | janalsncm wrote:
       | For high-paid senior software engineers I believe it is
       | delusional to think that the wolves are not coming for your job.
       | 
       | Maybe not today, and depending on your retirement date maybe you
       | won't be affected. But if your answer is "nothing" it is
       | delusional. At a minimum you need to understand the failure modes
       | of statistical models well enough to explain them to short-
       | sighted upper management that sees you as a line in a
       | spreadsheet. (And if your contention is you are seen as more than
       | that, congrats on working for a unicorn.)
       | 
       | And if you're making $250k today, don't think they won't jump at
       | the chance to pay you half that and turn your role into a
       | glorified (or not) prompt engineer. Your job is to find the
       | failure modes and either mitigate them or flag them so the
       | project doesn't make insurmountable assumptions about "AI".
       | 
       | And for the AI boosters:
       | 
       | I see the idea that AI will change nothing as just as delusional
       | as the idea that "AI" will solve all of our problems. No it
       | won't. Many of our problems are people problems that even a
       | perfect oracle couldn't fix. If in 2015 you bought that self-
       | driving cars would be here in 3 years, please see the above.
        
       | cpill wrote:
       | if things go as you predict then the models are going to start to
       | eat their own tail in terms of training data. because of the
       | nature of LLMs training, they can't come up with anything truly
       | original. if you have tried to do something even slightly novel
        | then you'll know what I mean. Web development might be first to
        | be taken out, if front-end devs didn't perpetually reinvent the
        | FE :P
        
       | j45 wrote:
       | I think it might be the opposite. It's not advisable to give
       | advice to young SWEs when you might be one yourself out some.
       | 
       | Junior devs aren't going away. What might improve is often the
       | gap between where a junior dev is hired and the effort and
       | investment to get them to the real start line of adding value,
       | before they hop ship.
       | 
        | AI agents will become their coding partners that can teach and
        | code with the junior dev, leading to more reliable contributions
        | to a code base, and sooner.
       | 
       | By teach and code with, I mean explaining so much of the basic
       | stuff, step by step, tirelessly, in the exact way each junior dev
       | needs, to help them grow and advance.
       | 
       | This will allow SWE's to move up the ladder and work on more
       | valuable work (understanding problems and opportunities, for
       | example) and solve higher level problems or from a higher
       | perspective.
       | 
       | Specifically the focus of Junior Devs on problems, or problems
       | sets could give way to placing them in opportunities to be
       | figured out and solved.
       | 
        | LLMs can write code today; I'm not sure they can manage clean
        | changes to an entire codebase on their own today at scale, or
        | for many. Some folks probably have this figured out quietly for
        | their particular use cases.
        
       | ramon156 wrote:
       | Another thing I want to note is; even if I get replaced by AI, I
       | think I'd be sad for a bit. I think it'd be a fun period to try
       | to find a "hand-focused" job. Something like a bakery or
       | chocolatier. I honestly wouldn't mind if I could do the same
       | satisfying work but more hands-on, rather than behind a desk all
       | day
        
       | mlboss wrote:
       | LLM's for now only have 2-3 senses. The real shift will come when
       | they can collect data using robotics. Right now a human
       | programmer is needed to explain the domain to AI and review the
       | code based on the domain.
       | 
        | On the bright side, every programmer can start a business
        | without needing to hire an army of programmers. I think we are
        | getting back to an artisan-based economy where everyone can be a
        | producer without a corporate job.
        
       | entropyneur wrote:
       | Thinking about a military career. Pretty sure soldier will be the
       | last job to disappear. Mostly not joking.
        
       | VeejayRampay wrote:
       | LLMs are not really there except for juniors though
       | 
       | the quality of the code is as bad as it was two years ago, the
       | mistakes are always there somewhere and take a long time to spot,
       | to the point where it's somewhat of a useless party trick to
        | actually use an LLM for software development
       | 
       | and for more senior stuff the code is not what matters anyway,
       | it's reassuring other stakeholders, budgeting, estimation,
       | documentation, evangelization, etc.
        
       | Volrath89 wrote:
       | (10+ years of experience here) I will be starting training for
        | commercial pilot license next year. The pay is much less than
        | that of a software engineer, but I think this job is already
        | done for most of us; only the top 5% will survive. I don't think
        | I'm part of that top and don't want to go into management or PO
        | roles, so I am done with tech.
        
         | nimbleplum40 wrote:
         | What makes you believe that commercial pilot is safer from AI
         | than software engineering?
        
       | JamesLeonis wrote:
       | I'm 15 years in, so a little behind you, but this is also some
       | observations from the perspective of a student during the Post-
       | Dot-Com bust.
       | 
       | A great parallel of today's LLMs was the Outsourcing mania from
       | 20 years ago. It was worse than AGI because actual living
       | breathing thinking people would write your code. After the Dot-
       | Bomb implosion, a bunch of companies turned to outsourcing as a
       | way to avoid the cost of expensive US programmers. In their
       | minds, a manager could produce a spec and send it to an
       | overseas team to implement. A "prompt", if you will. But as
       | time wore on, the hype wore off with every broken and
       | spaghettified app. Businesses were stung into hiring
       | programmers again, but not before the pipeline of CS graduates
       | had been gutted for years. That fueled a surge in demand for
       | programmers against a small supply, one that didn't abate
       | until the latter half of the 2010s.
       | 
       | Like most things in life, a little outsourcing never hurt anybody
       | but a lot can kill your company.
       | 
       | > My prediction is that junior to mid level software engineering
       | will disappear
       | 
       | Agree with some qualifications. I think LLMs will follow a
       | similar disillusionment as outsourcing, but not before decimating
       | the profession in both headcount and senior experience. The
       | pipeline of Undergrad->Intern/Jr->Mid->Sr development experience
       | will stop, creating even more demand for the existing (and now
       | dwindling) senior talent. If you can rough it for the next few
       | years the employee pool will be smaller and businesses will ask
       | _wHeRe dId aLl tHe pRoGrAmMeRs gO?!_ just like last time. We're
       | going to lose entire classes of CS graduates for years before
       | companies reverse course, and then it will take several more
       | years to steward another generation of CS grads through the
       | curriculum.
       | 
       | AI companies sucking up all the funding out of the room isn't
       | helping with the pipeline either.
       | 
       | In the end it'll be nearly a decade before the industry recovers
       | its ability to create new programmers.
       | 
       | > So, fellow software engineers, how do you future-proof your
       | career in light of, the inevitable, LLM take over?
       | 
       | Funnily enough, probably start a business or that cool project
       | you've had in the back of your mind. Now is the time to keep your
       | skills sharp. LLMs are _good enough_ to help with some of those
       | rote tasks as long as you are diligent.
       | 
       | I think LLMs will fit into future tooling as souped-up Language
       | Servers and be another tool in our belt. I also foresee a whole
       | field of predictive BI tools that lean on LLMs hallucinating
       | plausible futures that can be prompted with (for example) future
       | newspaper headlines. There are also tons of technical and
       | algorithmic domains ruled by heuristics that could possibly be
       | improved by
       | the tech behind LLMs. Imagine a compiler that understands your
       | code and applies more weight on some heuristics and/or
       | optimizations. In short, keeping up with the tools will be useful
       | long after the hype train derails.
       | 
       | People skills are perennially useful. It's often forgotten that
       | programming spans two domains: the problem domain and the
       | computation domain. Two people in each domain can build
       | Mechanical Sympathy that blurs the boundaries between the two.
       | However, the current state of LLMs does not have this expertise,
       | so the LLM user must grasp both the technical and problem domains
       | to properly vet what the LLMs return from a prompt.
       | 
       | Also keep yourself alive, even if that means leaving the
       | profession for something else for the time being. The Software
       | Engineer Crisis is over 50 years old at this point, and LLMs
       | don't appear to be the Silver Bullet.
       | 
       | tl;dr: Businesses saw the early 2000s and said "More please, but
       | with AI!" Stick it out in "The Suck" for the next couple of years
       | until businesses start demanding people again. AI can be cool and
       | useful if you keep your head firmly on your shoulders.
        
         | bdangubic wrote:
         | > Like most things in life, a little outsourcing never hurt
         | anybody but a lot can kill your company.
         | 
         | there are amazing companies which have fully outsourced all of
         | their development. this trend is on the rise and might hit $1T
         | market cap in this decade...
        
       | bangaloredud wrote:
       | Simples: I no longer work as a SWE. Who wants to be the
       | eternal code monkey? You'll need a profession where the inputs
       | are unsanitized and often non-structured. Try any *management
       | role in ITIL. Vacation more, code less... or not at all.
       | 
       | HN code monkeys :)
        
       | jacktheturtle wrote:
       | i'm not worried, because as a solid senior engineer my "training
       | data" largely is not digitized or consumable by a model yet. I
       | don't think we will have enough data in the near future to
       | threaten my _entire_ job, only support me in the easier parts of
       | it.
        
       | lcvw wrote:
       | I've carved out a niche of very low-level systems programming
       | and optimization. I think it'll be a while before LLMs can do
       | what I do. I also moved up to staff, so I think a lot of what
       | I do now will still exist, with junior/mid-level devs being
       | reduced by AI.
       | 
       | But I am focusing on maximizing my total comp so I can retire in
       | 10-15 years if I need to. I think most devs are underestimating
       | where this is eventually going to go.
        
       ___________________________________________________________________
       (page generated 2024-12-16 23:00 UTC)