[HN Gopher] The Software Engineering Identity Crisis
___________________________________________________________________
The Software Engineering Identity Crisis
Author : napolux
Score : 112 points
Date : 2025-03-23 18:37 UTC (1 day ago)
(HTM) web link (annievella.com)
(TXT) w3m dump (annievella.com)
| kylehotchkiss wrote:
| Anecdotally, I perceive there have been fewer open source JS
| libraries/frameworks released lately. I generally keep an
| eye on JS/Node weekly newsletters and nothing has seemed
| interesting to me lately.
|
| Of course that could be my info bubble (Bluesky instead of
| twitter, newsletters, slightly less attracted to shiny than I
| used to be)
|
| Anybody else feel the same?
| fizx wrote:
| AI is killing e.g. React Server Components and Svelte, but for
| different reasons.
|
| Vibe coding doesn't care about the implementation details, so
| Svelte is dead.
|
| A unified codebase might be better for humans. But a FE/BE
| database is better for AI, because you have a clear security
| boundary, separation of concerns, and well-known patterns for
| both individually.
| ldjkfkdsjnv wrote:
| Disagree. Unified frameworks are better for AI, you can
| cohesively build a feature with the single output of the AI
| and it has an easier time integrating the two.
| fizx wrote:
| Hypothetically, yes. Practically, after having built ~10
| small apps in a unified framework with Cursor and Claude
| Code, it doesn't seem true with today's AI.
| georgemcbay wrote:
| My hobby project languages of choice these days are Kotlin
| multiplatform and Go, not JS, but there are multiple things
| I've worked on over the past year that I would have open
| sourced in the past but won't now because I'm not interested in
| freely helping with the big LLM slurp.
|
| I'm not ideologically against LLMs as a technology or the usage
| of them, but I do believe there is an inherent unfairness in
| the way a small set of companies have freely hoovered up all of
| this work meant to enhance the public commons (often using the
| most toxic and anti-social web crawling robots imaginable)
| while handwaving away what I believe are important copyright
| considerations.
|
| I'd rather just not release my own source code anymore than
| help them continue to do that.
| aforwardslash wrote:
| It's funny you assume source code is needed to infer behaviour
| of a given application, or to be a valid training item for an
| AI.
| 18172828286177 wrote:
| I've been meaning to write essentially this article for a while
| now.
|
| I'm currently prepping for some upcoming interviews, which
| involves quite a bit of deep digging into some technical
| subjects. I'm enjoying it, but part of it feels... pointless.
| ChatGPT can answer better than I can about the things I'm
| learning. It is detracting quite a bit from my joy, which would
| not have been the case 5 years ago.
| sepositus wrote:
| More than ever, modern interviews are pointless. I just finished
| an SRE technical interview where I had to efficiently solve a
| problem around maximizing profit in a theoretical market
| setting. I'm guessing it was just a reframed leet code
| question. Yet just moments earlier they were talking about their
| needs with increasing visibility, improving deployment times,
| etc. At some point this has to break, right? If the article
| indicates anything, it's exactly those high level analytical
| skills they should be testing. I almost think allowing AI would
| be even better because it allows a conversation about what it
| got wrong, where it can be improved or is not applicable, etc.
| strict9 wrote:
| What you describe has echoes of allowing API lookups and
| such during an interview. Or whether it's IDE- or REPL-only,
| or something like whiteboarding.
|
| You see the process and the questions they ask and evaluate
| how they distill the responses. The more you let the
| candidate use sources, the closer it is to day-to-day work.
|
| A somewhat similar equivalent to yesterday's copying and
| pasting a Stack Overflow or w3 schools solution is blindly
| copying and pasting a chat response from a quick and vague
| prompt.
|
| But someone who knows how to precisely prompt or use the
| correct set of templates is someone with more critical
| thinking skills who knows when to push back or modify the
| suggested solution.
|
| Knowing the small % of difference can make a big difference
| long term in code readability, reliability, and security.
|
| The other big alternative to all of this is strict debugging.
| A debugging test or chain of thought around quickly
| identifying and fixing the source of problems. This is a
| skill whose demand will probably increase over time.
| sepositus wrote:
| I suspect a large part of the problem is a lack of
| experienced engineers with the capacity to do interviews.
| As someone who often gets stuck with them, I can attest to
| how draining they can be, especially if you're doing them
| correctly. The problem is that it's _these_ engineers we
| need running the interview because they can pretty quickly
| pick out a fraudulent candidate from an exceptional one
| while giving both the option to use AI. I generally only
| need about 30 minutes to confidently assess whether someone
| is worth pushing further in the process.
|
| But while HR would love to make me a full-time interviewer,
| it rarely makes business sense. So we end up with
| unqualified people using what they were told are good
| signals for hiring talent.
| techpineapple wrote:
| I wonder if software engineers should take something like a
| modified LSAT. I do think that testing the ability to do some
| basic coding logic problem does get at the ability to break
| apart and understand the kind of problems one faces when
| turning requirements into business logic. I may be the only
| person in the world who doesn't really see a problem with
| the way we interview folks, when done skillfully.
| nyarlathotep_ wrote:
| Exactly.
|
| I'm far less motivated to learn technical topics now than I was
| even two years ago. I used to crack books/articles open pretty
| frequently largely for personal reasons but much of my
| motivation to do so has been removed by the presence of LLMs.
| braebo wrote:
| This is the part that saddens me the most. Watching that
| spark of curiosity and intrigue dwindle from the hearts and
| minds of nerds like you and me. I noticed it start to happen
| to me back in November when I truly began to understand where
| we were headed.
|
| I think I occupy a sweet spot now with my current skill set -
| AI can't solve the problems I work on - but it can really
| help empower my workflow in small doses. Nevertheless, it's a
| moving target with lots of uncertainty, and the atrophy of my
| skill and passion is palpable even in the past 6 months.
| nyarlathotep_ wrote:
| Yeah you nailed it. Glad to hear I'm not alone.
|
| I used to pay for O'Reilly, I have a pile of
| software/programming/computer-related books, and meh I just
| don't care for it like I did.
|
| I'd page through stuff at night, and the last year
| especially have really de-motivated me.
| twistedcheeslet wrote:
| This is an excellent article.
|
| We're all just swimming as the AI wave comes crashing on every
| developer out there - we can keep swimming, dive or surf. Picking
| a strategy is necessary but it would probably be good to be able
| to do all of the above.
| ldjkfkdsjnv wrote:
| I think we are underestimating the change that is about to occur
| in this field. There is a certain type of mind that is good at
| programming, and the field will no longer reward that type of
| mind as AI takes over. The smaller, gritty details of making
| something work will be smoothed over. Other types of people will
| be able to build, and might even surpass traditional "10x
| engineers", as new skill sets will take precedence.
| d_silin wrote:
| After a wave of AI-written slop floods the software supply
| chain, there will be even greater demand for 10x software
| engineers.
| ldjkfkdsjnv wrote:
| This is cope; fixing AI slop will just be what software
| engineering becomes. It's the new requirement of the job,
| not a failure state.
| hooverd wrote:
| Eh, it's a great tool, but I'm still interested in
| understanding the world rather than proudly being incurious
| of it.
| 9dev wrote:
| How are you going to tell spam from ham if you don't
| understand the underlying systems and their constraints?
| And how are you going to gain that understanding if
| software engineering doesn't value that anymore, and won't
| educate people to gain it?
|
| I dunno, it just doesn't seem, like, all that thought out
| to me.
| zer8k wrote:
| You act like software engineering is a respected field.
|
| Most of our job is fixing slop. Previously this was slop
| produced by low quality cut rate developers in countries
| known for outsourcing. Now it's just fixing AI slop and
| foreign outsourced slop.
| 9dev wrote:
| And you act like software engineering is whatever it is
| you're apparently doing for a living. This field is big,
| and there are more corners in it than just line-of-
| business CRUD apps.
| ldjkfkdsjnv wrote:
| dude businesses already outsource software to the lowest
| wages possible. they just check to see if it works, and
| if it does, they ship it
| pjmlp wrote:
| Offshoring adoption, and the low quality of related
| projects, have proven this is not the case.
| achierius wrote:
| How so? The number of American software jobs is still way
| up from when people said software was "dead" thanks to
| offshoring. By something like 50x!
| JaDogg wrote:
| There is no such thing as a 10x engineer. Anyone who appears
| to be 10x only does whatever maintains that illusion. (don't
| help anyone else, don't do support, keep all knowledge in
| your head, bad documentation, etc.)
| soulofmischief wrote:
| But many would list the things you've just listed as table
| stakes for any 10x engineer.
| TeMPOraL wrote:
| Some good points, but I feel that by the end, the article lost
| track of an important angle. Quoting from the ending:
|
| > _And now we come full circle: AI isn't taking our jobs; it's
| giving us a chance to reclaim those broader aspects of our role
| that we gave away to specialists. To return to a time when
| software engineering meant more than just writing code. When it
| meant understanding the whole problem space, from user needs to
| business impact, from system design to operational excellence._
|
| Well, I for one never cared about _business impact_ in general
| sense, nor did I consider it part of the problem space.
| Obviously, minding the business impact is critical _at work_. But
| if we're talking about _identity_, then it never was a part
| of mine - and I believe the same is true about many software
| engineers in my cohort.
|
| I picked up coding because I wanted to build things (games, at
| first). _Build_ things, not _sell_ things.
|
| This mirrors a common blind spot I regularly see in some articles
| and comments on HN (which perhaps is just because of its
| adjacency to startup culture) - doing stuff and running a company
| that does stuff are entirely different things. I want to be a
| builder - I _don't want to be a founder_. Nor do I want to
| be a manager of builders.
|
| So, for those of us with a slightly narrower sense of identity as
| software engineers, the AI thing is both fascinating and
| disconcerting.
| spo81rty wrote:
| I think it comes down to ownership. Going forward it will be
| more important for engineers to show more product ownership of
| their domain. Product thinking is becoming more important.
|
| That doesn't mean you are a salesperson. It means you are more
| connected to the users and their problems.
| TeMPOraL wrote:
| Except you don't actually own any of it. Ownership belongs to
| your employer. The only thing you own is _responsibility_.
| jimbokun wrote:
| Well, take out the two words "business impact" and the rest
| still applies to you.
| TeMPOraL wrote:
| Sure, but these two words are _important_. They're placing
| the whole thing into a different category.
|
| It's kind of like me saying, "I'm not a soldier - being a
| soldier means exercising a lot, following orders, and
| occasionally killing people", and you replying, "well take
| out the two words 'killing people' and the rest still applies
| to you".
| userbinator wrote:
| Those who think AI can generate code better than they can, are
| quite frankly below-average. It's the equivalent of using Google
| Translate to read and write another language --- and programming
| languages really do need to be learned as languages to make the
| most of them. It "works", but the result can never be above
| average.
|
| _or systems where performance and reliability are paramount_
|
| Since when has that _not_ been the case? Neglecting or even
| actively avoiding performance and reliability is why almost all
| new software is just mediocre at best, and the industry is on a
| decline. AI is only going to accelerate that.
| ldjkfkdsjnv wrote:
| The whole concept of "good code" is built around
| maintainability for humans. If you remove that requirement, it
| opens up a whole new definition of what it means to build good
| software
| datadrivenangel wrote:
| Using LLMs as compilers is unwise. Ultimately software must
| be configured or written by humans, even if there are layers
| and layers of software in between us and 'the code', and that
| act of configuration/writing is programming.
| ldjkfkdsjnv wrote:
| I think a lot of the fastest-moving and most innovative
| companies right now are generating most code with AI. The
| ones who ignore this will be left behind.
| whattheheckheck wrote:
| Which ones stick out the most to you?
| pjmlp wrote:
| That was the same argument folks used against high level
| languages when Assembly was king.
|
| It will come, even if we are a couple of years away.
| userbinator wrote:
| At least compilers are deterministic and their operation
| can be easily inspected.
| pjmlp wrote:
| Said someone who never researched miscompilation issues
| or UB.
|
| They are mostly deterministic.
|
| LLMs can also provide the equivalent of an -S switch.
| Yoric wrote:
| You are correct, of course, but the general consensus is
| that (in languages other than C++) compilers are largely
| close enough to deterministic that the difference doesn't
| matter. LLMs, by design, don't even attempt to go in this
| direction.
|
| Not entirely sure what I'd do with an LLM -S switch.
| Debug the output/intermediate representation of the LLM,
| knowing that the next time I press "enter", it's probably
| going to give me an entirely different output?
| pjmlp wrote:
| It won't be much different from humans in that regard -
| isn't that, after all, what everyone is looking for as a
| replacement?
| bee_rider wrote:
| The first is a bug in the compiler, the second is a bug
| in your code.
| pjmlp wrote:
| Not everyone agrees, see Linus.
| Yoric wrote:
| I don't think LLMs themselves (at least what we currently
| call LLMs, with the Transformer-based architecture) will
| provide that. They're too prone to hallucinations.
|
| However, I do think that they will pave the way towards
| another technology that will work better. Possibly some
| hybrid neurosymbolic approach. There are many researchers
| working on this, and the current hype around LLMs will
| help fund them. Sadly, it will also add considerable
| amounts of noise that might hinder their ability to
| demonstrate the usefulness of their solutions, so no clue
| when that future happens, if it does.
| klooney wrote:
| LLMs are even more limited than humans though- tiny context
| windows, can't really learn new things- which really raises
| the bar on writing APIs that are footgun free.
| sumedh wrote:
| > LLMs are even more limited than humans though- tiny
| context windows,
|
| For now.
| Werewolf255 wrote:
| Given the power and cooling requirements, and the
| underlying techniques used for LLMs right now, I think
| 'for now' is going to be quite a few years. Maybe a
| decade plus. Plenty of time to train a new cohort of
| human developers who do more than just code 24/7.
| danielbln wrote:
| Context windows have grown significantly; the biggest ones
| are at 2 million tokens right now. That's plenty, as even in
| a huge codebase you don't need to feed the full codebase in
| - you just need to provide the LLM a map of e.g. functions
| and where to find them, and the LLM can pull the relevant
| files itself as it plans the implementation path. And for
| that the current context window is plenty.
| sublinear wrote:
| Not true at all. Good code is concise and does what it's
| supposed to do without any weird side effects or artifacts.
| That has nothing to do with "maintainability for humans".
| 9dev wrote:
| Why would you want to avoid side effects, other than making
| it less hard for the next developer working on the code to
| understand how it works? Nature uses side effects all the
| time, and is widely considered to work pretty well.
| jimbokun wrote:
| At that point "building software" goes away as a distinct
| activity. Everything is a conversation with an AI, that
| builds software to answer questions or perform tasks you
| request as needed.
| techpineapple wrote:
| Maybe the whole concept of "clean code" is built around
| maintainability for humans, but I think there's a version of
| organization of "good code" that makes it much easier to
| avoid things like nested-loop foot guns.
| allenu wrote:
| I agree that it's possible AI is unable to generate exceptional
| code (at least not at the moment), but there are definitely
| places where average or below-average may just be good enough.
|
| If the goal is to deliver business value, an argument can be made
| that one could leverage AI for bits of code where high skill
| isn't required, and that that could free up the human developer
| to focus on places where high skill is more important (high
| level system architecture, data model designs, simpler user
| experiences that reduce the amount of work overall).
|
| If a dev just used pure "vibe coding" to generate code and
| didn't provide enough human oversight to verify the high-level
| designs, then you can definitely get into an issue where the
| code gets out of control, but I think there's a middle ground
| where you have a hybrid of high-level human design and
| oversight and low-level AI implementation.
|
| I think the line between how much human involvement there is
| versus pure AI coding may be a sliding one. For something like
| a startup that is unsure if their product is even providing
| enough user value, it might make sense to quickly prototype
| with AI to see if a product is viable, then if it is, rewrite
| parts with more human intervention to scale up.
| bsder wrote:
| Want to convince me of AI coding? Let's see AI go modernize the
| old X11 codebase. Wayland progress is so glacially slow that a
| motivated programmer should be able to run rings around them with
| AI on X11, right? Show me that, and I'll pay attention to "AI".
|
| > Many of us don't just write code - we love writing code.
|
| Excuse me, I _HATE_ writing code.
|
| Code is the thing that stands between what I want the
| computer to do and the computer doing it. If AI reduces
| that burden, I would be the first to jump on that wave.
|
| GUI programming still sucks. GPU programming still sucks.
| Embedded programming still sucks. Concurrent programming still
| sucks. I can go on and on.
|
| I was actually having this discussion with somebody the
| other day, about how 99% of my programming is "shaving yaks"
| and 1% actually focused on the problem I want to solve.
|
| When AI starts shaving the yaks for me, I'll start getting
| excited.
| chickenzzzzu wrote:
| I would like to say this as unambiguously as possible. You
| either have a skill issue, or you are deliberately solving
| problems other than the thing you actually want to solve, which
| you do not mention.
| 9dev wrote:
| I think he's got a point actually. How many times has a
| compiler told you about a missing semicolon "here", and
| you've never thought, "well if you're so clever, why don't
| you just solve it?!"
|
| Code is an awful abstraction to capture chains of thoughts,
| but it's the best we've got. Still, caring about syntax,
| application architecture, concurrency, memory layout, type
| casting, ...--all of that is just busywork, not making the
| robot go beep.
| mdaniel wrote:
| _ed:_ or did you omit a "not" as in "not just busywork"?
|
| > application architecture,
|
| I think you got carried away with your analogy there,
| because I can assure you that if the LLM generates
| kafkaClient.sendMessage everywhere for latency sensitive
| apps, that's not gonna go well, or similar for
| httpClient.post for high throughput cases
| 9dev wrote:
| Neither. What I was trying to say was that all tasks I
| listed are just there to please the binary gods, not to
| solve actual business problems. With _application
| architecture_, I was more referring to the layout of the
| code, as in module boundaries, class inheritance, and so
| on.
| cheevly wrote:
| Let me see you build an airplane with bricks and mortar, only
| then will I be excited by the power of flight.
| Apocryphon wrote:
| You're saying X11 is that bad?
| Werewolf255 wrote:
| "Look, you're being a pessimist. Yes, over half of the
| airplanes we make kill everyone on board.
|
| But you're really NOT focused on the planes that kill only
| 75% of the crew and passengers. Just think! In ten years,
| we'll know how to build a plane!
|
| Please stop telling me that we already have designs for
| planes that work. I don't want to hear that anymore."
| hnthrow90348765 wrote:
| My guess is that job requirements will grow even larger, so it
| will be better for people who like jumping around between front-
| end, back-end, infrastructure, database, product, support,
| testing, and management duties. You'll have to resist any
| uncomfortable feelings of not being good at any one thing, much
| less mastering it. Naturally, they won't ask non-technical staff
| and managers to suddenly become technical and learn to code with
| AI.
|
| In the grander scheme of things, what matters is if their
| products can still sell with AI coders making it. If not, then
| companies will have to pivot back to finding quality - similar to
| offshoring to the cheapest, getting terrible developers (not
| always) and a terrible product, then having to rehire the team
| again.
|
| If the products do sell with AI coders, then you have to reckon
| with a field that doesn't care about quality or craftsmanship and
| decide if you can work like that, day-in-day-out.
| JaDogg wrote:
| Yes this is what I call "the accountant/spreadsheet theory",
| and I think this is the most likely scenario.
| rreichman wrote:
| I think we can expect a bifurcation: managerial jobs that will
| require a lot of breadth and engineering jobs that will require
| a lot of depth. The manager engineers will have AIs doing all
| sorts of things for them across the stack. The deep engineers
| will develop an expertise that the AI can't get to (at least
| not yet).
| lunarboy wrote:
| I agree this is where things seem to be going in the 5-10
| year frame. The spinning wheel didn't obsolete weavers
| completely; it just allowed for more workers and more
| throughput with less skill. I think entry-level junior devs
| will be out of jobs, but unless these AIs can start coming
| up with coherent high-level designs, higher-level architects
| seem to be okay in that time frame at least.
| mistrial9 wrote:
| architectural design was very well paid for a long time, for
| many individuals. In modern USA, there is almost no way a
| person could be an architect for a living -- there is no
| career path. Employers in finance and other core business are
| already bragging that eighty percent of coding will be AI.
| Executives want to fire coders, and lower the wages for
| coders, and have complete control over output of coders. AI
| is being sold for that today.
| dandellion wrote:
| I've been using AI for code for more than two years already;
| the auto-completion is a nice help that I'm willing to pay
| for, but every time I try anything that's harder than the
| basics it completely falls flat.
|
| It doesn't surprise me though; most of the people working on
| this are the same ones who had been promising self-driving
| cars. But that proved to be quite hard, and most of them
| moved on to the next thing, which is this. So maybe a decade
| from now we'll be
| directing AIs instead of writing code. Or maybe that will also be
| difficult, and people will have moved on to the next thing they
| will fail to replace with AI.
| AaronAPU wrote:
| Every time someone says this and I ask, it turns out they
| haven't used o1-pro. Not saying that's the case with you,
| but I have to ask.
|
| In my experience it's literally the only model which can
| actually code beyond auto-complete. Not perfect but a
| completely different tier above the rest.
| 9dev wrote:
| People have been saying that since the start, too, albeit
| about different models. It never felt revolutionary; the
| moment I asked about a particularly gnarly recursive generic
| type problem, or something that requires insights from across
| the code base, it was just rubbish from all models. Good to
| finish the line I wanted to write, bad at creating software.
| FirmwareBurner wrote:
| _> People have been saying that since the start_
|
| The progress made since the start has been wild, and if it
| keeps increasing, even at a much slower pace, it's gonna be
| even better.
|
| That's like people looking at N64 games saying "wow, these
| new 3D graphics sure look like ass, they'll never catch on
| and replace 2D games". Or like people looking at the output
| of early C compilers going "wow, this is so unoptimized,
| I'll stick to coding in assembly for my career since nobody
| will ever use compilers for serious work".
|
| It boggles my mind how ignorant people can be about
| progress and disruption based on how past history played
| out. Oh well, at least more power to those who embrace the
| new tech early on.
| Jensson wrote:
| > Oh well, at least more power to those who embrace the
| new tech early on.
|
| Why would embracing it early before it is useful give you
| more power?
| jaimebuelta wrote:
| Well, there's still a lot of 2D games. And, for many
| games, it's probably the right choice.
| 9dev wrote:
| 0.001 is infinitely more than 0, and yet it's a far way
| off from 1. Call me when we've reached 0.2 at least.
|
| Back with N64 or C compilers, we didn't talk about energy
| requirements on the level of small countries and billions
| of dollars to even compete with the status quo. You're
| like one of those guys in the fifties, thinking it'll
| only be a few more years until we're all commuting in
| flying cars powered by nuclear reactors.
| bee_rider wrote:
| > "wow, this is so unoptimized, I'll stick to coding in
| assembly for my career since _nobody will ever use
| compilers for serious work_ "
|
| I'm sure somebody said that, but I don't think it was a
| common sentiment.
|
| Rather, people said that there were still some cases that
| required some assembly programming. And it is true. It's
| just a niche. One that gets smaller over time. If you are a
| generalist it makes sense to retrain. If you are a
| specialist, there's some level of specialization where you
| can get away with continuing to program assembly.
| techpineapple wrote:
| Do you really feel this way? I feel like we're probably
| batting better than 50%, but plenty of revolutionary
| technologies never manifest, and more specifically,
| there's such a wide swath of predictions. Sure, I think
| LLMs will continue to get more useful, and they're helpful
| for programming, but the range of predictions runs the gamut
| from "software engineers may become a bit higher level, like
| software managers/architects" to "LLMs will bring on the end
| of scarcity and all white-collar jobs within the next 10
| years."
|
| Like, how is some hesitance towards the possibilities of
| this specific implementation of AI "ignorant of the
| progress and disruption of history". A casual glance at
| the headlines around blockchain ~ 6 years ago should
| invite some skepticism.
|
| I think in some ways this is like looking at the
| difference between Camera only self-driving vs all the
| tools. I think it's quite possible LLM's look more like
| camera-only self-driving which will continue to get
| marginally better but may never solve the problem, and
| we're still waiting on the insight/architecture that will
| bring us full AGI.
| AaronAPU wrote:
| I'll put you down as a "No, I haven't used o1-pro"
| 9dev wrote:
| Feel free! I've played this game a few rounds, but it
| didn't get any less disappointing. I'm absolutely fine
| with missing out this time, but if you haven't had your
| fill yet, don't mind me.
|
| I'll be back in line for the iPhone 3G, thank you.
| J_Shelby_J wrote:
| O1-pro is the first time I'm actually hesitant to recommend
| an AI tool. Out of selfishness perhaps. It's the true kick
| off of the AI career Cold War.
| aerokr wrote:
| Self driving cars exist and are safer than human drivers, Waymo
| being the obvious answer - https://waymo.com/research/do-
| autonomous-vehicles-outperform.... Replacement is a matter of
| large scale deployment and coordination with legal frameworks,
| which isn't the same problem as self driving cars.
| carlmr wrote:
| Has Waymo figured out how to scale this yet? Last time I
| checked they needed highly precise and up to date maps that
| are not just available for every location, and thus they're
| limited to small test regions.
| mdaniel wrote:
| > highly precise and up to date maps
|
| I would guess that's one of those "pick any two" things,
| given how many construction projects, repair, "life
| happens" stuff goes on in a modern city
|
| That said, unless I'm totally missing something they have a
| self-solving problem with that since the Waymos all carry
| around cameras, lidar, and presumably radar so I would
| expect that they update the maps as they go. Come to think
| of it, that's very likely why I originally saw them roaming
| around the city with drivers in them: testing the
| pathfinding _and_ mapping at the same time
| margalabargala wrote:
| Even in the best case scenario for LLMs, they aren't mind
| readers. They'll be time savers. They're more like compilers
| for language to code, like how actual compilers transform code
| to assembly.
|
| The job has changed before, it will change again. It may get
| easier to enter the industry, but the industry will still exist
| and will still need subject matter experts a decade from now.
| dgellow wrote:
| I don't feel LLMs are the right tool for that. It's nice to
| have something that behaves as if it understood my requests
| and business model, but we need reproducibility, otherwise
| it's too unpredictable.
|
| I also don't like English as a language to express
| requirements, it's not strict enough and depends too much on
| an implicit context. Whatever high-level abstraction we end
| up with, it cannot be something that results in the wrong
| implementation because the agent incorrectly read the tone of
| the exchange.
| kristianc wrote:
| > Whatever high-level abstraction we end up with it cannot
| be something that results in the wrong implementation
| because the agent incorrectly read the tone of the
| exchange.
|
| That can easily be said of most Product Manager > SWE
| relationships too though
| lunarboy wrote:
| That's why code was invented in the first place, no? But now
| LLMs are in the ballpark where laypeople can describe
| something vaguely and get a working MVP. Whether you can
| ship, scale, and debug that code is a completely different
| question.
| hooverd wrote:
| Lay people can pour concrete and nail boards together;
| that doesn't mean they'll lay a good foundation and erect
| a square frame.
| soco wrote:
| Nobody (or nobody in their right mind) argues that AI
| should be left by itself to build production applications.
| How about using it as a willing helping hand? I do that
| and it's really helping me, provided I actually know what
| it's trying to do and correct course when needed - both by
| re-prompting and by writing the missing pieces by hand.
| That gives me extra speed, not magical powers.
| bluefirebrand wrote:
| > Nobody (or nobody in their right mind) argues that AI
| should be left by itself to build production applications
|
| There are people on hacker news daily arguing this
| AbstractH24 wrote:
| > Even in the best case scenario for LLMs, they aren't mind
| readers. They'll be time savers.
|
| In other words, they are cheaper mid-to-entry-level employees
| who don't get sick or have emotions. I think most people
| would agree with this.
|
| One of the reasons well-informed, curious people tend to
| underestimate the value of LLMs despite their flaws is that
| they underestimate the amount of routine work that could be
| automated but is still done imperfectly by humans. LLMs lower
| the barrier to halfway-decent automation, eliminating those
| jobs.
|
| Where I'm not sold yet is on the whole idea of bots that go
| off and do their own thing totally unsupervised (but
| increasingly, you are having models supervise one another).
| torginus wrote:
| This. All these reasoning models that push out complete modules
| of code tend not to write the code I would have - and have
| difficulty writing code that matches up with the rest of the
| codebase. And even if they do, the burden of understanding what
| the AI does falls onto my shoulders, and as everyone knows,
| understanding someone else's code is 10x harder than writing
| your own.
| greenie_beans wrote:
| meanwhile i'm building entire features with it, and they work
| without bugs and i understand all the code.
| sceptic123 wrote:
| In what language? On what size of code base?
| greenie_beans wrote:
| python, django, react, javascript, vanilla js, html, css,
| tailwind, htmx, bash, etc, etc, etc
| NBJack wrote:
| > Or maybe that will also be difficult, and people will have
| moved on to the next thing they will fail to replace with AI.
|
| Probably quantum computing. That seems to be the next hyped up
| product.
| AbstractH24 wrote:
| Hasn't it been discussed for ages?
| ddoolin wrote:
| Honestly, I don't know why nobody says it, but I just _don't
| want to._ I don't want to use it too much. It's not that I'm
| paranoid about it outputting bad code; I just like doing
| almost all of it myself. It helps that I will do better than it
| 100% of the time, but that isn't really why I don't use it. If
| it's going to replace all of us, fine. I guess you can chalk me
| up as not being into this hyper-productivity mindset. I just
| want to write code, period. I use it in the manner most
| comments describe - as fancy code completion - but I haven't
| found myself wishing "if only this could do _all_ of it for
| me!"
| CharlieDigital wrote:
| I think a good analogy is 3D printing. Certainly, it's faster
| from idea to prototype, but the actual act of carving wood or
| molding clay is itself a creative process that, for some, will
| never be replaced just because we have cheap commodity 3D
| printers.
| elliottkember wrote:
| 3D printing is an interesting analogy. When I got my
| printer I really, truly thought I'd be printing all sorts of
| things to use around the house, gadgets and stuff.
|
| It turns out that to be good at 3D printing, you need to be
| really good at CAD and at measuring things with Vernier
| calipers. That's like prompt engineering.
|
| Then there was the nozzle temperature, print errors, and
| other strange results -- call those hallucinations.
|
| Once I had designed something that I needed many instances
| of, it was great. But for one-offs, it was a lot of work.
| So it goes with AI.
| danielbln wrote:
| I'll provide a counter-perspective: I love building things,
| but the minutiae and implementation details of code are merely
| a means to an end for me, something that stands between me
| and a feature/experiment. Agentic coding, especially when
| doing greenfield work or prototyping, takes the grunt work
| away and lets me build at a higher abstraction level. And I
| love it, though I can see that it's not for everyone.
| ddoolin wrote:
| Totally get it. And there has been work for me where I
| would've offloaded the details like that _for sure_. It might
| just be that when a project feels like my baby, I really enjoy
| that part of the process.
| m463 wrote:
| > anything that's harder than the basics it completely falls
| flat
|
| I kind of wonder if there's a way to make ai-accessible
| software.
|
| For example, let's say someone wrote a really descriptive
| tutorial on Blender, covering not only simple features but
| advanced ones, and added some college texts on adjacent
| problems to help prevent "falling flat" at more difficult
| tasks.
|
| Could something like that work? I figure LLMs are just
| reading simple tutorials now; what about feeding them
| advanced stuff?
| roxolotl wrote:
| The secret is that looking to work as a way to fulfill that
| desire to build and create is not a good idea. The existence of
| industrial farming takes no joy from my backyard garden. My usage
| of Gen AI doesn't diminish the wonder I feel building projects at
| home.
|
| Looking to corporate work as an outlet for your creative
| desires never really worked out. Sure, there was a brief
| golden age where, if you worked at a big tech company, you
| could find it, but the vast majority of engineers do
| utilitarian work. As a software engineer, your job has always
| been to drive business value.
| m3t4man wrote:
| It's also not hard to understand why people seek that kind of
| fulfillment at work. It is something we dedicate most of our
| day to for most of the week
| v3xro wrote:
| It doesn't have to be that way, no? As soon as we start
| realigning economic systems to value labor more than capital
| again, I think we will find meaningful pursuits in all
| business areas.
|
| Edit: and yes, I am all too aware of the "market can remain
| irrational longer than you can remain solvent" adage as applied
| to this situation.
| roxolotl wrote:
| Oh yea absolutely. But like you're saying in your edit it
| might take a long time to get there. I think also in order to
| get there we have to acknowledge where we are.
| cadamsdotcom wrote:
| It's great that the typing part is being reduced - as is looking
| up APIs and debugging stupid issues that wreck your estimates by
| wasting your work-day!
|
| You are still in charge, and you still need to _read_ the code,
| understand it, make sure it's factored properly, and make sure
| there's nothing extraneous...
|
| But ultimately it's when you demo the thing you built (with or
| without AI help) and when a real human gets it in their hands,
| that the real reward begins.
|
| In the future that's coming, non-engineers will be more and more
| able to make their own software and iterate it based on their own
| domain expertise, no formally educated software engineers in
| sight. This will be outrage fuel for old-mindset software
| engineers to take to their blogs and shake their metaphorical
| walking-sticks at the young upstarts. Meanwhile those who move
| with the times will find joy in helping non-engineer domain
| experts to get started, and find joy once again in helping their
| non-engineer compatriots solve the tricky stuff.
|
| Mark my words: move with the times, people. It's happening with
| or without you.
| spo81rty wrote:
| I'm actually writing a book on Product-Driven Engineering about
| this very problem. Many engineers have to become product owners
| in this new era. The bottleneck is moving from coding speed to
| product management speed. Everyone needs to realize they work
| on the product team.
|
| Subscribe to my newsletter to get the book announcement.
|
| https://newsletter.productdriven.com/
| MiiMe19 wrote:
| Over my dead body will I ever code with AI.
| meitham wrote:
| That's the spirit!
| kelseydh wrote:
| - The dinosaur engineer exclaims as the AI asteroid strikes.
| FirmwareBurner wrote:
| Thank you for your service!
| aforwardslash wrote:
| Right now, being able to efficiently extract value from AI-based
| code generation tools requires supervision to some extent -
| e.g. a competent developer who is able to validate the output.
| As the industry moves toward these systems, so do the hiring
| requirements - there is little to no incentive to hire more
| junior devs (as they lack the experience of building software),
| effectively killing the entry-level jobs that would someday
| generate those competent developers.
|
| The thing is, part of the reason AI requires supervision is
| that it's producing human-maintainable output in languages
| oriented toward human authorship. It is my belief we're at a
| moment akin to the emergence of the first programming languages
| - a mimic that allows humans to abstract away machine details
| and translate high-level concepts into machine language. It is
| also my belief that the next step is specialized AI languages
| that will remove most of the human element, as an optimization.
| Sure, there will always be a need for meatbags, but big
| companies will hire tens, not thousands.
| cma wrote:
| Alternatively, junior devs with ai web search and ai
| explanations are able to learn much faster and not bother
| senior devs with compiler error puzzles.
| hooverd wrote:
| > are able to
|
| but they won't. They'll just give up if the auto-complete
| doesn't fix it for them.
| n_ary wrote:
| But if those aspiring juniors stop asking those questions and
| the seniors stop answering them on the open web, neither LLMs
| nor juniors get new training data, and both stagnate. How do
| we solve this?
|
| In my early days, I learned a lot reading code from much more
| senior engineers and began to appreciate it. An effective
| seasoned senior writes beautiful poetry that conveys deeper
| meaning and serves the entertainment (erm... business
| requirement) purpose. If the seniors retire and the juniors
| are no longer hired, then where do LLMs get new data from?
|
| In any case, I suspect we'll see more juniors getting hired in
| the coming years, with a few seniors present to guide them -
| much as we previously had a few DB specialists and architects
| producing the outline while the rest turned it into actual
| products.
| 1shooner wrote:
| I heard an OpenAI engineer give an eye-opening perspective that
| went something like: anything above machine code is an
| abstraction for the benefit of the humans that need to maintain
| it. If you don't need humans to understand it, you don't need
| your functionality in higher-level languages at all. The AI
| will just brute-force those abstractions.
| soulofmischief wrote:
| I don't buy into it. The benefits of abstraction hold for
| machines, as they can spend less bandwidth when modeling,
| using and modifying a system, and error is minimized during
| long, repetitive operations.
|
| Abstraction can be thought of as a system of interfaces. The
| right abstractions can totally transform how a human or
| machine interpret and solve a problem. The most effective and
| elegant machines will still make use of abstraction above
| machine code.
| techpineapple wrote:
| This is an interesting observation, and it seems true for
| future versions of AI, but given that the current technology
| is based on human language, I wouldn't assume that LLMs
| translate directly to manipulating machine language.
| skydhash wrote:
| Abstractions are patterns that lead to reusable solutions: you
| don't need to write the same code again and again when you can
| manipulate a simple symbol or construct instead. That leads to
| easier understanding, yes, but also to reusability.
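| (A trivial, purely hypothetical sketch of that point - the
| distance formula repeated inline versus named once as a
| reusable construct:)

```python
from math import hypot

# Without the abstraction: the same formula written out each time.
d1 = ((3 - 0) ** 2 + (4 - 0) ** 2) ** 0.5
d2 = ((9 - 6) ** 2 + (12 - 8) ** 2) ** 0.5

# With it: one named construct captures the pattern for reuse,
# and readers manipulate the symbol instead of the formula.
def distance(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

print(distance((0, 0), (3, 4)))  # 5.0
```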
| meander_water wrote:
| This reduces the field of software engineering to simple code
| generation when it is much more than that.
|
| Things like system design thinking and architectural design are
| not solely tasks performed by managers or specialised roles.
|
| Software developers need to wear multiple hats to deliver a
| solution. Sure, building out new features or products from
| scratch often gets the most glory and attention. But IMO,
| humans still have the edge when it comes to debugging,
| refactoring, and optimisation. In my experience, we beat AI at
| these problems because we can hold the entire problem
| space/context in our brains, and reason about it. In contrast,
| AI is simply pattern matching, and sure, it can do a great
| job, but only stochastically so.
| leoedin wrote:
| The foundation of maintainable software is architecture. I
| can't be alone in having often spent days puzzling over a
| seemingly highly complex problem before finally finding a set
| of abstractions that makes it simple and highly testable.
|
| LLMs are effectively optimisation algorithms. They can find the
| local minima, but asking them to radically change the structure
| of something to find a much simpler solution is not yet
| possible.
|
| I'm actually pretty excited about LLMs getting better at
| coding, because in most jobs I've been in the limiting factor
| has always been rate of development rather than rate of idea
| production. If LLMs can take a software architecture diagram
| and fill in all the boxes, that would mean we could test our
| assumptions much quicker.
| Centigonal wrote:
| Yes, this is how I feel as well. I'm not going to use an LLM
| to create my architecture for me (though I may use it for
| advice), because I think of that as the core creative thing
| that I bring into the project, and the thing I need to fully
| understand in order to steer it in the right direction.
|
| The AI is great at doing all the implementation grunt work
| ("how do I format that timestamp again?" "What's a faster
| vectorized way to do this weird polars transformation?" "Can
| you write tests to catch regressions for these 5 edge cases
| which I will then verify?").
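| (As a purely hypothetical illustration of that grunt work -
| not code from this thread - the "how do I format that
| timestamp again?" class of question is a one-shot answer for
| an assistant, since the strftime directives are exactly the
| detail nobody keeps memorized:)

```python
from datetime import datetime, timezone

def iso_to_human(ts: str) -> str:
    # Accept a "Z"-suffixed ISO-8601 string; fromisoformat only
    # understands explicit offsets, hence the replace().
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    # %b = abbreviated month, %d = day, %Y = year, %H:%M = time.
    return dt.astimezone(timezone.utc).strftime("%b %d, %Y %H:%M UTC")

print(iso_to_human("2025-03-23T18:37:00Z"))  # Mar 23, 2025 18:37 UTC
```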
| skydhash wrote:
| Almost every time I read about someone finding LLMs useful
| for a programming task, the description of how the LLMs are
| used sounds like the person is missing domain knowledge,
| doesn't use a capable editor, or isn't in the habit of
| reading docs.
|
| When I find myself missing domain knowledge, my first
| action is to seek it. Not to try random things that may
| have hidden edge cases that I can't foresee. The semantics
| of every line and every symbol should be clear to me. And I
| should be able to go in details about its significance and
| usage.
|
| Editing code shouldn't be a bottleneck. In The Pragmatic
| Programmer, one piece of advice is to achieve editor fluency,
| and even Bram Moolenaar has written about this[0]. Code is very
| repetitive, and your editor should assist you in reducing
| the amount of boilerplate you write and navigating around
| the codebase. Why? Because that will help you prune the
| code and get it in better shape as code is a liability.
| Generating code is a step in the wrong direction.
|
| There can be bad docs, or the information you're seeking may
| not be easily retrievable. But most docs are actually quite
| decent, and in the worst case, you have the source code (or
| should). There are different kinds of docs, though, and when
| someone complains about them, it's usually because they need
| a tutorial or a guide to learn the concepts and usage, while
| most systems assume you have the prerequisites and provide
| only the reference.
|
| [0]: https://www.moolenaar.net/habits.html
| slt2021 wrote:
| a lot of the value of software engineers is in talking to
| users (end users, clients, etc.)
| JaDogg wrote:
| What I do is turn off Copilot, do the design, get in the zone,
| and then enable Copilot. This way I don't let it slow me down;
| otherwise I just wait for it to do API calls. Even then, I
| turn it off when it creates completely wrong implementations.
|
| The problem is it doesn't know the layer of utilities we have
| and just rewrites everything from scratch, which is too much
| duplication. I then have to delete it and type the correct
| code again.
|
| One advantage I have seen is that it can definitely translate
| / simplify what my colleagues say, fixing typos or partial
| words, which is very useful when you are working with a lot
| of different people.
| greenie_beans wrote:
| ai took away the fun of coding but it's impossible not to use it
| now that i've opened pandora's box. fortunately i don't have the
| identity problem. you shouldn't base your identity on your job.
| my problem is more like, "this isn't as much fun to do anymore"
| kittikitti wrote:
| Most big tech engineers I know hate coding and see abandoning
| coding as a progression in their career. Personally, I don't care
| what the industry thinks, I like coding and do it anyway with or
| without a huge corporation providing me with every last resource
| required. If I'm unemployed because coding as a skill is no
| longer needed, I will still do it just not part of my day job.
| Just as when painters worried about color printers, the value
| of the Mona Lisa was called into question. I'm no artist, but
| I'm not buying the overhyped scenarios. I love language models
| and their abilities, but there are too many HR reps drooling
| at the mouth thinking they can replace coders with AI.
| zer8k wrote:
| AI is the only reason I've been able to keep up with the
| constant death marches. Since the job market went to shit,
| employers have been piling on as much work as they can,
| knowing they have indentured servants.
|
| The code quality isn't great but it's a lot easier to have it
| write tests, and other code, and then go back and audit and
| clean.
|
| Feels absolutely awful but whatever.
| porridgeraisin wrote:
| It's a useful assistant. Never again do I need to argparse each
| flag onto a class Config: fully manually. I also found it
| useful in catching subtle bugs in my (basic, learning-purpose)
| CUDA kernels. It is also nice to be able to do `def utc_to_ist(x:
| str) -> str`<TAB>.
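| (A hypothetical sketch of the boilerplate being described -
| the specific flags and the `utc_to_ist` body are invented here
| for illustration, not taken from the commenter's code:)

```python
import argparse
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Config:
    host: str
    port: int
    verbose: bool

def parse_config(argv=None) -> Config:
    # The repetitive flag-onto-Config mapping an assistant can
    # autocomplete: every field appears three times by hand.
    p = argparse.ArgumentParser()
    p.add_argument("--host", default="localhost")
    p.add_argument("--port", type=int, default=8080)
    p.add_argument("--verbose", action="store_true")
    a = p.parse_args(argv)
    return Config(host=a.host, port=a.port, verbose=a.verbose)

def utc_to_ist(x: str) -> str:
    # Assumes a naive ISO-8601 timestamp in UTC; shifts it to
    # IST (UTC+5:30) and returns it in ISO-8601 form.
    dt = datetime.fromisoformat(x).replace(tzinfo=timezone.utc)
    ist = timezone(timedelta(hours=5, minutes=30))
    return dt.astimezone(ist).isoformat()
```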
|
| As for whole apps, I never agree with its code style... ever. It
| also misses some human context. For example, sometimes, we avoid
| changing code in certain ways in certain modules so as to get it
| more easily reviewed by the CODEOWNER of that part of the
| codebase. I find it easier to just write the code "the way they
| would prefer" myself rather than explain it in a prompt to the
| LLM losslessly.
|
| The best part of it is getting started. It quickly gives me a
| boilerplate to start something. Useful for procrastinators like
| me.
| ookblah wrote:
| Articles like this honestly confuse me. I really do not
| understand this sentiment that some coders have where it feels
| like every line they write is some finely chiseled piece of
| wood on a sculpture they made.
|
| Since day one I've always liked to build things much like the
| author, my first line of HTML to CSS, frameworks, backend,
| frontend, devops, what have you. All of it a learning experience
| to see something created out of nothing. The issue has always
| been my fingers don't move fast enough; I'm just one person. My
| mind can't sustain extended output that long.
|
| My experience with AI has been incredibly transformative. I
| can prototype new ideas and directions in literally minutes
| instead of writing or setting up boilerplate over and over. I
| can feed it garbage and have it give me ballpark insights. I
| can use it as a sounding board to draw some direction or to
| see a different angle I'm not seeing.
|
| Or maybe it's just the way that some people use AI coding?
| Like it's some magic box, and if you use it you won't
| understand anything, or it's going to produce some gibberish?
| Like a form of bikeshedding where people hold the "way" they
| code as some kind of sacrosanct belief. I still review nearly
| every line of code, and if something is confusing I either
| figure it out and rewrite it, or just comment on what it's
| doing if it's not critical.
| n_ary wrote:
| The article appears to be a rant and panic piece...
|
| At this point, I am totally confused. When I attend expensive
| courses from Google or Amazon, the idea in the courses is that
| tech has become sooo complex (I agree - look at the number of
| ways you can achieve something using AWS's infinite number of
| services) that we need code assistants which can quickly
| remind us of that one syntax, fill out the 10000th repetition
| of the same boilerplate, or quickly suggest a new library
| function that would otherwise take several Google searches,
| wading through 5h of SEO spam and bad documentation, or
| another 50 StackOverflow questions with the same issue closed
| as not focused enough/duplicate/opinionated.
|
| It is like they want to sell you this new shiny tool. If
| anyone here remembers the early days of JetBrains IDEs, the
| fans would whir and the IDE would freeze in the middle of an
| IntelliSense suggestion; now they are buttery smooth, and I
| actually feel sad when I can't access them.
|
| Now, on the outside - in news, media, blogs, and what not -
| the marketing piece is being boosted 1000x with all the panic
| and horror, because certain greedy people found that the only
| way to dissuade brilliant people from entering the field and
| bootstrapping the next disrupters is to signal that they
| themselves will be obsolete.
|
| Come to think of it, it is cheap now. The first idea was to
| hire them when investment was cheap and disruption risk was
| high. Then came the end of ZIRP, when it was safe to stop
| hoarding them - no investment means less risk from disrupters,
| and if some dared, acquire and kill them in the crib. Then
| came the bad economy, so now it is easier to lay them off and
| smear their reputation so they can't get the time of day from
| deep pockets. The final effort is to threaten the field with
| a fake marketing and media campaign about them being replaced.
|
| This panic drama needs to stop. First we had sysadmins
| maintaining on-prem hardware and infra, but
| AWS/GCP/Azure/Oracle came along to replace them, only to move
| them up the chain - and now we need dedicated IAM specialists,
| certified AWS architects, certified Azure cloud consultants,
| and what not.
|
| Sorry for the incoherent rant, but these panicked "f*k you,
| entitled avocado-toast-eating school dropout losers, now
| you'll be so screwed" envy posts on social media are so insane
| and get so weird that I am just baffled by the whole thing.
|
| I don't know what to believe: big tech telling me in their
| pretty courses and talks how my productivity and code quality
| will now improve, or the media and influencers telling me we
| are going to be obsolete (and avocado-toast-eating dropouts,
| which I am not). Only time will tell.
|
| In the meantime, the more demos I see of impressive LLMs
| building entire sites for everyone and their pet hamster, the
| more frontend engineering jobs pop up daily in my inbox (I
| keep job alerts to watch market trends and dabble in topics
| that might interest me).
___________________________________________________________________
(page generated 2025-03-24 23:02 UTC)