[HN Gopher] AI Coding assistants provide little value because a ...
___________________________________________________________________
AI Coding assistants provide little value because a programmer's
job is to think
Author : d0liver
Score : 71 points
Date : 2025-04-27 20:51 UTC (2 hours ago)
(HTM) web link (www.doliver.org)
(TXT) w3m dump (www.doliver.org)
| fire_lake wrote:
| Most developers use languages that lack expressivity. LLMs allow
| them to generate the text faster, bringing it closer to the speed
| of thought.
| bamboozled wrote:
| What if they help you to think?
|
| I know LLMs are masters of averages and I use that to my
| advantage.
| 65 wrote:
| I wish people would realize you can replace pretty much any LLM
| with GitHub code search. It's a far better way to get example
| code than anything I've used.
| monkaiju wrote:
| Couldn't agree more. And in regards to some of the comments
| here, generating the text isn't the hard OR time-consuming part
| of development, and that's even assuming the generated code was
| immediately trustworthy. Given that it isn't, and must be
| checked, it's really just not very valuable.
| lobochrome wrote:
| Object-oriented languages provide little value because a
| programmer's job is to think
|
| Memory-safe languages provide little value because a programmer's
| job is to think
|
| ...
| DidYaWipe wrote:
| Not comparable at all.
| ahartmetz wrote:
| Now this isn't a killer argument, but your examples are about
| readability and safety, respectively - the quality of the
| result. LLMs seem to be more about shoveling the same or worse
| crap faster.
| dymk wrote:
| Have you tried an AI coding assistant, or is that just the
| impression you get?
| skydhash wrote:
| Using deterministic methods as counterarguments to a
| probabilistic one. Something apples, something oranges....
| protocolture wrote:
| So there's no value in dealing with the repeatable stuff to free
| the programmer up to solve new problems? Seems like a stretch.
| 9rx wrote:
| There is no new value that we didn't already recognize. We've
| known for many decades that programming languages can help
| programmers.
| adocomplete wrote:
| I disagree. It's all about how you're using them. AI coding
| assistants make it easy to translate thought to code. So much
| boilerplate can be given to the assistant to write out while you
| focus on system design, architecture, etc, and then just guide
| the AI system to generate the code for you.
| wakefulsales wrote:
| this is just stupid, anyone who's used ai to code knows this is
| wrong empirically
| recursive wrote:
| I've used it and haven't had much success.
| dukeofdoom wrote:
| It's more like an assistant that can help you write a class to do
| something you could write on your own but are feeling too lazy
| to. Sometimes it's good, other times it's idiotically bad. You
| need to keep it in check and keep telling it what it needs to do,
| because it has a tendency to dig holes it can't get out of.
| Breaking things up into smaller classes helps to a degree.
| incoming1211 wrote:
| I'm sorry to say, but the author of this post doesn't appear to
| have much, if any, experience with AI, and sounds like he's just
| trying to justify not using it and pretend he's better without it.
| cedws wrote:
| It's okay to be a sceptic, I am too, but the logic and
| reasoning in the post is just really flimsy and makes our
| debate look weak.
| permo-w wrote:
| seriously. if you want to say that AI will likely reduce
| wages and supply for certain more boilerplate jobs; or that
| they are comparatively much worse for the environment than
| normal coding; or that they are not particularly good once
| you get into something quite esoteric or complex; or that
| they've led certain companies to think that developing AGI is
| a good idea; or that they're mostly centralised into the
| hands of a few unpleasant actors; then any of those
| criticisms, and certainly others, are valid to me, but to say
| that they're not actually useful or provide little value?
| it's just nonsense or ragebait
| tensor wrote:
| "Spellcheck provides little value because an authors job is to
| write." - rolls eyes
| kace91 wrote:
| These articles keep popping up, analyzing a hypothetical usage
| of AI (and guessing it won't be useful) as if it wasn't something
| already being used in practice. It's kinda weird to me.
|
| "It won't deal with abstractions" -> try asking cursor for
| potential refactors or patterns that could be useful for a given
| text.
|
| "It doesn't understand things beyond the code" -> try giving them
| an abstract jira ticket or asking what it thinks about certain
| naming, with enough context
|
| "Reading code and understanding whether it's wrong will take more
| time than writing it yourself" -> ask any engineer that saves
| time with everything from test scaffolding to run-and-forget
| scripts.
|
| It's as if I wrote an article today arguing that exercise won't
| make you able to lift more weight - every gymgoer would raise an
| eyebrow, and it's hard to imagine even the non-gymgoers would be
| sheltered enough to buy the argument either.
| tyleo wrote:
| Agreed. It isn't like crypto, where the proponents proclaimed
| some use case that would prove its value was always on the verge
| of arriving. AI is useful right now. People are using these tools
| now and enjoying them.
| jdiff wrote:
| I'm not sure that's a convincing argument given that crypto
| heads haven't just been enthusiastically chatting about the
| possibilities in the abstract. They do an awful lot of that,
| see Web3, but they _have_ been using crypto.
| awesome_dude wrote:
| I don't (use AI tools). I've tried them and found that they
| got in the way, made things more confusing, and did not get
| me to a point where the thing I was trying to create was
| working (let alone working well/safe to send to prod)
|
| I am /hoping/ that AI will improve, to the point that I can
| use it like Google or Wikipedia (that is, have some trust in
| what's being produced)
|
| I don't actually know anyone using AI right now. I know one
| person on Bluesky has found it helpful for prototyping things
| (and I'm kind of jealous of him because he's found how to get
| AI to "work" for him).
|
| Oh, I've also seen people pasting AI results into serious
| discussions to try and prove the experts wrong, but only to
| discover that the AI has produced flawed responses.
| tptacek wrote:
| _I don't actually know anyone using AI right now._
|
| I believe you, but this to me is a wild claim.
| awesome_dude wrote:
| Ha! I think the same way when I see people saying that AI
| is in widespread use - I believe that it's possible, but
| it feels like an outlandish claim
| LanceJones wrote:
| I'd say 500M WAUs on ChatGPT alone qualifies as
| widespread use.
| awesome_dude wrote:
| Ok, how much of that is developers using it to help them
| code?
| plsbenice34 wrote:
| Essentially the same for me. I had one incident where
| someone was arguing in favor of it and then immediately
| embarrassed themselves badly because they were misled by a
| ChatGPT error. I have the feeling that this hype will
| collapse as this happens more and people see how bad the
| consequences are when there are errors.
| plsbenice34 wrote:
| Even in 2012 bitcoin could very concretely be used to order
| drugs. Many people have used it to transact and preserve
| value in hostile economic environments. Etc etc. Ridiculous
| comment.
|
| Personally I have still yet to find LLMs useful at all with
| programming.
| crispinb wrote:
| It's the barstool economist argument style, on long-expired
| loan from medieval theology. Responding to clear empirical
| evidence that X occurs: "X can't happen because [insert
| 'rational' theory recapitulation]"
| Kapura wrote:
| Weird metaphor, because a gymgoer practices what they are
| doing by putting in the reps in order to increase personal
| capacity. It's more like you're laughing at people at the gym,
| saying "don't you know we have forklifts already lifting much
| more?"
| asadotzler wrote:
| "If you want to be good at lifting, just buy an exoskeleton
| like me and all my bros have. Never mind that your muscles
| will atrophy and you'll often get somersaulted down a flight
| of stairs while the exoskeleton makers all keep trying, and
| failing, to contain the exoskeletons' propensity for tossing
| people down flights of stairs."
| cgriswald wrote:
| I see it as almost the opposite. It's like the pulley has
| been invented but some people refuse to acknowledge its
| usefulness and make claims that you're weaker if you use it.
| But you can grow quite strong working a pulley all day.
| kace91 wrote:
| That's a completely different argument, however, and a good
| one to have.
|
| I can buy "if you use the forklift you'll eventually lose the
| ability to lift weight by yourself", but the author is going
| for "the forklift is actually not able to lift anything"
| which can trivially be proven wrong.
| jstummbillig wrote:
| Well said. It's not that there would not be much to seriously
| think about and discuss - so much is changing, so quickly - but
| the stuff that a lot of these articles focus on is a strange
| exercise in denial.
| amelius wrote:
| If AI gives a bad experience 20% of the time, and if there are
| 10M programmers using it, then about 3000 of them will have a
| bad experience 5 times in a row. You can't really blame them
| for giving up and writing about it.
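|
| A quick sanity check of that arithmetic, as a minimal Python
| sketch (assuming independent attempts):
|
|     p_streak = 0.2 ** 5             # 0.00032
|     print(10_000_000 * p_streak)    # ~3200, i.e. "about 3000"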
| charlie-83 wrote:
| Out of interest, what kind of codebases are you able to get AI
| to do these things on? Every time I have tried it with even
| simpler things than these it has failed spectacularly. Every
| example I see of people doing this kind of thing seems to be on
| some kind of web development, so I have a hypothesis that AI
| might currently be much worse for the kinds of codebases I work
| on.
| idontwantthis wrote:
| That's my experience too. It also fails terribly with
| ElasticSearch, probably because the documentation doesn't have
| a lot of examples. ChatGPT, Copilot, and Claude were all
| useless for that and gave completely plausible nonsense. I've
| used it with most success for writing unit tests and
| definitely shell scripts.
| kace91 wrote:
| I currently work for a finance-related scaleup. So backend
| systems, with significant challenges related to domain
| complexity and scalability, but nothing super low level
| either.
|
| It does take a bit to understand how to prompt in a way that
| the results are useful. Can you share what you tried so far?
| charlie-83 wrote:
| I have tried on a lot of different projects.
|
| I have a codebase in Zig and it doesn't understand Zig at
| all.
|
| I have another which is embedded C using the Zephyr RTOS. It
| doesn't understand Zephyr at all, and even if it could, it
| can't read the documentation for the different sensors nor
| can it plug in cables.
|
| I have a tui project in rust using ratatui. The core of the
| project is dealing with binary files and the time it takes
| to explain to it how specific bits of data are organised in
| the file and then check it got everything perfectly correct
| (it never has) is more than the time to just write the
| code. I expect I could have more success on the actual TUI
| side of things but haven't tried too much since I am trying
| to learn rust with this project.
|
| I just started an Android app with Flutter/Dart. I get the
| feeling it will work well for this but I am yet to verify,
| since I need to learn enough Flutter to be able to judge it.
|
| My dayjob is a big C++ codebase making a GUI app with Qt.
| The core of it is all dealing with USB devices and
| Bluetooth protocols which it doesn't understand at all. We
| also have lots of very complicated C++ data structures, I
| had hoped that the AI would be able to at least explain
| them to me but it just makes stuff up every time. This also
| means that getting it to edit any part of the codebase
| touching this sort of thing doesn't work. It just rips up
| any thread safety or allocates memory incorrectly etc. It
| also doesn't understand the compiler errors at all. I had a
| circular dependency and tried to get it to solve it but I
| had to give so many clues I basically told it what the
| problem was.
|
| I really expected it to work very well for the Qt interface
| since building UI is what everyone seems to be doing with
| it. But the amount of hand holding it requires is insane.
| Each prompt feels like a monkey's paw. In every experiment
| I've done it would have been faster to just write it
| myself. I need to try getting it to write an entirely new
| piece of UI from scratch since I've only been editing
| existing UI so far.
|
| Some of this is clearly a skill issue since I do feel
| myself getting better at prompting it and getting better
| results. However, I really do get the feeling that it
| either doesn't work or doesn't work as well on my code
| bases as other ones.
| doug_durham wrote:
| I work in Python, Swift, and Objective-C. AI tools work great
| in all of these environments. It's not just limited to web
| development.
| charlie-83 wrote:
| I suppose saying that I've only seen it in web development
| is a bit of an exaggeration. It would be more accurate to
| say that I haven't seen any examples of people using AI on
| a codebase that looks like one of the ones I work on.
| Clearly I am biased and just lump all the types of coding I'm
| not interested in into "web development".
| thenaturalist wrote:
| While I tend to agree with your premise that the linked article
| seems to be reasoning to the extreme on the basis of a very
| small code snippet, I think the core critique the author wants
| to make stands:
|
| AI agents alone, unbounded, currently cannot provide huge
| value.
|
| > try asking cursor for potential refactors or patterns that
| could be useful for a given text.
|
| You, the developer, will be selecting this text.
|
| > try giving them an abstract jira ticket or asking what it
| thinks about certain naming, with enough context
|
| You still selected a JIRA ticket and provided context.
|
| > ask any engineer that saves time with everything from test
| scaffolding to run-and-forget scripts.
|
| Yes that is true, but again, what you are providing as
| counterexamples are very bounded, aka easy, contexts.
|
| In any case, the industry (both the LLM providers as well as
| tooling builders and devs) is clearly going into the direction
| of constantly etching out small improvements by refining which
| context is deemed relevant for a given problem and most
| efficient ways to feed it to LLMs.
|
| And let's not kid ourselves, Microsoft, OpenAI, hell Anthropic
| all have 2027-2029 plans where these things will be
| significantly more powerful.
| tptacek wrote:
| Why does it matter that you're doing the thinking? Isn't that
| good news? What we're not doing any more is any of the rote
| recitation that takes up most of the day when building stuff.
| tomnipotent wrote:
| I think maybe you have unrealistic expectations.
|
| Yesterday I needed to import a 1GB CSV into ClickHouse. I
| copied the first 500 lines into Claude and asked it for a
| CREATE TABLE and CLI to import the file. Previous day I was
| running into a bug with some throw-away code so I pasted the
| error and code into Claude and it found the non-obvious
| mistake instantly. Week prior it saved me hours converting
| some early prototype code from React to Vue.
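|
| For the curious, a minimal Python sketch of the shape of that
| first task: sample rows, guess column types, emit the DDL, then
| import with clickhouse-client. The file and table names and the
| type-guessing heuristic are illustrative assumptions, not what
| Claude actually produced.
|
|     import csv
|
|     def guess_type(values):
|         # Crude inference over sampled values; default String.
|         for caster, ch_type in ((int, "Int64"),
|                                 (float, "Float64")):
|             try:
|                 for v in values:
|                     caster(v)
|                 return ch_type
|             except ValueError:
|                 continue
|         return "String"
|
|     def create_table_ddl(path, table, sample_rows=500):
|         with open(path, newline="") as f:
|             reader = csv.reader(f)
|             header = next(reader)
|             rows = [r for _, r in zip(range(sample_rows), reader)]
|         cols = []
|         for i, name in enumerate(header):
|             sample = [r[i] for r in rows if i < len(r)]
|             cols.append(f"    {name} {guess_type(sample)}")
|         return (f"CREATE TABLE {table} (\n" + ",\n".join(cols)
|                 + "\n) ENGINE = MergeTree ORDER BY tuple();")
|
|     print(create_table_ddl("data.csv", "imported"))
|     # Then: clickhouse-client --query \
|     #   "INSERT INTO imported FORMAT CSVWithNames" < data.csv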
|
| I do this probably half a dozen times a day, maybe more if
| I'm working on something unfamiliar. It saves at a minimum an
| hour a day by pointing me in the right direction - an answer
| I would have reached myself, but slower.
|
| Over a month, a quarter, a year... this adds up. I don't need
| "big wins" from my LLM to feel happy and productive with the
| many little wins it's giving me today. And this is the worst
| it's ever going to be.
| viraptor wrote:
| In lots of jobs, the person doing work is not the one
| selecting text or the JIRA ticket. There's lots of "this is
| what you're working on next" coding positions that are fully
| managed.
|
| But even if we ignored those, this feels like goalpost
| moving. They're not selecting the text? OK, ask the LLM what
| needs refactoring and why. They're not selecting the JIRA
| ticket with context? OK, provide MCP to JIRA, git and comms
| and ask it to select a ticket, then iterate on context until
| it's solvable. Going with "but someone else does the step
| above" applies to almost everyone's job as well.
| danielschreber wrote:
| >etching out
|
| Could you explain what you mean by etching out small
| improvements? I've never seen the phrase "etching out"
| before.
| verelo wrote:
| It's all good to me - let these folks stay in the simple times
| while you and I arbitrage our efforts against theirs? I agree,
| there's massive value in using these tools and it's hilarious
| to me when others don't see it. My reaction isn't going to be
| convince them they're wrong, it's just to find ways to use it
| to get ahead while leaving them behind.
| Glyptodon wrote:
| I don't know. Cursor is decent at refactoring. ("Look at x and
| ____ so that it ____." With some level of elaboration, where
| the change is code or code organization centric.)
|
| And it's okay at basic generation - "write a map or hash table
| wrapper where the input is a TZDB zone and the output is
| ______" will create something reasonable and get some of the
| TZDB zones wrong.
|
| But it hasn't been that great for me at really extensive
| conceptual coding so far. Though maybe I'm bad at prompting.
|
| Might be there's something I'm missing w/ my prompts.
| skydhash wrote:
| > _"It won't deal with abstractions" - > try asking cursor for
| potential refactors or patterns that could be useful for a
| given text._
|
| That is not what abstraction is about. Abstraction is having a
| simpler model to reason about, not simply code rearranging.
|
| > _"It doesn't understand things beyond the code" - > try
| giving them an abstract jira ticket or asking what it things
| about certain naming, with enough context_
|
| Again, that is still pretty much coding. What matters is the
| overall design (or at least the current module).
|
| > _"Reading code and understanding whether it's wrong will take
| more time than writing it yourself" - > ask any engineer that
| saves time with everything from test scaffolding to run-and-
| forget scripts._
|
| Imagine having a script and not checking the man pages for
| expected behavior. I hope the backup games are strong.
| tptacek wrote:
| There really is a category of these posts that are coming from
| some alternate dimension (or maybe we're in the alternate
| dimension and they're in the real one?) where this isn't one of
| the most important things ever to happen to software
| development. I'm a person who didn't even use autocomplete (I
| use LSPs almost entirely for cross-referencing --- oh wait
| that's another thing I'm apparently never going to need to do
| again because of LLMs), a sincere tooling skeptic. I do not
| understand how people expect to write convincingly that tools
| that reliably turn slapdash prose into median-grade idiomatic
| working code "provide little value".
| quesera wrote:
| > _tools that reliably turn slapdash prose into median-grade
| idiomatic working code_
|
| This may be the crux of it.
|
| Turning slapdash prose into median-grade code is not a
| problem I can imagine trying to solve.
|
| I think I'm better at describing code _in code_ than I am in
| prose.
|
| And when I ask for some algorithmic "save me a web search"
| help, e.g. "In Ruby using no external gems, implement the
| Chudnovsky algorithm to calculate the first N digits of Pi",
| the results I get are absurd, despite explicit assurances
| that the code is tested (often it won't even parse, and yes I
| know there's no Ruby interpreter inside, but why lie?), and
| simpering gratitude when I repeatedly point out obvious
| lexical problems like assigning constants in a loop.
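|
| For contrast, roughly what a correct answer to that prompt looks
| like, sketched in Python rather than Ruby, standard library only
| (the guard-digit count and terms-per-digit constant here are my
| assumptions):
|
|     from decimal import Decimal, getcontext
|
|     def chudnovsky_pi(n):
|         # Each series term adds ~14 digits; keep guard digits.
|         getcontext().prec = n + 10
|         C = 426880 * Decimal(10005).sqrt()
|         K, M, X, L = 6, 1, 1, 13591409
|         S = Decimal(L)
|         for i in range(1, n // 14 + 2):
|             M = M * (K ** 3 - 16 * K) // i ** 3  # exact integers
|             L += 545140134
|             X *= -262537412640768000
|             S += Decimal(M * L) / X
|             K += 12
|         return str(C / S)[:n + 1]  # "3." plus n - 1 digits
|
|     print(chudnovsky_pi(50))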
|
| I Want to Believe. And I certainly don't want to be "that
| guy", but my honest assessment of LLMs for coding so far is
| that they are a frustrating Junior, who maybe I should help
| out because mentoring might be part of my job, but from whom
| I should not expect any near-term technical contribution.
| tptacek wrote:
| It is most of the problem of delivering professional
| software.
| agentultra wrote:
| I don't think the argument from such a simple example does much
| for the author's point.
|
| The bigger risk is skill atrophy.
|
| Proponents say it doesn't matter. We shouldn't have to care
| about memory allocation or dependencies. The AI system will
| eventually have all of the information it needs. We just have
| to tell it what we want.
|
| However, knowing what you want requires knowledge about the
| subject. If you're not a security engineer you might not know
| what funny machines are. If someone finds an exploit using them
| you'll have no idea what to ask for.
|
| AI may be useful for some but at the end of the day, knowledge
| is useful.
| dymk wrote:
| And yet I keep meeting programmers who say AI coding assistants
| are saving them tons of time or helping them work through
| problems they otherwise wouldn't have been able to tackle. I
| count myself among that group at this point. Maybe that means I'm
| just not a very good programmer if I need the assistance, but I'd
| like to think my work speaks for itself at this point.
|
| Some things where I've found AI coding assistants to be fantastic
| time savers:
|
| - Searching a codebase with natural language
| - Quickly grokking the purpose of a function or file or module
| - Rubber duck debugging some particularly tricky code
| - Coming up with tests exercising functionality I hadn't yet
|   considered
| - Getting up to speed with popular libraries and APIs
| WhitneyLand wrote:
| If it's of such little value, does he really want to compete
| against developers trying to do the same thing he is, but who
| have the benefit of it?
| minimaxir wrote:
| > But AI doesn't think -- it predicts patterns in language.
|
| Boilerplate code _is_ a pattern, and code is a language. That's
| part of why AI-generated code is especially effective for simple
| tasks.
|
| It's when you get into more complicated apps that the pros/cons
| of AI coding start to be more apparent.
| permo-w wrote:
| not even necessarily complicated, but also obscure
| gersh wrote:
| It seems like the traditional way to develop good judgement is by
| getting experience with hands-on coding. If that is all
| automated, how will people get the experience to have good
| judgement? Will fewer people get the experiences necessary to
| have good judgement?
| nico wrote:
| Compilers, for the most part, made it unnecessary for
| programmers to check the assembly code. There are still
| compiler programmers that do need to deal with that, but most
| programmers get to benefit from just trusting that the
| compilers, and by extension the compiler programmers, are doing
| a good job
|
| We are in a transition period now. But eventually, most
| programmers will probably just get to trust the AIs and the
| code they generate, maybe do some debugging here and there at
| the most. Essentially AIs are becoming the English -> Code
| compilers
| asadotzler wrote:
| In my experience, compilers are far more predictable and
| consistent than LLMs, making them suitable for their purpose
| in important ways that LLMs are not.
| permo-w wrote:
| I honestly think people are so massively panicking over
| nothing with AI. Even wrt graphic design, which I think
| people are most worried about, the main, central skill of a
| graphic designer is not the actual graft of sitting down and
| drawing the design, it's having the taste and skill and
| knowledge to make design choices that are worthwhile and
| useful and aesthetically pleasing. I can fart around all day
| on Stable Diffusion or telling an LLM to design a website,
| but I don't know shit about UI/UX design or colour theory or
| simply what appeals to people visually, and I doubt an AI can
| teach me it to any real degree.
|
| Yes, there are now likely going to be fewer billable hours and
| perhaps less joy in the work, but at the same time I suspect
| that managers who decide they can forgo graphic designers and
| just get programmers to do it with AI, without someone trained,
| are going to lose a competitive advantage.
| Falimonda wrote:
| I fear this will not age well.
|
| Which models have you tried to date? Can you come up with a top 3
| ranking among popular models based on your definition of value?
|
| What can be said about the ability of an LLM to translate your
| thinking represented in natural language to working code at rates
| exceeding 5-10x your typing speed?
|
| Mark my words: Every single business that has a need for SWEs
| will obligate their SWEs to use AI coding assistants by the end
| of 2026, if not by the end of 2025. It will not be optional like
| it is today. Now is the time you should be exploring which models
| are better at "thinking" than others, and discerning which
| thinking you should be doing vs. which thinking you can leave up
| to ever-advancing LLMs.
| jdiff wrote:
| I've had to yank tokens out of the mouths of too many thinking
| models stuck in loops of (internally, within their own chain of
| thought) rephrasing the same broken function over and over
| again, realizing each time that it doesn't meet constraints,
| and trying the same thing again. Meanwhile, I was sat staring
| at an opaque spinner wondering if it would have been easier to
| just write it myself. This was with Gemini 2.5 Pro for
| reference.
|
| Drop me a message on New Year's Day 2027. I'm betting I'll
| still be using them optionally.
| Falimonda wrote:
| I've experienced Gemini get stuck as you describe a handful
| of times. With that said, my prediction is based on the
| observation that these tools are already force multipliers,
| and they're only getting better each passing quarter.
|
| You'll of course be free to use them optionally in your free
| time and on personal projects. It won't be the case at your
| place of employment.
|
| I will mark my calendar!
| marcusb wrote:
| This reminds me of the story a few days ago about "what is your
| best prompt to stump LLMs", and many of the second level
| replies were links to current chat transcripts where the LLM
| handled the prompt without issue.
|
| I think there are a couple of problems at play: 1) people who
| don't want the tools to have value, for various reasons, and
| have therefore decided the tools don't have value; 2) people
| who tried the tools six months or a year ago and had a bad
| experience and gave up; and 3) people who haven't figured out
| how to make good use of the tools to improve their productivity
| (this one seems to be heavily impacted by various grifters who
| overstate what the coding assistants can do, and people
| underestimating the effort they have to put in to get good at
| getting good output from the models.)
| tomschwiha wrote:
| One thing AI helps me with is to keep going.
|
| Does it do things wrong (compared to what I have in my mind)? Of
| course. But it helps to get code on screen quicker. Editing /
| rolling back feels faster than typing everything myself.
| rcarmo wrote:
| 4 lines of JS. A screenful of "reasoning". Not much I can agree
| with.
|
| Meanwhile I just asked Gemini in VS Code Agent Mode to build an
| HTTP-like router using a trie and then refactor it as a Python
| decorator, and other than a somewhat dumb corner case it failed
| at, it generated a pretty useful piece of code that saved me a
| couple of hours (I had actually done this before a few years ago,
| so I knew exactly what I wanted).
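|
| For reference, a minimal sketch of the shape I was going for
| (written from scratch here, not Gemini's output; the names and
| the ":param" syntax are illustrative):
|
|     class TrieRouter:
|         # Path segments are trie edges; ":name" segments
|         # capture parameters.
|         def __init__(self):
|             self.root = {}
|
|         def route(self, path):
|             # Decorator form: @router.route("/users/:id")
|             def register(handler):
|                 node = self.root
|                 for seg in filter(None, path.split("/")):
|                     node = node.setdefault(seg, {})
|                 node["__handler__"] = handler
|                 return handler
|             return register
|
|         def dispatch(self, path):
|             node, params = self.root, {}
|             for seg in filter(None, path.split("/")):
|                 if seg in node:
|                     node = node[seg]
|                     continue
|                 # Fall back to a parameter edge like ":id".
|                 edges = [k for k in node if k.startswith(":")]
|                 if not edges:
|                     raise LookupError("no route: " + path)
|                 params[edges[0][1:]] = seg
|                 node = node[edges[0]]
|             if "__handler__" not in node:
|                 raise LookupError("no route: " + path)
|             return node["__handler__"](**params)
|
|     router = TrieRouter()
|
|     @router.route("/users/:id")
|     def show_user(id):
|         return "user " + id
|
|     print(router.dispatch("/users/42"))  # -> user 42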
|
| Replace programmers? No. Well, except front-end (that kind of
| code is just too formulaic, transactional and often boring to
| do), and my experiments with React and Vue were pretty much "just
| add CSS".
|
| Add value? Heck yes - although I am still very wary of letting
| LLM-written code into production without a thorough review.
| jdiff wrote:
| Not even front end, unless it literally is a dumb thin wrapper
| around a back end. If you are processing anything on that front
| end, AI is likely to fall flat as quickly as it would on the
| backend.
| permo-w wrote:
| based on what?
| jdiff wrote:
| My own experience writing a web-based, SVG-based 3D
| modeler. No traditional back end, but when working on the
| underlying 3D engine it shits the bed from all the broken
| assumptions and uncommon conventions used there. And in the
| UI, the case I have in mind involved pointer capture and
| event handling, it chases down phantoms declaring it's
| working around behavior that isn't in the spec. I bring it
| the spec, I bring it minimal examples producing the desired
| behavior, and it still can't produce working code. It still
| tries to critique existing details that aren't part of the
| problem, as evidenced by the fact it took me 5 minutes to
| debug and fix myself when I got tired of pruning context.
| At one point it highlighted a line of code and suggested
| the problem could be a particular function getting called
| after that line. That function was called 10 lines above
| the highlighted line, in a section it re-output in a quote
| block.
|
| So yes, it's bad for front end work too if your front end
| isn't just shoveling data into your back end.
|
| AI's fine for well-trodden roads. It's awful if you're
| beating your own path, and especially bad at treading a new
| path just alongside a superhighway in the training data.
| permo-w wrote:
| It built the meat of the code; you spent 5 minutes fixing
| the more complex and esoteric issues. Is this not the
| desired situation? You saved time, but your skillset
| remained viable.
|
| > AI's fine for well-trodden roads. It's awful if you're
| beating your own path, and especially bad at treading a
| new path just alongside a superhighway in the training
| data.
|
| I very much agree with this, although I think that it can
| be ameliorated significantly with clever prompting
| jolt42 wrote:
| > that kind of code is just too formulaic, transactional and
| often boring to do
|
| No offense, but that sounds like every programmer that hasn't
| done front-end development to me. Maybe for some class of
| front-ends (the same stuff that Ruby on Rails could generate),
| but past that things tend to get not boring real fast.
| nottorp wrote:
| > I had actually done this before a few years ago, so I knew
| exactly what I wanted
|
| Oh :) LLMs do work sometimes when you already know what you
| want them to write.
| greatpostman wrote:
| Honestly, o3 has completely blown my mind in terms of ability to
| come up with useful abstractions beyond what I would normally
| build. Most people claiming LLMs are limited just aren't using
| the tools enough, and can't see the trajectory of increasing
| ability.
| quesera wrote:
| > _Most people claiming LLMs are limited just aren't using the
| tools enough_
|
| The old quote might apply:
|
| ~"XML is like violence. If it's not working for you, you need
| to use more of it".
|
| (I think this is from Tim Bray -- it was certainly in his
| .signature for a while -- but oddly a quick web search doesn't
| give me anything authoritative. I asked Gemma3, which suggests
| Drew Conroy instead)
| meander_water wrote:
| A programmer's job is to provide value to the business. Thinking
| is certainly a part of the process, but not the job in itself.
|
| I agree with the initial point he's making here - that code takes
| time to parse mentally, but that does not naturally lead to the
| conclusion that this _is_ the job.
| kristopolous wrote:
| It's the "Day-50" problem.
|
| On Day-0, AI is great but by Day-50 there are preferences and
| nuances that aren't captured through textual evidence. The
| productivity gains mostly vanish.
|
| Ultimately AI coding efficacy is an HCI relationship and you need
| different relationships (workflows) at different points in time.
|
| That's why, currently, as time progresses you use AI less and
| less on any feature and fall back to human. Your workflow isn't
| flexible enough.
|
| You don't need to tell me you do this, everyone does.
|
| So the real problem isn't the Day-0 solution, it's solving the
| HCI workflow problem to get productivity gains at Day-50.
|
| Smarter AI isn't going to solve this. Large enough code becomes
| internally contradictory, documentation becomes dated, tickets
| become invalid, design docs are based on older conceptions.
| Devin, plandex, aider, goose, claude desktop, openai codex, these
| are all Day-0 relationships. The best might be a Day-10 solution,
| but none are Day-50.
|
| The future world of GPT-5 and Sonnet-4 still won't read your
| thoughts.
|
| Day-50 productivity is ultimately a user-interface problem - a
| relationship negotiation and a fundamentally dynamic
| relationship.
|
| I talked about what I'm doing to empower new workflows over here:
| https://news.ycombinator.com/item?id=43814203
| Bengalilol wrote:
| You pinpoint a truly important thing, even though I cannot quite
| put it into words: I think that getting lost with AI coding
| assistants is far worse than getting lost as a programmer. It
| is like doing vanilla code or trying to make a framework suit
| your needs.
|
| AI coding assistants provide, 90% of the time, more value than
| the good old Google search. Nothing more, nothing less. But I
| don't use AI to code for me, I just use it to optimize very
| small fractions (ie: methods/functions at most).
|
| > The future world of GPT-5 and Sonnet-4 still won't read your
| thoughts.
|
| Chills ahead. For sure, it will happen some day. And there won't
| be any reason to not embrace it (although I am, for now,
| absolutely reluctant to such an idea).
| ChrisMarshallNY wrote:
| I haven't been using AI for coding assistance. I use it like
| someone I can spin around in my chair, and ask for any ideas.
|
| Like some knucklehead sitting behind me, sometimes, it has given
| me good ideas. Other times ... _not so much_.
|
| I have to carefully consider the advice and code that I get.
| Sometimes, it works, but it does not work _well_. I don't think
| that I've ever used suggested code verbatim. I _always_ need to
| modify it; sometimes, heavily.
|
| So I still have to think.
| calf wrote:
| Do non-AI coding assistants provide value?
| moshegramovsky wrote:
| It doesn't seem like the author has ever used AI to write code.
| You definitely can ask it to refactor. Both ChatGPT and Gemini
| have done excellent work for me on refactors, and they have also
| made mistakes. It seems like they are both quite good at making
| lengthy, high-quality suggestions about how to refactor code.
|
| His argument about debugging is absolutely asinine. I use both
| GDB and Visual Studio at work. I hate Visual Studio except for
| the debugger. GDB is definitely better than nothing, but only
| just. I am way, way, way more productive debugging in Visual
| Studio.
|
| Using a good debugger can absolutely help you understand the code
| better and faster. Sorry but that's true whether the author likes
| it or not.
| robertclaus wrote:
| In the past I've worked at startups that hired way too many
| bright junior developers and at companies that insisted on only
| hiring senior developers. The arguments for/against AI coding
| assistants feel very reminiscent of the arguments that occur
| around what seniority balance we want on an engineering team. In
| my experience it's a matter of balancing between doing complex
| work yourself and handing off simple work.
| bastawhiz wrote:
| I have not had the same experience as the author. The code I have
| my tools write is not long. I write a little bit at a time, and I
| know what I expect it to generate before it generates it. If what
| it generates isn't what I expect, that's a good hint to me that I
| haven't been descriptive enough with my comments or naming or
| method signatures.
|
| I use Cursor not because I want it to think for me, but because I
| can only type so fast. I get out of it exactly the amount of
| value that I expect to get out of it. I can tell it to go through
| a file and perform a purely mechanical reformatting (like
| converting camel case to snake case) and it's faster to review
| the results than it is for me to try some clever regexp and screw
| it up five or six times.
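|
| For the record, the clever regexp does exist; a minimal Python
| sketch of the usual two-pass recipe:
|
|     import re
|
|     def camel_to_snake(name):
|         # "parseHTTPResponse" -> "parse_http_response"
|         s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
|         return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()
|
|     print(camel_to_snake("parseHTTPResponse"))
|
| But that's exactly the point: reviewing this after the fact is
| easier than getting it right on the first try.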
|
| And quite honestly, for me that's the dream. Reducing the
| friction of human-machine interaction is _exactly the goal_ of
| designing good tools. If there was no meaningful value to be had
| from being able to get my ideas into the machine faster, nobody
| would buy fancy keyboards or (non-accessibility) dictation
| software.
| mrtksn wrote:
| Software ate the world, it's time for AI to eat the software :)
|
| Anything methodical is exactly what the current gen AI can do.
| It's phenomenal at translation, be it human language to human
| language or an algorithm description into computer language.
|
| People like to make fun of "vibe coding", but that's
| actually a purification process where humans are getting rid of
| the toolset that we used to master to be able to make the
| computer do what we tell it to do.
|
| Most of today's AI developer tools are misguided because they are
| trying to orchestrate tools that were created to help people
| write and manage software.
|
| IMHO the next-gen tools will write code that is not intended for
| human consumption. All the frameworks, version management, coding
| paradigms etc. will be relics of the past. Curiosities for people
| who are fascinated by that kind of thing, not production
| material.
| beernet wrote:
| Call it AI, ML, Data Mining, it does not matter. Truth is these
| tools have been disrupting the SWE market and will continue to do
| so. People working with them will simply be more effective. Until
| even they are obsolete. Don't hate the player, hate the game.
| androng wrote:
| If AI coding assistants provide little value then why is Cursor
| IDE a $300M company, and why does this study say it makes people
| 37% more productive?
|
| https://exec.mit.edu/s/blog-post/the-productivity-effects-of...
| hooloovoo_zoo wrote:
| That study shows nothing of the sort. It essentially showed
| ChatGPT is better at pumping out boilerplate than humans. Here
| are the tasks:
| https://www.science.org/action/downloadSupplement?doi=10.112...
| einpoklum wrote:
| Reading just the title:
|
| It is _because_ a programmer's job is to think that AI Coding
| assistants may provide value. They would (and perhaps already do)
| complete the boiler plate, and perhaps help you access
| information faster. They also have detriments, may atrophy some
| of your capabilities, may tempt you to go down more simplistic
| paths etc., but still.
|
| Reading the post as well: It didn't change my mind. As for what
| it actually says, my reaction is a shrug, "whatever".
| tangotaylor wrote:
| I think there's some truth here in that AI can be used as a band-
| aid to sweep issues of bad abstractions or terse syntax under the
| rug.
|
| For example, I often find myself reaching for Cursor/ChatGPT to
| help me with simple things in bash scripts (like argument
| parsing, looping through arrays, associative maps, handling
| spaces in inputs) because the syntax just isn't intuitive to me.
| But I can easily do these things in Python without asking an AI.
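|
| A concrete example of the asymmetry (standard library only; the
| flags and file names are made up): the same chores that send me
| to an AI in bash are a few obvious lines in Python.
|
|     import argparse
|
|     parser = argparse.ArgumentParser()
|     parser.add_argument("--verbose", action="store_true")
|     parser.add_argument("files", nargs="+")  # a list of inputs
|     # Values with spaces survive intact, no quoting gymnastics.
|     args = parser.parse_args(
|         ["--verbose", "my file.txt", "b.csv"])
|
|     counts = {}  # an "associative map"
|     for path in args.files:
|         counts[path] = counts.get(path, 0) + 1
|     print(args.verbose, counts)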
|
| I'm not a web developer but I imagine issues of boilerplate or
| awkward syntax could be solved with more "thinking" instead of
| using the AI as a better abstraction to the bad abstractions in
| your codebase.
| WillPostForFood wrote:
| _engineering workflows should be more about thinking and
| discussing than writing code_
|
| This is also the best case for using AI. You think, you discuss,
| then instruct the AI to write, then you review.
| Velorivox wrote:
| Title is a bit provocative and begs the question (is thinking the
| part being replaced?), but the bigger issue is what "little"
| means here. Little in absolute terms? I think that's harsh.
| Little in relation to how it's touted? That's a rational
| conclusion, I think.
|
| You need three things to use LLM-based tools effectively: 1) an
| understanding of what the tool is good at and what it isn't good
| at; 2) enough context and experience to input a well formulated
| query; and 3) the ability to carefully verify the output and
| discard it if necessary.
|
| This is the same skillset we've been using with search engines
| for years, and we know that not everyone has the same degree of
| Google-fu. There's a lot of subjectivity to the "value".
| rashidae wrote:
| I believe we need to take a more participatory approach to
| intelligence orchestration.
|
| It's not humans vs machines.
| henning wrote:
| I am the first to criticize LLMs and dumb AI hype. There is
| nothing wrong with using an LSP, and a coding assistant is just
| an enhanced LSP if that is all you want it to be. my job is to
| solve problems, and AI can slightly speed that up.
| rybosworld wrote:
| This is a tired viewpoint.
|
| There's a percentage of developers who, due to fear/ego/whatever,
| are refusing to understand how to use AI tooling. I used to
| debate but I've started to realize that these arguments are
| mostly not coming from a rational place.
| SkyPuncher wrote:
| I get massive value out of Agentic coding.
|
| I no longer need to worry about a massive amount of annoying, but
| largely meaningless implementation details. I don't need to pick
| a random variable/method/class name out of thin air. I don't need
| to plan ahead on how to DRY up a method. I don't need to consider
| every single edge case up front.
|
| Sure, I still need to tweak and correct things but we're talking
| about paint by number instead of starting with a blank canvas.
| It's such a massive reduction in mental load.
|
| I also find it reductionist to say LLMs don't think because
| they're simply predicting patterns. Predicting patterns is
| thinking. With the right context, there is little difference
| between complex pattern matching and actual thinking. Heck, a
| massive amount of my actual, professional software development
| work is figuring out how to pattern-match my idea into an
| existing code base. There's a LOT of value in consistency.
| merizian wrote:
| I prefer a more nuanced take. If I can't reliably delegate away a
| task, then it's usually not worth delegating. The time to review
| the code needs to be less than the time it takes to write it
| myself. This is true for people and AI.
|
| And there are now many tasks which I can confidently delegate
| away to AI, and that set of tasks is growing.
| gamescr wrote:
| Some problems require using a different kind of modeling other
| than language:
|
| https://medium.com/@lively_burlywood_cheetah_472/ai-cant-sol...
___________________________________________________________________
(page generated 2025-04-27 23:00 UTC)