[HN Gopher] Senior Developer Skills in the AI Age
___________________________________________________________________
Senior Developer Skills in the AI Age
Author : briankelly
Score : 390 points
Date : 2025-04-03 18:47 UTC (1 day ago)
(HTM) web link (manuel.kiessling.net)
(TXT) w3m dump (manuel.kiessling.net)
| thisdougb wrote:
| This is interesting, thanks for posting. I've been searching for
| some sort of 'real' usage of AI-coding. I'm a skeptic of the
| current state of things, so it's useful to see real code.
|
| I know Python, but have been coding in Go for the last few years.
| So I'm thinking how I'd implement this in Go.
|
| There's a lot of code there. Do you think it's a lot, or it
| doesn't matter? It seems reasonably clear though, easy to
| understand.
|
| I'd have expected better documentation/in-line comments. Is that
| something that you did/didn't specify?
| ManuelKiessling wrote:
| With this project, I was really only interested in the
| resulting _application_, and intentionally not in the
| resulting _code_.
|
| I really wanted to see how far I can get with that approach.
|
| I will ask it to clean up the code and its comments and report
| back.
| Quarrelsome wrote:
| This is extremely fascinating and finally something that feels
| extremely tangible as opposed to vibes based ideas around how AI
| will "take everyone's jobs" while failing to fill in the gaps
| between. This feels extremely gap-filling.
|
| I find it quite interesting how we can do a very large chunk of
| the work up front in design, in order to automate the rest of the
| work. It's almost as if waterfall was the better pattern all
| along, but we just lacked the tools at that time to make it work
| out.
| skizm wrote:
| Waterfall has always been the best model as long as specs are
| frozen, which is never the case.
| Quarrelsome wrote:
| sure, but if you're generating the code in a very small
| amount of time from the specs, then suddenly it's no longer
| the code that is the source, it's the specs.
|
| That's what waterfall always wanted to be and it failed
| because writing the code usually took a lot longer than
| writing the specs, but now perhaps, that is no longer the
| case.
| datadrivenangel wrote:
| Specs aren't done until the product is retired, thus, code
| ain't done either.
| thisdougb wrote:
| When I first started in dev, on a Unix OS, we did 'waterfall'
| (though we just called it releasing software, thirty years
| ago). We did a major release every year, minor releases
| every three months, and patches as and when. All this
| software was sent to customers on mag tapes, by courier.
| Minor releases were generally new features.
|
| Definitely times were different back then. But we did release
| software often, and it tended to be better quality than now
| (because we couldn't just fix-forward). I've been in plenty
| of Agile companies whose software moves slower than the old
| days. Too much haste, not enough speed.
|
| Specs were never frozen with waterfall.
| PaulDavisThe1st wrote:
| The difference between agile and waterfall only really
| matters at the start of a project. Once it is
| deployed/released/in-use, the two approaches converge, more
| or less.
| zelphirkalt wrote:
| Many companies now engage in serial waterfalling.
| kristiandupont wrote:
| Only if you don't learn anything while developing. Which is
| also never the case.
| reneherse wrote:
| Great observations.
|
| As a frontend designer, not a developer, I'm intrigued by the
| techniques presented by the author, though most devs commenting
| here seem to be objecting to the code quality. (Way above my
| pay grade, but hopefully a solvable problem.)
|
| As someone who loves to nerd out on creative processes, it's
| interesting indeed to contemplate whether AI assisted dev would
| favor waterfall vs incremental project structure.
|
| If indeed what works is waterfall dev similar to the method
| described in TFA, we'll want to figure out how to use iterative
| process elsewhere, for the sake of the many benefits when it
| comes to usability and utility.
|
| To me that suggests the main area of iteration would be A) on
| the human factors side: UX and UI design, and B) in the initial
| phases of the project.
|
| If we're using an AI-assisted "neo waterfall" approach to
| implementation, we'll want to be highly confident in the
| specifications we're basing it all on. On regular waterfall
| projects it's critical to reduce the need for post-launch
| changes due to their impact on project cost and timeline.[1] So
| for now it's best to assume we need to do the same for an AI-
| assisted implementation.
|
| To have confidence in our specs document we'll need a fully
| fledged design. A "fully humane", user approved, feature
| complete UX and UI. It will need to be aligned with users'
| mental models, goals, and preferences as much as possible. It
| will need to work within whatever the technical constraints are
| and meet the business goals of the project.
|
| Now all that is what designers should be doing anyway, but to
| me the stakes seem higher on a waterfall style build, even if
| it's AI-assisted.
|
| So to shoulder that greater responsibility, I think design
| teams are going to need a slightly different playbook and a
| more rigorous process than what's typical nowadays. The makeup
| of the design team may need to change as well.
|
| Just thinking about it now, here's a first take on what that
| process might be. It's an adaptation of the design techniques I
| currently use on non-waterfall projects.
|
| ----------
|
| ::Hypothesis for a UX and UI Design Method for AI-assisted,
| "Neo-Waterfall" Projects::
|
| Main premise: _Designers will need to lead a structured,
| iterative, comprehensive rapid prototyping phase at the
| beginning of a project._
|
| | Overview: |
|
| * In my experience, the _DESIGN->BUILD->USE/LEARN_ model is an
| excellent guide for wrangling the iterative cycles of a _rapid
| prototyping_ phase. With each "DBU/L" cycle we define problems
| to be solved, create solutions, then test them with users, etc.
|
| * We document every segment of the DBU/L cycle, including
| inputs and outputs, for future reference.
|
| * The _USE/LEARN_ phase of the DBU/L cycle gives us feedback
| and insight that informs what we explore in the next iteration.
|
| * Through multiple such iterations we gain confidence in the
| tradeoffs and assumptions baked into our prototypes.
|
| * We incrementally evolve the scope of the prototypes and
| further organize the UX object model with every iteration.
| (Object Oriented UX, aka OOUX, is the key to finding our way to
| both beautiful data models and user experiences).
|
| * Eventually our prototyping yields an iteration that fulfills
| user needs, business goals, and heeds technical constraints.
| That's when we can "freeze" the UX and UI models, firm up the
| data model and start writing the specifications for the neo-
| waterfall implementation.
|
| * An additional point of technique: Extrapolating from the
| techniques described in TFA, it seems designers will need to do
| their prototyping in a medium that can later function as a
| _keyframe constraint_ for the AI. (We don't want our AI agent
| changing the UI in the implementation phase of the waterfall
| project, so UI files are a necessary reference to bound its
| actions.)
|
| * Therefore, we'll need to determine which mediums of UI design
| the AI agents can perceive and work with. Will we need a full
| frontend design structured in directories containing shippable
| markup and CSS? Or can the AI agent work with Figma files? Or
| is the solution somewhere in between, say with a combination of
| drawings, design tokens, and a generic component library?
|
| * Finally, we'll need a method for testing the implemented UX
| and UI against the _USE_ criteria we arrived at during
| prototyping. We should be able to synthesize these criteria
| from the prototyping documentation, data modeling and
| specification documents. We need a reasonable set of tests for
| both human and technical factors.
|
| * Post launch, we should continue gathering feedback. No matter
| how good our original 1.0 is, software learns, wants to evolve.
| (Metaphorically, that is. But maybe some day soon--actually?)
| Designing and making changes to brownfield software originally
| built with AI-assistance might be a topic worthy of
| consideration on its own.
|
| ----------
|
| So as a designer, that's how I would approach the general
| problem. Preliminary thoughts anyway. These techniques aren't
| novel; I use variations of them in my consulting work. But so
| far I've only built alongside devs made from meat :-)
|
| I'll probably expand/refine this topic in a blog post. If
| anyone is interested in reading and discussing more, I can send
| you the link.
|
| Email me at: scott [AT] designerwho [DOT] codes
|
| ----------
|
| [1] For those who are new to waterfall project structure, know
| that unmaking and remaking the "final sausage" can be extremely
| complex and costly. It's easy to find huge projects that have
| failed completely due to the insurmountable complexity. One
| question for the future will be whether AI agents can be useful
| in such cases (no sausage pun intended).
| gsibble wrote:
| I completely agree, as a fellow senior coder. It allows me to
| move significantly faster through my tasks and makes me much more
| productive.
|
| It also makes coding a lot less painful because I'm not making
| typos or weird errors (since so much code autocompletes) that I
| spend less time debugging too.
| overgard wrote:
| I dunno, I just had Copilot sneak in a typo today that took
| about ten minutes of debugging to find. I certainly could have
| made a similar typo myself if copilot hadn't done it for me,
| but, all the same copilot probably saved me a minute worth of
| typing today but cost me 10 minutes of debugging.
| cube00 wrote:
| The vibe bros would have you believe your prompt is at fault
| and that you need to add "don't make typos".
| overgard wrote:
| True, I didn't have five paragraphs on the proper way to
| handle bounding boxes and the conceptual use of bounding
| boxes and "please don't confuse lower for upper". All my
| fault!
| miningape wrote:
| Even if you made a similar typo you'd have a better
| understanding of the code, having written it yourself. So it
| likely wouldn't have taken 10 minutes to debug.
| overgard wrote:
| Conceptually it was a really simple bug (I was returning a
| bounding box and it returned (min, min) instead of (min,
| max)). So, I mean, the amount that AI broke it was pretty
| minor and it was mostly my fault for not seeing it when I
| generated it. But you know, if it's messing stuff up when
| it's generating 4 lines of code I'm not really going to
| trust it with an entire file, or even an entire function.
| only-one1701 wrote:
| Increasingly I'm realizing that in most cases there is a
| SIGNIFICANT difference between how useful AI is on greenfield
| projects vs how useful it is on brownfield projects. For the
| former: pretty good! For the brownfield, it's often worse than
| useless.
| whiplash451 wrote:
| Right, but AI could change the ratio of greenfield vs
| brownfield then (<< I'll be faster if I rewrite this part from
| scratch >>)
| robenkleene wrote:
| I struggle to wrap my head around how this would work (and
| how AI can be used to maintain and refine software in
| general). Brownfield code got brown by being useful and
| solving a real problem, and doing it well enough to be
| maintained. So the AI approach is to throwaway the code
| that's proved its usefulness? I just don't get it.
| the-grump wrote:
| My experience on brownfield projects is the opposite.
| echelon wrote:
| I think there's a similar analogy here for products in the AI
| era.
|
| Bolting AI onto existing products probably doesn't make sense.
| AI is going to produce an entirely new set of products with AI-
| first creation modalities.
|
| You don't need AI in Photoshop / Gimp / Krita to manipulate
| images. You need a brand new AI-first creation tool that uses
| your mouse inputs like magic to create images. Image creation
| looks nothing like it did in the past.
|
| You don't need Figma to design a webpage. You need an AI-first
| tool that creates the output - Lovable, V0, etc. are becoming
| that.
|
| You don't need AI in your IDE. Your IDE needs to be built
| around AI. And perhaps eventually even programming languages
| and libraries themselves need AI annotations or ASTs.
|
| You don't need AI in Docs / Gmail / Sheets. You're going to be
| creating documents from scratch (maybe pasting things in). "My
| presentation has these ideas, figures, and facts" is much
| different than creating and editing the structure from scratch.
|
| There is so much new stuff to build, and the old tools are all
| going to die.
|
| I'd be shocked if anyone is using Gimp, Blender, Photoshop,
| Premiere, PowerPoint, etc. in ten years. These are all going to
| be reinvented. The only way these products themselves survive
| is if they undergo tectonic shifts in development and an
| eventual complete rewrite.
| dghlsakjg wrote:
| Just for the record, Photoshop's first generative 'AI'
| feature, Content Aware Fill, is 15 years old.
|
| That's a long time for Adobe not to have figured out what
| you are saying.
| echelon wrote:
| Photoshop is unapproachable to the 99%.
|
| A faster GPT 4o will kill Photoshop for good.
| dghlsakjg wrote:
| Photoshop is a tool designed for the 1% of people who
| want that level of control for their vision. Adobe has
| several other tools for other markets.
|
| Even the latest model from this week, which is undeniably
| impressive, can't get close to the level of control that
| photoshop gives me. It often edits parts of the image I
| haven't asked it to touch among other issues. I use
| photoshop as a former photojournalist, and AI manipulated
| images are of no use to me. My photos are documentary.
| They represent a slice of reality. I know that AI can
| create a realistic simulacrum of that, but I'm not
| interested.
|
| This is like saying we won't need text editors in the
| future. That's silly, there are some things that we won't
| need text editors for, but the ability of ai to generate
| and edit text files doesn't mean that we won't ever need
| to edit them manually.
| echelon wrote:
| This rhymes with still developing your own film.
|
| I'm really eager to see how this pans out in a decade.
| dghlsakjg wrote:
| > This rhymes with still developing your own film.
|
| Well, guilty, I actually do occasionally develop my own
| film.
|
| Film photography is actually expanding as an industry
| right now. We are well past the point where digital
| photography can do everything a film camera can do, and
| in most cases it can do it far better (very minor
| exceptions like large format photography still exist,
| where you can argue that film still has the edge).
|
| I think that whether you embrace AI photo editing or not
| has more to do with the purpose of your photos. If you
| are trying to create marketing collateral for a
| valentines day ad campaign, AI is probably going to be
| the best tool. If you are trying to document reality,
| even for aesthetic purposes, AI isn't great. When I make
| a portrait of my wife, I don't need AI to reinterpret her
| face for me.
| overgard wrote:
| Not a chance. Photoshop is about having maximum control
| and power. AI requires relinquishing a degree of control
| in exchange for speed. Totally different audiences.
| bmandale wrote:
| Lol photoshop has been integrating AI features since at
| least "content aware" stuff was released. Photoshop has a
| massive audience of people who want to be able to edit
| images quickly and easily.
| carpo wrote:
| I've been thinking about this a lot and agree. I think the UI
| will change drastically, maybe making voice central and you
| just describe what you want done. When language, image and
| voice models can be run locally things will get crazy.
| Aurornis wrote:
| It's also interesting to see how quickly the greenfield
| progress rate slows down as the projects grow.
|
| I skimmed the vibecoding subreddits for a while. It was common
| to see frustrations about how coding tools (Cursor, Copilot,
| etc) were great last month but terrible now. The pattern
| repeats every month, though. When you look closer it's usually
| people who were thrilled when their projects were small but are
| now frustrated when they're bigger.
| Workaccount2 wrote:
| The real issue is context size. You kinda need to know what
| you are doing in order to construct the project in pieces,
| and know what to tell the LLM when you spin up a new instance
| with fresh context to work on a single subsection. It's
| unwieldy and inefficient, and the model inevitably gets
| confused when it can't effectively look at the whole code base.
|
| Gemini 2.5 is much better in this regard; it can make decent
| output up to around 100k tokens, compared to Claude 3.7
| starting to choke around 32k. Long term, it remains to be
| seen whether this will still be an issue. If models can get
| to 5M context and perform like current models do with 5k
| context, it would be a
| total game changer.
| bdcravens wrote:
| I find that feeding in a bunch of context can help you
| refactor, add tests to a low coverage application pretty
| quickly, etc in brownfield apps.
| layer8 wrote:
| And greenfield turns into brownfield pretty quickly.
| SkyPuncher wrote:
| Oh, I find almost the exact opposite.
|
| On Greenfield projects there's simply too many options for it
| to pursue. It will take one approach in one place then switch
| to another.
|
| On a brownfield project, you can give it some reference code
| and tell it about places to look for patterns and it will
| understand them.
| necovek wrote:
| The premise might possibly be true, but as an actually seasoned
| Python developer, I've taken a look at one file:
| https://github.com/dx-tooling/platform-problem-monitoring-co...
|
| All of it smells of a (lousy) junior software engineer: from
| configuring the root logger at the top, at module level
| (which relies on module import caching not to be reapplied),
| to building a config file parser by hand instead of using the
| stdlib one, to a race in load_json, which checks for the
| file's existence with an if and then carries on as if the
| file is certainly there...
|
| In a nutshell, if the rest of it is like this, it simply sucks.
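| To make the raciness concrete, here is a sketch of the pattern
| (hypothetical code, not the actual file from the repo) next to
| the EAFP version that closes the gap between check and open:

```python
import json
from pathlib import Path

def load_json_racy(path: str) -> dict:
    # Check-then-use: the file can vanish between the
    # exists() check and the open() call (TOCTOU race).
    if Path(path).exists():
        with open(path) as f:
            return json.load(f)
    return {}

def load_json_eafp(path: str) -> dict:
    # EAFP: just attempt the open and handle the failure,
    # so there is no window for the race.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```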
| rybosome wrote:
| Ok - not wrong at all. Now take that feedback and put it in a
| prompt back to the LLM.
|
| They're very good at honing bad code into good code with good
| feedback. And when you can describe good code faster than you
| can write it - for instance it uses a library you're not
| intimately familiar with - this kind of coding can be
| enormously productive.
| necovek wrote:
| I do plan on experimenting with the latest versions of coding
| assistants, but last I tried them (6 months ago), none could
| satisfy all of the requirements at the same time.
|
| Perhaps there is simply too much crappy Python code around
| that they were trained on as Python is frequently used for
| "scripting".
|
| Perhaps the field has moved on and I need to try again.
|
| But looking at this, it would still be faster for me to type
| this out myself than go through multiple rounds of reviews
| and prompts.
|
| Really, a senior has not reviewed this, no matter their
| language (raciness throughout, not just this file).
| imiric wrote:
| > They're very good at honing bad code into good code with
| good feedback.
|
| And they're very bad at keeping other code good across
| iterations. So you might find that while they might've fixed
| the specific thing you asked for--in the best case scenario,
| assuming no hallucinations and such--they inadvertently broke
| something else. So this quickly becomes a game of whack-a-
| mole, at which point it's safer, quicker, and easier to fix
| it yourself. IME the chance of this happening is directly
| proportional to the length of the context.
| bongodongobob wrote:
| This typically happens when you run the chat too long. When
| it gives you a new codebase, fire up a new chat so the old
| stuff doesn't poison the context window.
| no_wizard wrote:
| Why isn't it smart enough to recognize new contexts that
| aren't related to old ones?
| bongodongobob wrote:
| I don't know, I didn't invent transformers. I do however
| know how to work with them.
| achierius wrote:
| But it rarely gives me a totally-new codebase unless I'm
| working on a very small project -- so I have to choose
| between ditching its understanding of some parts (e.g.
| "don't introduce this bug here, please") and avoiding
| confusion with others.
| aunty_helen wrote:
| Nah. This isn't true. Every time you hit enter you're not
| just getting a jr dev, you're getting a randomly selected jr
| dev.
|
| So, how did I end up with a logging.py, config.py, config in
| __init__.py and main.py? Well, I prompted it to fix the
| logging setup to use a specific format.
|
| I use cursor, it can spit out code at an amazing rate and
| reduced the amount of docs I need to read to get something
| done. But after its second attempt at something you need to
| jump in and do it yourself and most likely debug what was
| written.
| skydhash wrote:
| Are you reading a whole encyclopedia each time you are
| assigned a task? The one thing about learning is that it
| compounds. You get faster the longer you use a specific
| technology. So unless you use a different platform for each
| task, I don't think you have to read that much
| documentation (understanding them is another matter).
| achierius wrote:
| This is an important distinction though. LLMs don't have
| any persistent 'state': they have their activations,
| their context, and that's it. They only know what's pre-
| trained, and what's in their context. Now, their ability
| to do in-context learning is impressive, but you're
| fundamentally still stuck with the deviations and,
| eventually, forgetting that characterizes these guys --
| while a human, while less quick on the uptake, will
| nevertheless 'bake in' the lessons in a way that LLMs
| currently cannot.
|
| In some ways this is even more impressive -- every prompt
| you make, your LLM is in effect re-reading (and re-
| comprehending) your whole codebase, from scratch!
| BikiniPrince wrote:
| I've found AI tools extremely helpful in getting me up to
| speed with a library or defining an internal override not
| exposed by the help. However, if I'm not explicit in how to
| solve a problem the result looks like the bad code it's been
| ingesting.
| barrell wrote:
| I would not say it is "very good" at that. Maybe it's
| "capable," but my (ample) experience has been the opposite. I
| have found the more exact I describe a solution, the less
| likely it is to succeed. And the more of a solution it has
| come up with, the less likely it is to change its mind about
| things.
|
| Ever since the ~4o models, there seems to be a pretty decent
| chance that you ask it to change something specific and it
| says it will and it spits out line for line identical code to
| what you just asked it to change.
|
| I have had some really cool success with AI finding
| optimizations in my code, but only when specifically asked,
| and even then I just read the response as theory and go write
| it myself, often in 1-15% of the LoC of the LLM's version.
| mjr00 wrote:
| I "love" this part:
|
|     def ensure_dir_exists(path: str) -> None:
|         """
|         Ensure a directory exists.
|         Args:
|             path: Directory path
|         """
|
| An extremely useful and insightful comment. Then you look
| where it's actually used,
|
|     # Ensure the directory exists and is writable
|     ensure_dir_exists(work_dir)
|     work_path = Path(work_dir)
|     if not work_path.exists() or not os.access(work_dir, os.W_OK):
|
| ... so like, the entire function and its call (and its
| needlessly verbose comment) could be removed because the
| existence of the directory is being checked anyway by pathlib.
|
| This might not matter here because it's a small, trivial
| example, but if you have 10, 50, 100, 500 developers working on
| a codebase, and they're _all_ thoughtlessly slinging code like
| this in, you're going to have a dumpster fire soon enough.
|
| I honestly think "vibe coding" is the _best_ use case for AI
| coding, because at least then you're fully aware the code is
| throwaway shit and don't pretend otherwise.
|
| edit: and actually looking deeper, `ensure_dir_exists` actually
| _makes_ the directory, except it's already been made before
| the function is called so... sigh. Code reviews are going to be
| pretty tedious in the coming years, aren't they?
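| For what it's worth, the whole existence/writability dance
| collapses into one idiomatic stdlib call; a sketch with a
| hypothetical helper name, not the project's code:

```python
import os
from pathlib import Path

def ensure_writable_dir(path: str) -> Path:
    # mkdir with exist_ok=True creates the directory (and any
    # missing parents) or silently does nothing if it exists,
    # so no separate exists() check is needed.
    work_path = Path(path)
    work_path.mkdir(parents=True, exist_ok=True)
    if not os.access(work_path, os.W_OK):
        raise PermissionError(f"{path} is not writable")
    return work_path
```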
| milicat wrote:
| The more I browse through this, the more I agree. I feel like
| one could delete almost all comments from that project without
| losing any information - which means, at least the variable
| naming is (probably?) sensible. Then again, I don't know the
| application domain.
|
| Also...
|
|     def _save_current_date_time(current_date_time_file: str,
|                                 current_date_time: str) -> None:
|         with Path(current_date_time_file).open("w") as f:
|             f.write(current_date_time)
|
| there is a lot of obviously useful abstraction being missed,
| wasting lines of code that will all need to be maintained.
|
| The scary thing is: I have seen professional human developers
| write worse code.
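| The missed abstraction is that pathlib already provides the
| whole wrapper as a one-liner; a sketch (hypothetical name):

```python
from pathlib import Path

def save_text(path: str, content: str) -> None:
    # write_text opens, writes, and closes in one call,
    # replacing the hand-rolled open()/write() wrapper.
    Path(path).write_text(content)
```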
| ramesh31 wrote:
| >The scary thing is: I have seen professional human
| developers write worse code.
|
| This is kind of the rub of it all. If the code works, passes
| all relevant tests, is reasonably maintainable, and can be
| fitted into the system correctly with a well defined
| interface, does it really matter? I mean at that point its
| kind of like looking at the output of a bytecode compiler and
| being like "wow what a mess". And it's not like they _can't_
| write code up to your stylistic standards; it's just
| literally a matter of prompting for that.
| mjr00 wrote:
| > If the code works, passes all relevant tests, is
| reasonably maintainable, and can be fitted into the system
| correctly with a well defined interface, does it really
| matter?
|
| You're not wrong here, but there's a big difference in
| programming one-off tooling or prototype MVPs and
| programming things that need to be maintained for years and
| years.
|
| We did this song and dance pretty recently with dynamic
| typing. Developers thought it was so much more productive
| to use dynamically typed languages, because it _is_ in the
| initial phases. Then years went by, those small, quick-to-
| make dynamic codebases ended up becoming unmaintainable
| monstrosities, and those developers who hyped up dynamic
| typing invented Python /PHP type hinting and Flow for
| JavaScript, later moving to TypeScript entirely. Nowadays
| nobody seriously recommends building long-lived systems in
| untyped languages, _but_ they are still very useful for
| one-off scripting and more interactive /exploratory work
| where correctness is less important, i.e. Jupyter
| notebooks.
|
| I wouldn't be surprised to see the same pattern happen with
| low-supervision AI code; it's great for popping out the
| first MVP, but because it generates poor code, the gung-ho
| junior devs who think they're getting 10x productivity
| gains will wisen up and realize the value of spending an
| hour thinking about proper levels of abstraction instead of
| YOLO'ing the first thing the AI spits out when they want to
| build a system that's going to be worked on by multiple
| developers for multiple years.
| nottorp wrote:
| > those small, quick-to-make dynamic codebases ended up
| becoming unmaintainable monstrosities
|
| In my experience, type checking / type hinting already
| starts to pay off when more than one person is working on
| an even small-ish code base. Just because it helps you
| keep in mind what comes/goes to the other guy's code.
| lolinder wrote:
| And in my experience "me 3 months later" counts as a
| whole second developer that needs accommodating. The only
| time I appreciate not having to think about types is on
| code that I know I will never, ever come back to--stuff
| like a one off bash script.
| wesselbindt wrote:
| > "me 3 months later" counts as a whole second developer
|
| A fairly incompetent one, in my experience. And don't
| even get me started on "me 3 months ago", that guy's even
| worse.
| nottorp wrote:
| "How has that shit ever worked?"
|
| Me, looking at code 100% written by me last year.
| baq wrote:
| It gets worse with age and size of the project. I'm
| getting the same vibes, but for code written by me last
| month.
| guskel wrote:
| Yep, I've seen type hinting even be helpful without a
| type checker in python. Just as a way for devs to tell
| each other what they intend on passing. Even when a small
| percent of the hints are incorrect, having those hints
| there can still pay off.
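| A tiny sketch of that intent-communication (hypothetical
| names): even with no checker running, the signature tells the
| next developer exactly what crosses the boundary:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    score: float

def top_event(events: list[Event]) -> "Event | None":
    # The hints document intent: a list of Events in, a single
    # Event (or None, for an empty list) out. A checker like
    # mypy or pyright would also flag callers passing raw dicts.
    if not events:
        return None
    return max(events, key=lambda e: e.score)
```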
| bcoates wrote:
| I think the productivity gains of dynamic typed languages
| were real, and based on two things: dynamic typing (can)
| provide certain safety properties trivially, and dynamic
| typing neatly kills off the utterly inadequate type
| systems found in mainstream languages when they were
| launched (the 90s, mostly).
|
| You'll notice the type systems being bolted onto dynamic
| languages or found in serious attempts at new languages
| are radically different from the type systems being
| rejected by the likes of JavaScript, Python, Ruby and
| Perl.
| triyambakam wrote:
| The ML world being nearly entirely in Python, much of it
| untyped (and that the Python type system is pretty weak)
| is really scary.
| ramesh31 wrote:
| >The ML world being nearly entirely in Python, much of it
| untyped (and that the Python type system is pretty weak)
| is really scary
|
| I think this has a ton to do with the mixed results from
| "vibe coding" we've seen as the codebase grows in scope
| and complexity. Agents seem to break down without a good
| type system. Same goes for JS.
|
| I've just recently started on an Objective-C project
| using Cline, and it's like nirvana. I can code out an
| entire interface and have it implemented for me as I'm
| going. I see no reason it couldn't scale infinitely to
| massive LOC with good coding practices. The real killer
| feature is header files. Being able to have your entire
| projects headers in context at all time, along with a
| proper compiler for debugging, changes the game for how
| agents can reason on the whole codebase.
| dheera wrote:
| > You're not wrong here, but there's a big difference in
| programming one-off tooling or prototype MVPs and
| programming things that need to be maintained for years
| and years.
|
| Humans also worry about their jobs, especially in PIP-
| happy companies; they are very well known for writing
| intentionally over-complicated code that only they
| understand, so that they are irreplaceable.
| XorNot wrote:
| I'm not convinced this actually happens. Seems more like
| something people assume happens because they don't like
| whatever codebase is at the new job.
| dheera wrote:
| Oh, I'm convinced, I've seen it first hand.
| baq wrote:
| If your TC is 500k-1M and you don't feel like job hopping
| anymore, you'd certainly not want to get hit by a random
| layoff due to insufficient organizational masculinity or
| whatever. Maintaining a complex blob of mission critical
| code is one way of increasing your survival chances,
| though of course nothing is guaranteed.
| LtWorf wrote:
| People doing layoffs have no idea of who works and who's
| warming the chair.
| baq wrote:
| Depending on the layoff they may look into yearly
| reviews... or not.
| LtWorf wrote:
| Ah yes, those work /s
| SkyBelow wrote:
| The challenge is that sufficiently bad code could be
| intentional or it could be from a lack of skill.
|
| For example, I've seen a C# application where every
| function takes in and outputs an array of objects,
| supposedly built that way so the internal code can be
| modified without ever having to worry about the contract
| breaking. It was just as bad as you are imagining,
| probably worse. Was that incompetence or building things
| to be so complicated that others would struggle to work
| on it?
| ManuelKiessling wrote:
| I'm certainly _extremely_ happy to have an extensive
| type system in my daily driver languages, _especially_
| when working with AI coding assistance -- it's yet
| another very crucial guard rail that keeps the AI on
| track and makes a lot of fuckups downright impossible.
| dilyevsky wrote:
| what are you going to do when something suddenly _doesn't_
| work and Cursor endlessly spins without progress no matter
| how many "please don't make mistakes" you add? delete the
| whole thing and try to one-shot it again?
| nsonha wrote:
| Why do you HAVE TO one-shot? No one says you have to code
| like those influencers. You are a software engineer, use
| AI like one, iteratively.
| ramesh31 wrote:
| >No one says you have to code like those influencers. You
| are a software engineer, use AI like one, iteratively.
|
| This is my issue with all the AI naysayers at this point.
| It seems to all boil down to "haha, stupid noob can't
| code so he uses AI" in their minds. It's like they are
| incapable of understanding that there could
| simultaneously be a bunch of junior devs pushing
| greenfield YouTube demos of vibe coding, while at the
| same time expert software engineers are legitimately
| seeing their productivity increase 10x on serious
| codebases through judicious use.
|
| Go ahead and keep swinging that hammer, John Henry.
| shove wrote:
| "Maybe you didn't hear me, I said 'good morning steam
| driver, how are you?'"
| necovek wrote:
| > expert software engineers are legitimately seeing their
| productivity increase 10x
|
| It's funny you would say this, because we are really
| commenting on an article where a self-proclaimed "expert"
| has done that and the "10x" output is terrible.
| ManuelKiessling wrote:
| I have just checked my article -- the word "expert" isn't
| in it, so not quite sure where you got this from.
|
| I've been working in the field professionally since June 1998,
| and among other things, I was the tech lead on
| MyHammer.de, Germany's largest craftsman platform, and
| have built several other mid-scale online platforms over
| the decades.
|
| How _well_ I have done this, now that's for others to
| decide.
|
| Quite objectively though, I do have some amount of
| experience -- even a bad developer probably cannot help
| but pick up _some_ learnings over so many years in
| relevant real-world projects.
|
| However, and I think I stated this quite clearly, I am
| expressly _not_ an expert in Python.
|
| And yet, I could realize an actually working solution
| that solves an actual problem I had in a very real sense
| (and is nicely humming away for several weeks now).
|
| And this is precisely where yes, I _did_ experience a 10x
| productivity increase; it would have certainly taken me
| at least a week or two to realize the same solution
| myself.
| necovek wrote:
| Apologies for implying you are claiming to be an expert
| software engineer: I took the "senior" in the title and
| "25 years of experience" in the post to mean similar
| things as "expert".
|
| I don't doubt this is doing something useful for you. It
| might even be mostly correct.
|
| But it is not a positive advertisement for what AI can
| do: just like the code is objectively crap, you can't
| easily trust the output without a comprehensive review.
| And without doubting your expertise, I don't think you
| reviewed it, or you would have caught the same smells I
| did.
|
| What this article tells me is that when the task is
| sufficiently non-critical that you can ignore being
| perfectly correct, you can steer AI coding assistants
| into producing some garbage code that very well might
| work or appear to work (when you are making stats, those
| are tricky even with utmost manual care).
|
| Which is amazing, in my opinion!
|
| But not what the premise seems to be (how a senior will
| make it do something very nice with decent quality code).
|
| Out of curiosity why did you not build this tool in a
| language you generally use?
| ManuelKiessling wrote:
| Because I wanted exactly this experience: can I get to
| the desired result -- functionality-wise, if not code-
| wise! -- even if I choose the stack that makes sense in
| terms of technology, not the one that I happen to be
| proficient in?
|
| And if I cannot bring language proficiency to the table
| -- which of my capabilities as a seasoned software &
| systems guy _can_ I put to use?
|
| In the brown-field projects where my team and I have the
| AI implement whole features, the resulting code quality
| -- under our sharp and experienced eyes -- tends to end
| up just fine.
|
| I think I need to make the differences between both
| examples more clear...
| necovek wrote:
| Ok, I guess you shouldn't complain that you really got
| exactly what you wanted.
|
| However, your writing style implied that the result was
| somehow better because you were otherwise an experienced
| engineer.
|
| Even your clarification in the post sits right below your
| statement how your experience made this very smooth, with
| no explanation that you were going to be happy with bad
| code as long as it works.
| ManuelKiessling wrote:
| I guess we are slowly but steadily approaching splitting-
| hairs-territory, so not sure if this is still worth it...
|
| However. I'm not quite sure where I complained. Certainly
| not in the post.
|
| And yes, I'm _very_ convinced that the result turned out
| a lot better than it would have turned out if an
| inexperienced "vibe coder" had tried to achieve the same
| end result.
|
| Actually, I'm pretty sure that without my extensive and
| structured requirements and the guard rails, the AI coding session
| would have ended in a hot mess in the best case, and a
| non-functioning result in the worst case.
|
| I'm 100% convinced that these two statements are true and
| relevant to the topic:
|
| That a) someone lacking my level of experience and
| expertise is simply not capable of producing a document
| like https://github.com/dx-tooling/platform-problem-
| monitoring-co...
|
| And that b) using said document as the basis for the
| agent-powered AI coding session had a significant impact
| on the process as well as the end result of the session.
| achierius wrote:
| I think some of the suspicion is that it's really not 10x
| in practice.
| Macha wrote:
| Even if AI could write code perfectly as soon as I thought
| of it, that would not improve my productivity 10x.
| Coding was never the slow part. Everything that goes
| around coding (like determining that the extra load here
| is not going to overload things, getting PMs to actually
| make their mind up what the feature is going to do,
| etc.), means that there's simply not that much time to be
| saved on coding activities.
| nsonha wrote:
| The same argument could be made for not using any tooling
| at all. "Tech is the easy part": by that logic there's no
| difference between typing code in Notepad with zero
| process/engineering infrastructure, because stakeholder
| management is apparently the main engineering skill.
|
| Btw, AI doesn't just code; there are AIs for debugging,
| monitoring, etc. too.
| necovek wrote:
| If you were to really measure speed improvement of
| notepad vs a tricked out IDE, it's probably not much. The
| problem would be the annoyance caused to an engineer who
| has to manually type out everything.
|
| No, coding speed is really not the bottleneck to software
| engineer productivity.
| nsonha wrote:
| > coding speed > the annoyance caused to an engineer
|
| No one said productivity is this one thing and not that
| one thing, only you say that because it's convenient for
| your argument. Productivity is a combination of many
| things, and again it's not just typing out code that's
| the only area AI can help.
| necovek wrote:
| The argument of "coding speed not a bottleneck to
| productivity" is not in contradiction to "productivity is
| a combination": it even implies it.
|
| Again, the context here was that somebody discussed speed
| of coding and you raised the point of not using any
| tooling with Notepad.
| achierius wrote:
| There are two levels to this.
|
| 1. Tooling obviously does improve performance, but not by
| so huge a margin. Yes, if AI could automate more elements of
| tooling, that would very much help. If I could tell an AI
| "bisect this bug, across all projects in our system,
| starting with this known-bad point", that would be very
| helpful -- sometimes. And I'm sure we'll get there soon
| enough. But there is fractal complexity here: what if
| isolating the bug requires stepping into LLDB, or dumping
| some object code, or running with certain stressors on
| certain hardware? So it's not clear that "LLM can produce
| code from specs, given tight oversight" will map (soon)
| to "LLM can independently assemble tools together and
| agentically do what I need done".
|
| 2. Even if all tooling were automated, there's still
| going to be stuff left over. Can the LLM draft
| architectural specs, reach out to other teams (or their
| LLMs), sit in meetings and piece together the big
| picture, suss out what the execs _really_ want us to be
| working on, etc.? I do spend a significant (double-digit)
| percentage of my time working on that, so if you
| eliminate everything else -- then you could get 10x
| improvement, but going beyond that would start to run up
| against Amdahl's Law.
| johnnyanmac wrote:
| My grievances are simple: an expert programmer utilizing
| AI will be a truly dangerous force.
|
| But that's not what we get in this early stage of
| grifting. We get 10% marketing buzz on how cool this is
| with stuff that cannot be recreated in the tool alone,
| and 89% of lazy or inexperienced developers who just turn
| in slop with little or no iteration. The latter don't
| even understand the code they generated.
|
| That 1% will be amazing; it's too bad the barrel is full
| of rotten apples hiding that potential. The experts also
| tend to keep to themselves, in my experience. The 89%
| includes a lot of Dunning-Kruger as well, which makes
| those outspoken "experts" questionable (maybe part of why
| real experts aren't commenting on their experience).
| LtWorf wrote:
| Weren't you the guy who only writes HTML? Maybe let
| domain experts comment on their domain of expertise.
| dilyevsky wrote:
| The point is that because it generally produces crap code,
| you have to one-shot, or else iteration becomes hard. Similar
| to how a junior would try to refactor their mess and just
| make a bigger mess
| nsonha wrote:
| I find it hard to believe that when the AI generates crap
| code, there is absolutely nothing you can do (change the
| prompt, modify context, add examples) to make it do what
| you want. It has not been my experience either. I only
| use AI to make small modules and refactor instead of
| one-shotting.
|
| Also I find "AI makes crap code so we should give it a
| bigger task" illogical.
| stemlord wrote:
| Right, and the reason why professional developers are
| writing worse code out there is most likely because they
| simply don't have the time / aren't paid to care more
| about it. The LLM is then mildly improving the output in
| this brand of common real-world scenario.
| necovek wrote:
| I think this code is at least twice the size it needs to
| be compared to nicer, manually produced Python code: a
| lot of it is really superfluous.
|
| People have different definitions of "reasonably
| maintainable", but if code has extra stuff that provides no
| value, it always perplexes the reader (what is the point of
| this? what am I missing?), and increases cognitive load
| significantly.
|
| But if AI coding tools were advertised as "get 10x the
| output of your least capable teammate", would they really
| go anywhere?
|
| I love doing code reviews as an opportunity to teach
| people. Doing this one would suck.
| ManuelKiessling wrote:
| Good insight, and indeed quite exactly my state of mind
| while creating _this particular solution_.
|
| In this case, I did put in the guard rails to ensure that
| I reach my goal in hopefully a straight line and as quickly
| as possible, but to be honest, I did not give much thought
| to long-term maintainability or ease of extending it with
| more and more features, because I needed a very specific
| solution for a use case that doesn't change much.
|
| I'm definitely working differently in my brown-field
| projects where I'm intimately familiar with the tech stack
| and architecture -- I do very thorough code reviews
| afterwards.
| fzeroracer wrote:
| At the very least, if a professional human developer writes
| garbage code you can confidently blame them and either try to
| get them to improve or reduce the impact they have on the
| project.
|
| With AI they can simply blame whatever model they used and
| continually shovel trash out there instantly.
| Hojojo wrote:
| I don't see the difference there. Whether I've written all
| the code myself or an AI wrote all of it, my name will be
| on the commit. I'll be the person people turn to when they
| question why code is the way it is. In a pull request for
| my commit, I'll be the one discussing it with my
| colleagues. I can't say "oh, the AI wrote it". I'm
| responsible for the code. Full stop.
|
| If you're in a team where somebody can continuously commit
| trash without any repercussions, this isn't a problem
| caused by AI.
| Aurornis wrote:
| > I feel like one could delete almost all comments from that
| project without losing any information
|
| I'm far from a heavy LLM coder, but I've noticed a massive
| excess of unnecessary comments in most output. I'm always
| deleting the obvious ones.
|
| But then I started noticing that the comments seem to help
| the LLM navigate additional code changes. It's like a big
| trail of breadcrumbs for the LLM to parse.
|
| I wouldn't be surprised if vibe coders get trained to leave
| the excess comments in place.
| lolinder wrote:
| It doesn't hurt that the model vendors get paid by the
| token, so there's zero incentive to correct this pattern at
| the model layer.
| thesnide wrote:
| Or the model gets trained on teaching code, which
| naturally contains lots of comments.
|
| The dev is just too lazy to include them anymore, whereas
| the model doesn't really need to be lazy, being paid by
| the token.
| nostromo wrote:
| LLMs are also good at commenting on existing code.
|
| It's trivial to ask Claude via Cursor to add comments to
| illustrate how some code works. I've found this helpful
| with uncommented code I'm trying to follow.
|
| I haven't seen it hallucinate an incorrect comment yet, but
| sometimes it will add a TODO comment that a section should
| be made more clear. (Rude... haha)
| pastage wrote:
| I have seldom seen insightful comments from LLMs. They
| are usually a bit better than "comment what the line
| does" -- useful for getting a hint about undocumented
| code, but not by much. My experience is limited, but what
| I have seen I do agree with: as long as you keep to the
| beaten path it is OK. Comments are not one of those things.
| cztomsik wrote:
| More tokens -> more compute involved. Attention-based
| models work by attending every token to every other, so
| more tokens means not only having more time to "think" but
| also being able to think "better". That is also at least
| part of the reason why o1/o3/R1 can sometimes solve what
| other LLMs could not.
|
| Anyway, I don't think any of the current LLMs are really
| good for coding. What it's good at is copy-pasting (with
| some minor changes) from the massive code corpus it has
| been pre-trained on. For example, give it some Zig code and
| it's straight-up unable to solve even basic tasks. Same if
| you give it a really unique task, or if you simply ask for
| potential improvements of your existing code. Very, very
| bad results, no signs of out-of-box thinking whatsoever.
|
| BTW: I think what people are missing is that LLMs are
| really great at language modeling. I had great results, and
| boosts in productivity, just by being able to prepare the
| task specification, and do quick changes in that really
| easily. Once I have a good understanding of the problem, I
| can usually implement everything quickly, and do it in much
| much better way than any LLM can currently do.
| Workaccount2 wrote:
| I have tried getting Gemini 2.5 to output
| "token-efficient" code, i.e. no comments, keep variables
| to 1 or 2 letters, try to keep code as condensed as
| possible.
|
| It didn't work out that great. I think that all the
| context in the verbose coding it does actually helps it
| to write better code. Shedding context to free up tokens
| isn't so straightforward.
| dkersten wrote:
| What's worse, I get a lot of comments left saying what the
| AI did, not what the code does or why. Eg "moved this from
| file xy", "code deleted because we have abc", etc.
| Completely useless stuff that should be communicated in the
| chat window, not in the code.
| FeepingCreature wrote:
| > there is a lot of obviously useful abstraction being
| missed, wasting lines of code that will all need to be
| maintained.
|
| This is a human sentiment because we can fairly easily pick
| up abstractions during reading. AIs have a much harder time
| with this - they can do it, but it takes up very limited
| cognitive resources. In contrast, rewriting the entire
| software for a change is cheap and easy. So to a point, flat
| and redundant code is actually beneficial for a LLM.
|
| Remember, the code is written primarily for AIs to read and
| only incidentally for humans to execute :)
| jstummbillig wrote:
| > The scary thing is: I have seen professional human
| developers write worse code.
|
| That's not the scary part. It's the honest part. Yes, we
| all have (vague) ideas of what good code looks like, and
| we might know it when we see it, but we also know what
| reality looks like.
|
| I find the standard to which we hold AI in that regard
| slightly puzzling. If I can get the same meh-ish code for way
| less money and way less time, that's a stark improvement. If
| the premise is now "no, it also has to be something that I
| recognize as really good / excellent", then at least let us
| recognize that we are past the question of whether it can
| produce useful code.
| necovek wrote:
| I do believe it's amazing what we can build with AI tools
| today.
|
| But whenever someone advertises how an expert will benefit
| from it yet they end up with crap, it's a different
| discussion.
|
| As an expert, I want AI to help me produce code of similar
| quality faster. Anyone can find a cheaper engineer (maybe
| five of them?) that can produce 5-10x the code I need at
| much worse quality.
|
| I will sometimes produce crappy code when I lack the time
| to produce higher quality code: can AI step in and make me
| always produce high quality code?
|
| That's a marked improvement I would sign up for, and some
| seem to tout, yet I have never seen it play out.
|
| In a sense, the world is already full of crappy code used
| to build crappy products: I never felt we were lacking in
| that department.
|
| And I can't really rejoice if we end up with even more of
| it :)
| merrywhether wrote:
| I think there's a difference in that this is about as good
| as LLM code is going to get in terms of code quality (as
| opposed to capability a la agentic functionality). LLM
| output can only be as good as its training data, and the
| proliferation of public LLM-generated code will only serve
| as a further anchor in future training. Humans on the other
| hand ideally will learn and improve with each code review
| and if they don't want to you can replace them (to put it
| harshly).
| nottorp wrote:
| Here's a real-life example from today:
|
| I asked $random_llm to give me code to recursively scan a
| directory and give me a list of file names relative to the top
| directory scanned and their sizes.
|
| It gave me working code. On my test data directory it needed
| ... 6.8 seconds.
|
| After 5 min of eliminating obvious inefficiencies the new code
| needed ... 1.4 seconds. And i didn't even read the docs for the
| used functions yet, just changed what seemed to generate too
| many filesystem calls for each file.
| bongodongobob wrote:
| Nice, sounds like it saved you some time.
| nottorp wrote:
| You "AI" enthusiasts always try to find a positive spin :)
|
| What if I had trusted the code? It was working after all.
|
| I'm guessing that if i asked for string manipulation code
| it would have done something worth posting on accidentally
| quadratic.
| noisy_boy wrote:
| Depends on how toxic the culture is in your workplace.
| This could have been an opportunity to "work" on another
| JIRA task showing 600% improvement over AI generated
| code.
| nottorp wrote:
| I'll write that down for reference in case I do ever join
| an organization like that in the future, thanks.
|
| 600% improvement is worth what, 3 days of billable work
| if it lasts 5 minutes?
| noisy_boy wrote:
| Series of such "improvements" could be fame and fortune
| in your team/group/vertical. In such places, the guy who
| toots the loudest wins the most.
| nottorp wrote:
| So THAT's why large organizations want "AI".
|
| In such a place I should be a very loud advocate of LLMs,
| use them to generate 100% of my output for new tasks...
|
| ... and then "improve performance" by simply fixing all
| the obvious inefficiencies and brag about the 400%
| speedups.
|
| Hmm. Next step: instruct the "AI" to use bubblesort.
| bongodongobob wrote:
| Why would you blindly trust any code? Did you tell it to
| optimize for speed? If not, why are you surprised it
| didn't?
| johnnyanmac wrote:
| >Why would you blindly trust any code?
|
| because that is what the market is trying to sell?
| nottorp wrote:
| So, most low level functions that enumerate the files in
| a directory return a structure that contains the file
| data from each file. Including size. You already have it
| in memory.
|
| Your brilliant AI calls another low-level function to get
| the file size _by file name_ (it also did worse stuff,
| but let's not go into details).
|
| Do you call reading the file size from the in memory
| structure that you already have a speed optimization? I
| call it common sense.
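The pattern being described -- reading each file's size from the directory entry the OS already returned, instead of issuing a separate per-name lookup -- can be sketched in Python for illustration (the thread doesn't show the original code, and the later example was C++):

```python
import os

def scan_sizes(top: str) -> list[tuple[str, int]]:
    """Recursively collect (path relative to top, size) pairs.

    os.scandir yields DirEntry objects; DirEntry.stat() reuses
    metadata the OS already produced while listing the directory
    (no extra lookup by name on Windows, one cached stat per
    entry on POSIX). Symlinks are not followed.
    """
    results: list[tuple[str, int]] = []
    stack = [top]
    while stack:
        current = stack.pop()
        with os.scandir(current) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    results.append(
                        (os.path.relpath(entry.path, top),
                         entry.stat(follow_symlinks=False).st_size)
                    )
    return results
```

The contrast is with code that lists names and then calls something like `os.path.getsize(name)` per file, which forces an extra filesystem access for every entry.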
| miningape wrote:
| Yep, exactly. LLMs blunder over the simplest nonsense and
| just leave a mess in their wake. This isn't a mistake you
| could make if you actually understood what the library is
| doing / is returning.
|
| It's so funny how these AI bros make excuse after excuse
| for glaring issues rather than just accept AI doesn't
| actually understand what it's doing (not even considering
| it's faster to just write good quality code on the first
| try).
| nottorp wrote:
| The "AI" is useful for one thing. I had no idea what
| functions to use to scan a directory in a native C++
| Windows application, nor that C++17 introduced a
| filesystem abstraction. They all work the same (needless
| fs access should be avoided no matter the OS), but it did
| give me the names*.
|
| Stuff that Google search from 10 years ago would have
| done without pretending it's "AI". But not Google search
| from this year.
|
| * It wasn't able to simply list the fields of the
| returned structure that contained a directory entry. But
| since it gave me the name, I was able to look it up via
| plain search.
| miningape wrote:
| Yeah, I find myself doing that too: use the AI to generate
| a bunch of names I can put into Google to find a good
| answer. I also think that if Google hadn't gotten as sh*t
| as it has, AI wouldn't be nearly as useful to most people.
| bdhcuidbebe wrote:
| > It's so funny how these AI bros make excuse after
| excuse for glaring issues rather than just accept AI
| doesn't actually understand what it's doing
|
| It's less funny when you realize how few of these people
| even have experience reading and writing code.
|
| They just see code on screen, trust the machine and
| proclaim victory.
| FeepingCreature wrote:
| > What if I had trusted the code? It was working after
| all.
|
| Then you would have been done five minutes earlier? I
| mean, this sort of reads like a parody of
| microoptimization.
| nottorp wrote:
| No, it reads like "your precious AI generates first year
| junior code". Like the original article.
| FeepingCreature wrote:
| There is nothing wrong with first year junior code that
| does the job.
| nottorp wrote:
| Does not. Do you know my requirements? This is actually
| in a time critical path.
| FeepingCreature wrote:
| Well, that wasn't in your comment. :P
|
| If you hadn't told me that I would also not have bothered
| optimizing syscalls.
|
| Did you tell the AI the profiler results and ask for ways
| to make it faster?
| nottorp wrote:
| > Well, that wasn't in your comment. :P
|
| Acting like a LLM now :P
|
| > Did you tell the AI the profiler results and ask for
| ways to make it faster?
|
| Looking for ways to turn a 10 minute job into a couple
| days?
| FeepingCreature wrote:
| AI actually doesn't really work for the "a couple days"
| scale yet. As a heavy AI user, this sort of iterative
| correction would usually be priced in in a 10-minute AI
| session. That said-
|
| > Acting like a LLM now :P
|
| Hey, if we're going to be like that, it sure sounds like
| you gave the employee an incomplete spec so you could
| then blame it for failing. So... at least I'm not acting
| like a PM :P
| layoric wrote:
| Also, somewhat strangely, I've found Python output has
| remained bad, especially for me with dataframe tasks/data
| analysis. For remembering matplotlib syntax I still find
| most of them pretty good, but for handling dataframes,
| very bad and extremely counterproductive.
|
| That said, for typed languages like TypeScript and C#,
| they have gotten very good. I suspect this might be
| related to the semantic information that can be found in
| typed languages but is hard to follow in unstructured
| blobs like dataframes, and is therefore not well
| reproduced by LLMs.
| datadrivenangel wrote:
| Spark especially is brutal for some reason. Even
| Databricks' AI is bad at Spark, which is very funny.
|
| It's probably because Spark is so backwards-compatible
| with pandas, but not fully.
| gerdesj wrote:
| My current favourite LLM wankery example is this beauty:
| https://blog.fahadusman.com/proxmox-replacing-failed-drive-i...
|
| Note how it has invented the faster parameter for the zpool
| command. It is possible that the blog writer hallucinated a
| faster parameter themselves without needing a LLM - who knows.
|
| I think all developers should add a faster parameter to all
| commands to make them run faster. Perhaps an LLM could
| create the faster code.
|
| I predict an increase of man page reading, and better quality
| documentation at authoritative sources. We will also improve
| our skills at finding auth sources of docs. My uBlacklist is
| getting quite long.
| Henchman21 wrote:
| What makes you think this was created by an LLM?
|
| I suspect they might actually have a pool named _faster_ --
| I know I've named pools similarly in the past. This is why I
| now name my pools after characters from the Matrix, as is
| tradition.
| taurath wrote:
| This really gets at an acceleration of enshittification.
| If you can't tell it's an LLM, and there's nobody to
| verify the information, humanity is architecting errors
| and mindfucks into everything. All of the markers of what
| is trustworthy have been co-opted by untrustworthy
| machines, so all of the ways we'd previously
| differentiated actors have stopped working. It feels like
| we're losing truth as rapidly as LLMs can generate
| mistakes. We've built a scoundrel's paradise.
|
| How useful is a library of knowledge when n% of the
| information is suspect? We're all about to find out.
| Henchman21 wrote:
| You know, things looked off to me, but thinking it was
| the output of an LLM just didn't seem obvious -- even
| though that was the claim! I feel ill-equipped to deal
| with this, and as the enshittification has progressed
| I've found myself using "the web" less and less. At this
| point, I'm not sure there's much left I value on the web.
| I wish the enshittification wasn't seemingly pervasive in
| life.
| taurath wrote:
| I believe in people, but I start to think that scrolling
| is the Fox News or AM radio of a new generation, it just
| happens to be the backbone of the economy because
| automation is so much cheaper than people.
| lloeki wrote:
| The pool is named _backups_ according to _zpool status_ and
| the paragraph right after.
|
| But then again, the old ID doesn't match between the two
| commands.
| Henchman21 wrote:
| Yep, that's the stuff I noticed was off too.
| rotis wrote:
| How can this article have been written by an LLM? Its date
| is November 2021. Not judging the article as a whole, but
| the command you pointed out seems to be correct: "faster"
| is the name of the pool.
| victorbjorklund wrote:
| I used LLMs for content generation in July 2021. Of course,
| that was when LLMs were pretty bad.
| selcuka wrote:
| GPT-2 was released in 2019. ChatGPT wasn't the first
| publicly available LLM.
| bdhcuidbebe wrote:
| There was a lot going on in the years before ChatGPT. Text
| generation was going strong with interactive fiction
| before anyone was talking about OpenAI.
| gruez wrote:
| >Its date is November 2021
|
| The date can be spoofed. It first showed up on archive.org
| in December 2022, and there are no captures of the site
| before then, so I'm inclined to believe the dates are
| spoofed.
| byproxy wrote:
| As an actually unseasoned Python developer, would you be so
| kind as to explain why the problems you see are problems and
| their alternatives? Particularly the first two you note.
| saila wrote:
| The call to _logging.basicConfig_ happens at import time,
| which could cause issues in certain scenarios. For a
| one-off script, it's probably fine, but for a production
| app, you'd probably want to set up logging during app
| startup from whatever your main entry point is.
|
| The Python standard library has a _configparser_ module,
| which should be used instead of custom code. It's safer
| and easier than manual parsing. The standard library also
| has a _tomllib_ module, which would be an even better
| option IMO.
| cinntaile wrote:
| Regarding your first paragraph, we still don't understand
| what the issue actually is.
| Perizors wrote:
| How do you properly configure a logger in an application
| like that?
| rcfox wrote:
| Usually you would do it in your main function, or a code
| path starting from there. Executing code with non-local
| side effects during import is generally frowned upon.
| Maybe it's fine for a project-local module that won't be
| shared, but it's a bad habit and can make bugs hard to
| track down.
| necovek wrote:
| Just imagine a callsite that configured a logger in another
| way, and then imports the utils module for a single function:
| its configuration getting overridden by the one in utils.
|
| There are plenty of ways to structure code so this does not
| happen, but simply "do not do anything at the top module
| level" will ensure you don't hit these issues.
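A minimal sketch of the structure being recommended here: getting a logger at module level is side-effect free, while `basicConfig` runs only under the entry point, so importing the module never overrides a caller's logging setup:

```python
import logging

# Module level: creating a logger is cheap and has no
# side effects on global logging configuration.
logger = logging.getLogger(__name__)

def do_work() -> None:
    logger.info("working")

def main() -> None:
    # Configure logging once, at the application entry point.
    # Importing this module never reaches this call, so it
    # cannot clobber another caller's configuration.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    do_work()

if __name__ == "__main__":
    main()
```

Note that `basicConfig` is itself a no-op if the root logger already has handlers, which is another reason it belongs in exactly one place.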
| abid786 wrote:
| Doesn't load_json throw if the file doesn't exist?
| isoprophlex wrote:
| Yes, but then why do the check in the first place?
| NewsaHackO wrote:
| >to a raciness in load_json where it's checked for file
| existence with an if and then carrying on as if the file is
| certainly there...
|
| Explain the issue with load_json to me more. From my reading it
| checks if the file exists, then raises an error if it does not.
| How is that carrying on as if the file is certainly there?
| selcuka wrote:
| There is a small amount of time between the `if` and the
| `with` where another process can delete the file, hence
| causing a race condition. Attempting to open the file and
| catching any exceptions raised is generally safer.
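| A hedged sketch of that shape (the article's actual load_json
| isn't reproduced here; this is just the EAFP pattern):

```python
import json
import logging

logger = logging.getLogger(__name__)


def load_json(path: str):
    # EAFP: attempt the open directly; even if another process
    # deletes the file after any pre-check, we still land here
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        logger.error("state file missing: %s", path)
        return None
```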
| taberiand wrote:
| Won't it throw the same FileNotFoundError in that case? The
| real issue, I suppose, is bothering to check if it exists in
| the first place.
| selcuka wrote:
| Yes, but it won't log the error, which is clearly the
| intention of the first check.
| NewsaHackO wrote:
| OK, that does make sense. Thanks!
| ilrwbwrkhv wrote:
| Yup, this tracks with what I have seen as well. Most devs who
| use this daily are junior devs or JavaScript devs, both of
| whom write sloppy, questionable code.
| nunez wrote:
| Makes sense given that so much of the training data for so many
| of these tools are trained on hello world examples where this
| kind of configuration is okay. Not like this will matter in a
| world where there are no juniors to replace aged-out seniors
| because AI was "good enough"...
| johnfn wrote:
| Not all code needs to be written at a high level of quality. A
| good deal of code just needs to work. Shell scripts, one-offs,
| linter rules, etc.
| jayd16 wrote:
| It'll be really interesting to see if the tech advances fast
| enough that future AI can deal with the tech debt of present
| day AI or if we'll see a generational die off of
| apps/companies.
| bdhcuidbebe wrote:
| I expect some of the big companies that went all in on
| relying on AI to fall in the coming years.
|
| It will take some time tho, as decision makers will struggle
| to make up reasons why no one on the payroll is able to fix
| production.
| Aperocky wrote:
| Having seen my fair share of those, they tend to work either
| until they don't, or you need to somehow change it.
| jjice wrote:
| You're objectively correct in a business context, which is
| what most software is for. For me, seeing AI slop code more
| and more is just sad from a craft perspective.
|
| Software that's well designed and architected is a pleasure
| to read and write, even if a lower quality version would get
| the job done. I'm watching one of the things I love most in
| the world become more automated and having the craftsmanship
| stripped out of it. That's a bit over dramatic from me, but
| it's been sad to watch.
| deergomoo wrote:
| I feel exactly the same way, it's profoundly depressing.
| hjnilsson wrote:
| It's probably the same way monks copying books felt when
| the printing press came along. "Look at this mechanical,
| low-quality copy. It lacks all finesse and flourish of the
| pen!"
|
| I agree with you that it is sad. And what is especially sad
| is that the result will probably be lower quality overall,
| but much cheaper. It's the inevitable result of automation.
| cess11 wrote:
| wrap_long_lines shares those characteristics:
|
| https://github.com/dx-tooling/platform-problem-monitoring-co...
|
| Where things are placed in the project seems rather ad hoc
| too: a "put everything in the same place" kind of
| architecture. A better strategy might be to separate out the
| I and the O of IO. If someone later wants SMS or group chat
| notifications, instead of shifting the numbers in the
| step11_ filenames onwards, one could add a directory in the
| O part and hook it into an actual application core.
| dheera wrote:
| I disagree, I think it's absolutely astounding that they've
gotten _this_ good in such a short time, and I think we'll get
| better models in the near future.
|
| By the way, prompting models properly helps a lot for
| generating good code. They get lazy if you don't explicitly ask
| for well-written code (or put that in the system prompt).
|
| It also helps immensely to have two contexts, one that
| generates the code and one that reviews it (and has a different
| system prompt).
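| The two-context pattern is roughly this (chat() is a
| placeholder for whatever LLM client you use; only the
| writer/reviewer loop itself is the point):

```python
def chat(system_prompt: str, user_prompt: str) -> str:
    # stand-in for a real LLM API call; swap in your provider here
    return f"[{system_prompt.split('.')[0]}] {user_prompt[:40]}"


WRITER = "You are a senior engineer. Write clean, well-structured code."
REVIEWER = "You are a strict reviewer. List concrete bugs and smells."


def generate_with_review(task: str, rounds: int = 2) -> str:
    code = chat(WRITER, task)
    for _ in range(rounds):
        # a separate context reviews, so the writer can't grade itself
        review = chat(REVIEWER, "Review this code:\n" + code)
        code = chat(WRITER, f"Task: {task}\nRevise to address:\n{review}")
    return code
```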
| henrikschroder wrote:
| > They get lazy if you don't explicitly ask for well-written
| code (or put that in the system prompt).
|
| This is insane on so many levels.
| globnomulous wrote:
| Computer, enhance 15 to 23.
| ManuelKiessling wrote:
| Thanks for looking into it.
|
| While I would have hoped for a better result, I'm not
| surprised. In this particular case, I really didn't care about
| the code at all; I cared about the end result at runtime, that
| is, can I create a working, stable solution that solves my
| problem, in a tech stack I'm not familiar with?
|
| (While still taking care of well-structured requirements and
| guard rails -- not to guarantee a specific level of code
| quality per se, but to ensure that the AI works towards my
| goals without the need to intervene as much as possible).
|
| I will spin up another session where I ask it to improve the
| implementation, and report back.
| theteapot wrote:
| > to a raciness in load_json where it's checked for file
| existence with an if and then carrying on as if the file is
| certainly there...
|
| It's not a race. It's just redundant. If the file does not
| exist at the time you actually try to access it you get the
| same error with slightly better error message.
| gessha wrote:
| > This is especially noteworthy because I don't actually know
| Python.
|
| > However, my broad understanding of software architecture,
| engineering best practices, system operations, and what makes
| for excellent software projects made this development process
| remarkably smooth.
|
| If the seniors are going to write this sort of Python code and
| then talk about how knowledge and experience made it smooth or
| whatever, might as well hire a junior and let them learn
| through trials and tribulations.
| raxxorraxor wrote:
| In my opinion this isn't even too relevant. I am no python
| expert but I believe defining a logger at the top for the
| average one file python script is perfectly adequate or even
| very sensible in many scenarios. Depends on what you expect the
| code to do. Ok, the file is named utils.py...
|
| Worse by far is still AI's limited ability to really
| integrate different problems and combine them into a
| solution. It also seems to depend on the language: in my
| opinion, Python and JS results in particular are often very
| mixed, while other languages with presumably a smaller
| training set might even fare better. However, JS often seems
| fine with asynchronous operations like that file check.
|
| Perhaps really vetting a training set would improve AIs, but
| building something like that would be quite work-intensive.
| It would require a lot of senior devs, who are hard to come
| by. And then they would need to agree on code quality, which
| might be impossible.
| globnomulous wrote:
| Thanks for doing the footwork. These TED talk blog posts always
| stink of phony-baloney nonsense.
| inerte wrote:
| 100%!
|
| But the alternative would be the tool doesn't get built because
| the author doesn't know enough Python to even produce crappy
| code, or doesn't have the money to hire an awesome Python coder
| to do that for them.
| spoonfeeder006 wrote:
| Perhaps that's partly because 90% of the training data used to
| teach LLMs to code is made by junior engineers?
| tracker1 wrote:
| I can say it isn't any better for JS/Node/Deno/Bun projects
| that I've seen or tried. About the only case it's been helpful
| (GitHub CoPilot) is in creating boilerplate .sql files for
| schema creation, and in that it became kind of auto-complete on
| overdrive. It still made basic missteps though.
| jonnycoder wrote:
| Very good concrete examples. AI is moving very fast so it can
| become overwhelming, but what has held true is focusing on
| writing thorough prompts to get the results you want.
|
| Senior developers have the experience to think through and plan
| out a new application for an AI to write. Unfortunately a lot of
| us are bogged down by working our day jobs, but we need to
| dedicate time to create our own apps with AI.
|
| Building a personal brand is never more important, so I envision
| a future where dev's have a personal website with thumbnail links
| (like a fancy youtube thumbnail) to all the small apps they have
| built. Dozens of them, maybe hundreds, all with beautiful or
| modern UIs. The prompt they used can be the new form of blog
| articles. At least that's what I plan to do.
| owebmaster wrote:
| > Building a personal brand is never more important
|
| the low-hanging fruit is to create content/apps to help
| developers create their personal brands through content/apps.
| motorest wrote:
| What a high quality article, packed with gems. What a treat.
| datavirtue wrote:
| That free GitHub Copilot though. Microsoft is a relentless drug
| dealer. If you haven't tried Copilot Edits yet, hold on to your
| hat. I started using it in a clean Express project and a Vue3
| project in VS Code. Basically flawless edits from prompt over
| multiple files, new files...the works. Easy.
| bsoles wrote:
| But can it center a div?
| rs186 wrote:
| That prompt looks horrifying.
|
| I am not going to spend half an hour coming up with that prompt,
| tweaking it, and then spend many hours (on the optimistic side)
| to track down all the hallucinated code and hidden bugs. Have
| been there once, never going to do that again.
|
| I'd rather do it myself to have peace of mind.
| skydhash wrote:
| I wonder how much time it would take with some samples from
| GitHub, and various documentation about Python lying around
| (languages, cheatsheets, libraries)...
| miningape wrote:
| 10 minutes tops, once you have the idea and you've thought it
| through a bit just spamming out the code isn't that hard nor
| does it take that long. If there's anything surprising (e.g.
| function call returns something a bit different than
| expected) it's fast enough to just read the docs and change
| your mental model slightly.
| ramesh31 wrote:
| This maps pretty well to my experience.
|
| Other devs will say things like "AI is just a stupid glorified
| autocomplete, it will never be able to handle my Very Special
| Unique Codebase. I even spent 20 minutes one time trying out
| Cursor, and it just failed"
|
| Nope, you're just not that good obviously. I am literally 10x
| more productive at this point. Sprint goals have become single
| afternoons. If you are not tuned in to what's going on here and
| embracing it, you are going to be completely obsolete in the next
| 6 months unless you are some extremely niche high level expert.
| It won't be a dramatic moment where anyone gets "fired for AI".
| Orgs will just simply not replace people through attrition when
| they see productivity staying the same (or even increasing) as
| headcount goes down.
| __jochen__ wrote:
| That's the problem. The new norm will be 10x of pre-AI
| productivity; nobody will be able to justify hand-writing code.
| And until the quality bar of LLM's/their successors get much
| better (see e.g. comments above looking at the details in the
| examples given), you'll get accumulation of errors that are
| higher than what decent programmers get. With higher LOC and
| more uninspected complexity, you'll get significantly lower
| quality overall. The coming wave of AI-coded bugs will be fun
| for all. GOTO FAIL;
| cglace wrote:
| After spending a week coding exclusively with AI assistants,
| I got functional results but was alarmed by the code quality.
| I discovered that I didn't actually save much time, and the
| generated code was so complex and unfamiliar that I was
| scared to modify it. I still use Copilot and Claude and would
| say I'm able to work through problems 2-3x faster than I
| would be without AI but I wouldn't say I get a 10x
| improvement.
|
| My projects are much more complex than standard CRUD
| applications. If you're building simple back-office CRUD
| apps, you might see a 10x productivity improvement with AI,
| but that hasn't been my experience with more complex work.
| williamdclt wrote:
| Giving your anecdotal experience is only useful if you include
| anecdotal context: seniority, years of experience, technology,
| project size, sprint goal complexity...
| ManuelKiessling wrote:
| You are not wrong! I will add those infos.
| Denzel wrote:
| Can you talk through specifically what sprint goals you've
| completed in an afternoon? Hopefully multiple examples.
|
| Grounding these conversations in an actual reality affords more
| context for people to evaluate your claims. Otherwise it's just
| "trust me bro".
|
| And I say this as a Senior SWE who's successfully worked with
| ChatGPT to code up some prototype stuff, but haven't been able
| to dedicate 100+ hours to work through all the minutia of
| learning how to drive daily with it.
| jonnycoder wrote:
| I think experiences vary. AI can work well with greenfield
| projects, small features, and helping solve annoying
| problems. I've tried using it on a large Python Django
| codebase and it works really well if I ask for help with a
| particular function AND I give it an example to model after
| for code consistency.
|
| But I have also spent hours asking Claude and ChatGPT for
| help trying to solve several annoying Django problems and I
| have reached the point multiple times where they circle back
| and give me answers that did not previously work in the same
| context window. Eventually when I figure out the issue, I
| have fun and ask it "well does it not work as expected
| because the existing code chained multiple filter calls in
| django?" and all of a sudden the AI knows what is wrong! To
| be fair, there was only one sentence in the django
| documentation that mentions not chaining filter calls on many
| to many relationships.
| carpo wrote:
| If you do want to get more into it, I'd suggest something
| that plugs into your IDE instead of Copy/Paste with ChatGPT.
| Try Aider or Roo code. I've only used Aider, and run it in
| the VS terminal. It's much nicer to be able to leave comments
| to the AI and have it make the changes to discrete parts of
| the app.
|
| I'm not the OP, but on your other point about completing
| sprint goals fast - I'm building a video library app for
| myself, and wanted to add tagging of videos. I was out
| dropping the kids at classes and waiting for them. Had 20
| minutes and said to Aider/Claude - "Give me an implementation
| for tagging videos." It came back with the changes it would
| make across multiple files: Creating a new model, a service,
| configuring the DI container, updating the DB context,
| updating the UI to add tags to videos and created a basic
| search form to click on tags and filter the videos. I hit
| build before the kids had finished and it all worked. Later,
| I found a small bug - but it saved me a fair bit of time.
| I've never been a fast coder - I stare at the screen and
| think way too much (function and variable names are my doom
| ... and the hardest problem in programming, and AI fixes this
| for me).
|
| Some developers may be able to do all this in 20 minutes, but
| I know that I never could have. I've programmed for 25 years
| across many languages and frameworks, and know my
| limitations. A terrible memory is one of them. I would
| normally spend a good chunk of time on StackOverflow and the
| documentation sites for whatever frameworks/libraries I'm
| using. The AI has reduced that reliance and keeps me in the
| zone for longer.
| skydhash wrote:
| At all the jobs I had, the valuable stuff was shipped features.
| The baseline was for them to work well and to be released on
| time. The struggle was never writing the code, it was to
| clarify specifications. By comparison, learning libraries and
| languages was fun.
|
| I don't really need AI to write code for me, because that's the
| easy part. The aspect that it needs to be good at is helping me
| ship features that work. And to this date, there's never been
| a compelling showcase for that one.
| Peritract wrote:
| Have you considered that the opposite explanation might be true
| instead?
|
| It could be that other developers are not benefitting from AI
| as much as you because they don't understand it.
|
| It could also be that you are benefitting more than them
| because you're less skilled than them, and AI can fill in your
| gaps but not theirs.
| conductr wrote:
| As a long time hobby coder, like 25 years and I think I'm pretty
| good(?), this whole LLM /vibecoding thing has zapped my
| creativity the past year or so. I like the craft of making
| things. I used tools I enjoy working with and learn new ones all
| the time (never got on the JS/react train). Sometimes I have an
| entrepreneur bug and want to create a marketable solution, but I
| often just like to build. I'm also the kind of guy who has a
| shop he built himself, builds his own patio deck, does home
| remodeling, tinkers with robotics, etc. I just like to be a
| maker following my own creative pursuits.
|
| All said, it's hard on me knowing it's possible to use an LLM
| to spit out a crappy but functional version of whatever I've
| dreamt up, without the satisfaction of building it. Yet it
| also now seems demotivating to spend the time crafting it when
| I know I could use an LLM to do most of it. So, I'm in a mental
| quagmire, this past year has been the first year since at least
| 2000 that I haven't built anything significant in scale. It's
| indirectly ruining the fun for me for some reason. Kind of just
| venting but curious if anyone else feels this way too?
| fragmede wrote:
| Fascinating. it's gone the other way for me. _because_ I can
| now whip up a serious contender to any SaaS business in a week,
it's made everything more fun, not less.
| cglace wrote:
| So you can create a serious contender to Salesforce or Zapier
| in a week?
| fragmede wrote:
| like an Eventbrite or a shopmonkey. but yeah, you don't
| think you could? Salesforce is a whole morass. not every
| customer uses every corner of it, and Salesforce will
| nickel and dime you with their consultants and add ons and
| plugins. if you can be more specific as to which bit of
| Salesforce you want to provide to a client we can go deep.
| caseyohara wrote:
| But you said "I can now whip up a serious contender to
_any_ SaaS business in a week".
|
| Any SaaS business. In a week. And to be a "serious
| contender", you have to have feature parity. Yet now
| you're shifting the goalposts.
|
| What's stopping you? There are 38 weeks left in 2025.
| Please build "serious contenders" for each of the top 38
| most popular SaaS products before the end of the year.
| Surely you will be the most successful programmer to have
| ever lived.
| fragmede wrote:
| The rest of the business is the issue. I can whitelabel a
| Spotify clone but licensing rights and all that business
| stuff is outside my wheelhouse. An app that serves mp3s
| and has a bunch of other buttons? yeah, done. "shifting
| goalposts?" no, we're having a conversation, I'm not
| being deposed under a subpoena.
|
| My claim is that in a week you could build a thing that
| people want to use, as long as you can sell it, that's
| competitive with existing options for a given client.
| Salesforce is a CRM with walled gardens after walled
| garden. access to each of which costs extra, of course.
| they happened to be in the right place at the right time,
| with the right bunch of assholes.
|
| A serious contender doesn't have to start with
| everything. It starts by doing the core thing better--
| cleaner UX, clearer value, easier to extend. That's
| enough to matter. That's enough to grow.
|
| I'm not claiming to replace decades overnight. But
| momentum, clarity, and intent go a long way. Especially
| when you're not trying to be everything to everyone--just
| the right thing for the right people.
|
| as for Spotify: https://bit.ly/samson_music
| caseyohara wrote:
| Sure, yeah, go ahead, do it. Seriously! Build a SaaS
| business in a week and displace an existing business.
| Please report back with your findings.
| fragmede wrote:
| As much as I'd like to pretend otherwise, I'm just a
| programmer. Say I build, I dunno, an Eventbrite clone.
| Okay, cool I've got some code running on Vercel. What do
I do next? I'm not about to quit my day job to try to pay
my mortgage on hopes and dreams, and between working my day
job and having a life outside of that, there just aren't
enough hours left in the day to also
| work on this hypothetical EventBrite clone. And there are
| already so many competitors of them out there, what's one
| more? What's my "in" to the events industry that would
| have me succeed over any of their numerous existing
competitors? Sure, thanks to LLMs I can vibe code some
| CRUD app, but my point is there's so much I don't know
| that I don't even know what I don't know about business
| in order to be successful. So realistically it's just a
| fun hobby, like how some people sew sweaters.
| petersellers wrote:
| > as for Spotify: https://bit.ly/samson_music
|
| I'm not sure what you are trying to say here - that this
| website is comparable to Spotify? Even if you are talking
| about just the "core experience", this example supports
| the opposite argument that you are trying to make.
| fragmede wrote:
| The way I see it, the core user experience is that the
| user listens to music. There's playlist management on top
| of that and some other bits, sure, but I really don't see
| it as being that difficult to build those pieces. This is
| a no code widget I had lying around with a track that was
| produced last night because I kept asking the producer
| about a new release. I linked it because it was top of
| mind. It allows the user to listen to music, which I see
| as the core of what Spotify offers its users.
|
| Spotify has the licensing rights to songs and I don't
| have the business acumen to go about getting those
| rights, so I guess I could make Pirate Spotify and get
| sued by the labels for copyright infringement, but that
| would just be a bunch of grief for me which would be not
| very fun and why would I want to screw artists out of
| getting paid to begin with?
| dijksterhuis wrote:
| > The way I see it
|
| i think ive detected the root cause of your problem.
|
| and, funnily enough, it goes a long way to explaining the
| experiences of some other commentators in this thread on
| "vibe coding competitive SaaS products".
| cess11 wrote:
| Salesforce is and pretty much always has been a set of
| code generation platforms. If you can produce a decent
| code generation platform, do it. It's one of the most
| sure ways to making money from software since it allows
| you to deploy systems and outsource a large portion of
| design to your customers.
|
| Spotify is not the audio player widget in some user
| interface. It started off as a Torrent-like P2P system
| for file distribution on top of a very large search index
| and file storage. That's the minimum you'd build for a
| "whitelabel [...] Spotify clone". Since then they've
| added massive, sophisticated systems for user monitoring
| and prediction, ad distribution, abuse and fraud
| detection, and so on.
|
| Use that code generation platform to build a product off
| any combination of two of the larger subsystems at
| Spotify and you're set for retirement if you only grab a
| reasonable salesperson and an accountant off the street.
| Robust file distribution with robust abuse detection or
| robust ad distribution or robust user prediction would be
| that valuable in many business sectors.
|
| If building and maintaining actually is that effortless
| for you, show some evidence.
| fragmede wrote:
| > Since then they've added massive, sophisticated systems
| for user monitoring and prediction, ad distribution,
| abuse and fraud detection, and so on. Use that code
| generation platform to build a product off any
| combination of two of the larger subsystems at Spotify
|
| I'm listening. I fully admit that I was looking at
| Spotify as a user and thus only as a music playing widget
| so I'd love to hear more about this side of things. What
| is user prediction?
| cess11 wrote:
| They spend a lot of effort trying to get good at
| predicting user preferences, through modeling of
| personality, behaviour patterns and more.
|
| You can find out quite a lot in their blogs and
| publications:
|
| https://research.atspotify.com/2022/02/modeling-users-
| accord...
|
| https://research.atspotify.com/user-modeling/
| conductr wrote:
| Yeah, I see that perspective, but I guess my thought process
| is "what's the point, if everyone else can now do the same?"
|
| I had long ago culled many of those ideas based on my ability
| to execute the marketing plan or the "do I really even want
| to run that kind of business?" test. I already knew I could
| build whatever I wanted to exist, so my days of pumping out
| side projects ended long ago and I became more selective with
| my time.
| fragmede wrote:
| which turns it into passion. the side project that I'm only
| interested in because it could maybe make some money? eh.
|
| a project in a niche where I live and breath the fumes off
| the work and I can help the whole ecosystem with their
| workflow? sign me up!
| conductr wrote:
| Agree, I've been searching for the latter for a long time
| carpo wrote:
| I guess it depends why you're writing the code. I'm writing
| a local video library desktop app to categorise my home and
| work videos. I'm implementing only the features I need. No
| one else will use it, I'll have finished the first version
| after about 4 weeks of weekend and night coding, and it's
| got some pretty awesome features I never would have thought
| possible (for me). Without AI I probably never would have
| done this. I'm sold, even just for the reduction of
| friction in getting a project off the ground. The first 80%
| was 80% AI developed and the last 20% has flipped to 80%
| coded by me. Which is great, because this part is the meat
| of the app and where I want most input.
| Aurornis wrote:
| I followed a lot of Twitter people who were vibecoding their
| way to SaaS platforms because I thought it would be
| interesting to follow.
|
| So far none of them are having a great time after their
| initial enthusiasm. A lot of it is people discovering that
| there's far more to a business than whipping up a SaaS app
| that does something. I'm also seeing a big increase in
| venting about how their progress is slowing to a crawl as the
| codebase gets larger. It's interesting to see the complaints
| about losing days or weeks to bugs that the LLM introduced
| that they didn't understand.
|
| I still follow because it's interesting, but I'm starting to
| think 90% of the benefit is convincing people that it's going
| to be easy and therefore luring them into working on ideas
| they'd normally not want to start.
| fragmede wrote:
| absolutely! It turns out that the code is just this one
| little corner of the whole thing. A critical corner, but
| still just one piece of many.
| caseyohara wrote:
| > I can now whip up a serious contender to any SaaS business
| in a week
|
| This reminds me of the famous HN comment when Drew Houston
| first announced Dropbox here in 2007:
| https://news.ycombinator.com/item?id=9224
| fragmede wrote:
| you don't get to choose why you get Internet famous, it
| chooses you.
|
| thankfully, I'm not important enough for my comment to
| amount to the same thing.
| dmamills wrote:
| I can echo your sentiment. Art is the manifestation of
| creativity, and to create any good art you need to train in
| whatever medium you choose. For the decade I've been a
| professional programmer, I've always argued that writing code
| was a creative job.
|
| It's been depressing to listen to people pretend that LLM
| generated code is "the same thing". To trivialize the
| thoughtful lessons one has learned honing their craft. It's the
| same reason the Studio Ghibli AI image trend gives me the ick.
| bigpeopleareold wrote:
| Just think what the 19th century craftsmen were thinking! :D
| (i.e. they were right, but really good stuff is hard to make
| at scale)
| carpo wrote:
| I agree, but only to an extent. For me, the passion changed
| over time. I used to love getting an O'Reilly tome and
| learning something new, but now I don't really want to learn
| the latest UI framework, library/API or figure out how a
| client configures their DI container. If the AI can do most
| of that stuff, and I just leverage my knowledge of all the
| frameworks I've had to use, it's a huge timesaver and means I
| can work on more things at once. I want to work on the core
| solution, not the cruft that surrounds it.
|
| I agree though that the Studio Ghibli trend feels off. To me,
| art like this feels different to code. I know that's probably
| heresy around these parts of the internet, and I probably
| would have said something different 15-20 years ago. I _know_
| that coding is creative and fulfilling. I think I've just
| had the fun of coding beat out of me over 25 years :) AI
| seems to be helping bring the fun back.
| carpo wrote:
| I'm the complete opposite. After being burnt out and feeling an
| almost physical repulsion to starting anything new, using AI
| has renewed my passion. I've almost finished a side project I
| started 4 weeks ago and it's been awesome. Used AI from the
| beginning for a Desktop app with a framework I'd never heard of
| before and the learning curve is almost non-existent. To be
| able to get the boring things done in minutes is amazing.
| crm9125 wrote:
| Similar sentiment here. I taught myself python a decade ago
| after college, and used it in side projects, during my
| masters degree, in a few work projects. So it's been handy,
| but also required quite a bit of time and effort to learn.
|
| But I've been using Claude to help with all kinds of side
| projects. One recently was to help create and refine some
| python code to take the latest Wikipedia zipped XML file and
| transform/load it locally into a PostgreSQL DB. The initial
| iteration of the code took ~16 hours to unzip, process, and
| load into the database. I wanted it to be faster.
|
| I don't know how to use multiple processes/multi-threading,
| but after some prompting, iterating, and persistent
| negotiations with Claude to refine the code (and an SSD
| upgrade) I can go from the 24gb zip file to all
| cleaned/transformed data in the DB in about 2.5 hours. Feels
| good man.
|
| Do I need to know exactly what's happening in the code (or at
| lower levels, abstracted from me) to make it faster? Not
| really. Could someone who was more skilled, that knew more
| about multi-threading, or other faster programming languages,
| etc..., make it even faster? probably. Is the code dog shit?
| it may not be production ready, but it works for me, and is
| clean enough. Someone who better knew what they were doing
| could work with it to make it even better.
|
| I feel like LLMs are great for brainstorming, idea
| generation, initial iterations. And in general can get you
| 80%+ the way to your goal, almost no matter what it is, much
| faster than any other method.
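| The multi-process speed-up described above usually boils down
| to a worker pool over the CPU-bound transform step; a toy
| sketch (the Wikipedia-specific parsing is omitted and the
| names are invented):

```python
from multiprocessing import Pool


def transform(article: str) -> str:
    # stand-in for the real per-record clean/transform step
    return article.strip().lower()


def process_all(articles: list[str], workers: int = 4) -> list[str]:
    # fan the CPU-bound work out across processes; map preserves order
    with Pool(processes=workers) as pool:
        return pool.map(transform, articles)


if __name__ == "__main__":
    print(process_all(["  Alpha ", " BETA "]))
```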
| carpo wrote:
| That's awesome! That's a lot of data and a great speed
| increase. I think that as long as you test and don't just
| accept exactly what it outputs without a little thought, it
| can be really useful.
|
| I take it as an opportunity to learn too. I'm working on a
| video library app that runs locally and wanted to extract
| images when the scene changed enough. I had no idea how to
| do this, and previously would have searched StackOverflow
| to find a way and then struggled for hours or days to
| implement it. This time I just asked Aider right in the IDE
| terminal what options I had, and it came back with 7
| different methods. I researched those a little and then
| asked it to implement 3 of them. It created an interface, 3
| implementations and a factory to easily get the different
| analyzers. I could then play around with each one and see
| what worked the best. It took like an hour. I wrote a test
| script to loop over multiple videos and run each analyzer
| on them. I then visually checked the results to see which
| worked the best. I ended up jumping into the code it had
| written to understand what was going on, and after a few
| tweaks the results are pretty good. This was all done in
| one afternoon - and a good chunk of that time was just me
| comparing images visually to see what worked best and
| tweaking thresholds and re-running to get it just right.
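| For illustration, one of the simpler strategies an LLM might
| propose for scene-change detection can be sketched in pure Python
| (a hypothetical sketch: real implementations would use OpenCV on
| decoded frames, and the names and threshold here are assumptions):

```python
def histogram(frame, bins=8):
    # Crude luminance histogram for one frame, where a frame is a
    # flat list of 0-255 pixel values; returns normalized bin counts.
    counts = [0] * bins
    for px in frame:
        counts[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]


def scene_changes(frames, threshold=0.5):
    # Flag frame indices where the histogram distance to the previous
    # frame exceeds the threshold -- one of several strategies an LLM
    # might offer (others: raw pixel diff, SSIM, learned embeddings).
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist = histogram(frame)
        if prev is not None:
            dist = sum(abs(a - b) for a, b in zip(hist, prev))
            if dist > threshold:
                cuts.append(i)
        prev = hist
    return cuts
```

| Comparing a few such analyzers behind a common interface, as the
| comment describes, then comes down to tweaking `threshold` and
| eyeballing the extracted images.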
| FeepingCreature wrote:
| Same for me. The lack of effort to get started is amazing, as
| well as the ability to farm the parts I don't like out to the
| AI. (As opposed to me, it's actually passable at graphical
| design.)
| ManuelKiessling wrote:
| I'm very much in the same camp as you. I'm having the time of
| my (professional) life right now.
| theshrike79 wrote:
| I'm on this boat. I can use LLMs to skip the boring bits like
| generating API glue classes or simple output functions.
|
| Example:
|
| I'm building a tool to grab my data from different sites like
| Steam, Imdb, Letterboxd and Goodreads.
|
| I know perfectly well how to write a parser for the Goodreads
| CSV output, but it doesn't exactly tickle my brain. Cursor or
| Cline will do it in minutes.
|
| Now I've got some data to work with, which is the fun bit.
|
| Again if I want to format the output to markdown for
| Obsidian, the LLM can do it in a few minutes and maybe even
| add stuff I didn't think about at first.
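| The Goodreads glue code mentioned above might look roughly like
| this (a hypothetical sketch; the column names follow Goodreads'
| CSV export format but should be verified against a real file):

```python
import csv
import io


def goodreads_to_markdown(csv_text):
    # Parse the Goodreads CSV export and emit a Markdown list
    # suitable for pasting into an Obsidian note.
    reader = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in reader:
        lines.append(
            f"- **{row['Title']}** by {row['Author']} "
            f"(rated {row['My Rating']}/5)"
        )
    return "\n".join(lines)
```

| Exactly the kind of mechanical parsing that doesn't tickle the
| brain, which is why handing it to Cursor or Cline is attractive.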
| carpo wrote:
| Yeah, I've found it great at XML too. All sorts of
| boilerplate stuff works so well. As the project gets
| bigger I've had to think for longer about things and step in
| more, but as you said, that's usually the fun, meaty bit
| that you want to work on anyway. My app is about 15k lines
| of code now, and it's a bit more work than at the
| beginning.
| tediousgraffit1 wrote:
| this is the key idea right here. LLMs are not replacing
| good coders, they're electric yak shavers that we should
| all learn how to use. The value add is real, and
| incremental, not revolutionary.
| hombre_fatal wrote:
| I know what you mean. It takes the satisfaction out of a
| certain class of problems knowing that you can just generate
| the solution.
|
| On the other hand, most tasks aren't fun nor satisfying and
| frankly they are a waste of time, like realizing you're about
| to spend the afternoon recredentializing in some aspect of
| Webpack/Gradle/BouncyCastle/Next.js/Combine/x/y/z just to solve
| one minor issue. And it's pure bliss when the LLM knows the
| solution.
|
| I think the best antidote to the upset in your comment is to
| build bigger and more difficult things. Save your expertise for
| the stuff that could actually use your expertise rather than
| getting stuck wasting it on pure time burn like we had to in
| the past.
| conductr wrote:
| I like the antidote, and it does remind me that I tried to
| create a game; gamedev has always been a challenge for me.
| I've attempted a few times and didn't get very far with this
| one either. I think I could do the coding even though scale
| is large, but I'm not artistic and asset generation/iteration
| is my block. I tried a handful of ai tools specifically for
| this and found they were all really far behind. I don't
| particularly like working with asset store art, maybe for
| parts but characters and the vibe of the game usually would
| excite and motivate me early on and I can't quite get there
| to sustain the effort.
| hombre_fatal wrote:
| I had this in my comment but deleted it, maybe you were
| responding to it, but in case you didn't see it: I found
| multiplayer browser games to be a good example of a hard
| project that LLMs help a lot with so that you can focus on
| the rewarding part.
|
| LLMs can one-shot pretty good server-authority + client-
| prediction + rollback netcode, something I've probably
| spent weeks of my life trying to build and mostly failing.
| And they can get a basic frontend 'proof' working. And once
| you verify that the networked MVP works, you can focus on
| the game.
|
| But the cool thing about multiplayer games is that they can
| be really small in scope because all of the fun comes from
| mechanics + playing with other people. They can be
| spaceships shooting at each other in a single room or some
| multiplayer twist on a dumbed down classic game. And that's
| just so much more feasible than building a whole game
| that's expected to entertain you as a single player.
| woah wrote:
| Try being a drummer!
| conductr wrote:
| I've actually been considering it. Although I think I'm
| probably tone deaf or rhythmically challenged and my rotator
| cuff is kinda messed up from an old injury, I've been trying
| to convince my 6 year old to take it up so I could justify
| having a kit around the house.
| johnnyanmac wrote:
| I'm mixed on the "craft" aspect of it. I don't hate coding, but
| there's definitely times where it feels like it's just
| tedious plumbing. I don't get satisfaction out of that kind of
| stuff.
|
| I'm pragmatic and simply don't trust current LLM's to do much
| in my domain. All that tribal knowledge is kept under lock and
| key at studios, so good luck scraping the net to find more than
| the very basic samples of how to do something. I've spent well
| over a decade doing that myself; the advanced (and even a lot
| of intermediate) information is slim and mostly behind paywalls
| or books.
| Hojojo wrote:
| I've become so much more productive in my hobby programming
| projects, because it lets me skip over the tedious parts that
| suck the life out of me (boilerplate, figuring out library
| dependencies, etc), and I can focus on the parts that are
| actually interesting like experimenting with different
| solutions, iterating and learning new things.
|
| It also lets me explore other libraries and languages without
| having to invest too much time and effort before knowing if
| it's right for what I want to do. I know that if I want to
| continue in a different direction, I haven't wasted tons of
| time/effort and getting started in the new direction will be
| much less effortful and time consuming.
| atemerev wrote:
| This is excellent, and matches my experience.
|
| Those lamenting the loss of manual programming: we are free to
| hone our skills on personal projects, but for
| corporate/consulting work, you cannot ignore a 5x speed
| advantage. It's over. AI-assisted coding won.
| skydhash wrote:
| Is it really 5x? I'm more surprised that someone with 25+
| years of experience would be hard pressed to learn enough
| Python to code the project. It's not like he's learning
| programming again, or being recently exposed to OOP.
| Especially when you can find working code samples for the
| subproblems in the project.
| atemerev wrote:
| It is 5x if you are already a senior SE knowing your
| programming language really well, constantly suggesting good
| architecture yourself ("seed files" is a brilliant idea), and
| not accepting any slop / asking to rewrite things if
| something is not up to your standards (of course, every piece
| of code should be reviewed).
|
| Otherwise, it can be 0.2x in some cases. And you should not
| use LLMs for anything security-related unless you are a
| security expert, otherwise you are screwed.
|
| (this is SOTA as of April 2025, I expect things to become
| better in the near future)
| skydhash wrote:
| > _It is 5x if you are already a senior SE knowing your
| programming language really well, constantly suggesting
| good architecture yourself ( "seed files" is a brilliant
| idea), and not accepting any slop / asking to rewrite
| things if something is not up to your standards (of course,
| every piece of code should be reviewed)._
|
| If you know the programming language really well, that
| usually means you know what libraries are useful, memorized
| common patterns, and have some project samples lying around.
| The actual speed improvement would be on typing the code,
| but it's usually the activity that requires the least time
| on any successful project. And unless you're a slow typist,
| I can't see 5x there.
|
| If you're lacking in fundamentals, then it's just a skill
| issue, and I'd be suspicious of the result.
| atemerev wrote:
| "Given this code, extract all entities and create the
| database schema from these", "write documentation for
| these methods", "write test examples", "write README.md
| explaining how to use scripts in this directory",
| "refactor everything in this directory just like this
| example", etc etc
|
| Everything boring can be automated and it takes five
| seconds compared to half an hour.
| skydhash wrote:
| It can only be automated if the only thing you care about
| is having the code/text, and not making sure they are
| correct.
|
| > _Given this code, extract all entities and create the
| database schema from these_
|
| Sometimes, the best representation for storing and
| loading data is not the best for manipulating it and
| vice-versa. Directly mapping code entities to database
| relations (assuming it's SQL) is a sure way to land
| yourself in trouble later.
|
| > _write documentation for these methods_
|
| The intent of documentation is to explain how to use
| something and the multiple why's behind an
| implementation. What is there can be done using a symbol
| explorer. Repeating what is obvious from the name of the
| function is not helpful. And hallucinating something that
| is not there is harmful.
|
| > _write test examples_
|
| Again the type of tests matters more than the amount. So
| unless you're sure that the test is correct and the test
| suite really ensure that the code is viable, it's all for
| naught.
|
| ...
|
| Your use cases assume that the output is correct. And as
| the hallucination risk from LLM models is non-zero, such
| assumption is harmful.
| Marsymars wrote:
| This is actually a pretty compelling reason for me to suggest
| that my company _not_ hire consultants /contractors to write
| code for us. A ton of our dev budget is already spent on
| untangling edge-case bugs from poorly written/understood code.
| noodletheworld wrote:
| > Once again, the AI agent implemented this entire feature
| without requiring me to write any code manually.
|
| > For controllers, I might include a small amount of essential
| details like the route name: [code]
|
| Commit history: https://github.com/dx-tooling/platform-problem-
| monitoring-co...
|
| Look, I honestly think this is a fair article and some good
| examples, but what is with this inane "I didn't write any of it
| myself" claim that is clearly false that every one of these
| articles keeps bringing up?
|
| What's wrong with the fact you _did_ write some code as part of
| it? You clearly did.
|
| So weird.
| ManuelKiessling wrote:
| True, this needs to be fixed.
|
| What I wanted to express was that I didn't do any of the
| _implementation_ , that is, any _logic_.
|
| I need to phrase this better.
| ManuelKiessling wrote:
| Wait, I just saw what you meant: so no, for the Python tool
| my message stands. I did not write any code for it myself.
| aerhardt wrote:
| Not OP but did you edit the code via prompts, or was the
| whole thing a one-shot? That particular aspect is very
| confusing to me, I think you should clarify it.
| ManuelKiessling wrote:
| Absolutely not a one-shot, many iterations.
| colesantiago wrote:
| Hot take: I don't see a problem with this and in fact we will see
| in a few years that senior engineers will be needed less in the
| future.
|
| I have a business which is turning over millions in ARR at the
| moment (made in the pandemic). It's a pest control business, and
| we have a small team with only 1 experienced senior engineer; we
| used to have 5, but with AI we reduced it to one, whom we are
| still paying well.
|
| Even with maintenance, we plan ahead for this with an LLM and
| make changes accordingly.
|
| I think we will see more organizations opting for smaller teams
| and reducing engineer count since now the code generated is to
| the point that it works, it speeds up development and that it is
| "good enough".
| janderson215 wrote:
| This is interesting. Do you run a custom app in house? What are
| some of the main functions of the app? Internal or customer
| facing?
| windows2020 wrote:
| In my travels I find writing code to be natural and relaxing--a
| time to reflect on what I'm doing and why. LLMs haven't helped me
| out too much yet.
|
| Coding by prompt is the next lowering of the bar and vibe coding
| even more so. Totally great in some scenarios and adds noise in
| others.
| scandox wrote:
| One of the possibly obsolete things I enjoy about working with a
| human junior dev is that they learn and improve. It's nice to
| feel all this interaction is building something.
| plandis wrote:
| It's common practice to put your preferences and tips/advice
| into a readme solely for the LLM to consume to learn about what
| you want it to do.
|
| So you'd set things like code standards (and hopefully enforce
| them via feedback tools), guides for writing certain
| architectures, etc. Then when you have the LLM start working it
| will first read that readme to "learn" how you want it to
| generally behave.
|
| I've found that I typically edit this file as time goes on as a
| way to add semi-permanent feedback into the system. Even if
| your context window gets too large when you restart the LLM
| will start at that readme to prime itself.
|
| That's the closest analogy I can think of.
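| A minimal sketch of that priming step (the filename and helper
| are assumptions; real tools use names like CLAUDE.md or
| .cursorrules for this standing-instructions file):

```python
from pathlib import Path


def build_system_prompt(project_root="."):
    # Prepend the project's standing instructions -- code standards,
    # architecture guides, feedback accumulated over time -- so every
    # fresh LLM session starts from the same "readme".
    rules_file = Path(project_root) / "LLM_RULES.md"
    rules = rules_file.read_text() if rules_file.exists() else ""
    return "Follow these project conventions:\n" + rules
```

| Editing that one file over time is the semi-permanent feedback
| loop the comment describes: the closest thing to a junior dev
| that remembers what you told it.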
| pphysch wrote:
| I'm skeptical that
|
| 1. Clearly define requirements
|
| 2. Clearly sketch architecture
|
| 3. Setup code tool suite
|
| 4. Let AI agent write the remaining code
|
| Is better price-performance than going lighter on 1-3 and instead
| of 4, spending that time writing the code yourself with heavy
| input from LLM autocomplete, which is what LLMs are elite at.
|
| The agent will definitely(?) write the code faster, but quality
| and understanding (tech debt) can suffer.
|
| IOW the real takeaway is that knowing the requirements,
| architecture, and tooling is where the value is. LLM Agent value
| is dubious.
| istjohn wrote:
| We're just in a transitional moment. It's not realistic to
| expect LLM capabilities to leapfrog from marginally better
| autocomplete to self-guided autocoder without passing through a
| phase where it shows tantalizing hints of being able to go solo
| yet lacks the ability to follow through. Over the next couple
| years, the reliability, versatility, and robustness of LLMs as
| coders will steadily increase.
| denkmoon wrote:
| Senior developer skills haven't changed. Wrangling paragraphs of
| business slop into real technical requirements, communicating
| these to various stakeholders, understanding how all the discrete
| parts of whatever system you're building fit together, being
| responsible for timelines, getting the rest of the team
| coordinated/aligned and on-track, etc.
|
| Actually coding is a relatively small part of my job. I could use
| an LLM for the other parts but my employer does not appreciate
| being given word salad.
| 8note wrote:
| with the ai age, i think its also making sure team members use
| ai in similar ways to get repeatable results
| yoyohello13 wrote:
| I've been pretty moderate on AI but I've been using Claude cli
| lately and it's been pretty great.
|
| First, I can still use neovim which is a massive plus for me.
| Second it's been pretty awesome to offload tasks. I can say
| something like "write some unit tests for this file, here are
| some edge cases I'm particularly concerned about" then I just let
| it run and continue with something else. Come back a few mins
| later to see what it came up with. It's a fun way to work.
| burntcaramel wrote:
| I agree, and I really like the concrete examples here. I tried
| relating it to the concept of "surprise" from information theory
| -- if what the LLM is producing is low surprise to you, you have
| a high chance of success as you can compare to the version you
| wrote in your experienced head.
|
| If it's high surprise then there's a greater chance that you
| can't tell right code from wrong code. I try to reframe this in a
| more positive light by calling it "exploration", where you can
| ask follow up questions and hopefully learn about a subject you
| started knowing little about. But it's important for you to
| realize which mode you are in, whether you are in familiar or
| unfamiliar waters.
|
| https://royalicing.com/2025/infinite-bicycles-for-the-mind
|
| The other benefit an experienced developer can bring is using
| test-driven development to guide and constrain the generated
| code. It's like a contract that must be fulfilled, and TDD lets
| you switch between using an LLM or hand crafting code depending
| on how you feel or the AI's competency at the task. If you have a
| workflow of writing a test beforehand it helps with either path.
|
| https://royalicing.com/2025/test-driven-vibes
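| As a tiny illustration of that contract idea (hypothetical
| function and test; the point is that the test exists before
| deciding whether a human or an LLM writes the implementation):

```python
def slugify(title):
    # Implementation under test -- could be hand-written or
    # LLM-generated; the test below doesn't care which.
    return "-".join(title.lower().split())


def test_slugify():
    # Written first, this pins down the behavior any generated
    # code must satisfy before it is accepted.
    assert slugify("Senior Developer Skills") == "senior-developer-skills"
```

| With the test in place you can switch freely between prompting
| and hand-crafting, which is exactly the flexibility described
| above.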
| ramoz wrote:
| The gap between Sr and Jr skills will close as AI gets better. AI
| is coming for the Sr developer, be assured.
|
| Also, keyframing can be done in a more autonomous fashion. Sr
| Engineers can truly vibe code if they set up a proper framework
| for themselves. Keyframing as described in the article is too
| manual.
| barotalomey wrote:
| > The gap between Sr and Jr skills will close as AI gets
| better.
|
| Sorry, but as the wind blows, we only have indications of the
| opposite so far, and it's widely reported that a generation is
| losing its ability for deep thought, relying too much on
| automated code generation. This is especially damaging for
| juniors, who never get the chance to learn the tricks of the
| trade in the first place, and they're the ones having a hard
| time explaining what the code the LLM generated even does.
|
| Seniors will be few and far between, and we will charge premium
| for fixing up the crap LLM created, which will fit us great
| before retirement.
|
| Good luck when we are retired.
| mmazing wrote:
| When working with AI for software engineering assistance, I use
| it mainly to do three things -
|
| 1. Do piddly algorithm type stuff that I've done 1000x times and
| isn't complicated. (Could take or leave this, often more work
| than just doing it from scratch)
|
| 2. Pasting in gigantic error messages or log files to help
| diagnose what's going wrong. (HIGHLY recommend.)
|
| 3. Give it high level general requirements for a problem, and
| discuss POTENTIAL strategies instead of actually asking it to
| solve the problem. This usually allows me to dig down and come up
| with a good plan for whatever I'm doing quickly. (This is where
| real value is for me, personally.)
|
| This allows me to quickly zero in on a solution, but more
| importantly, it helps me zero in strategically too with less
| trial and error. It lets me have an in-person whiteboard meeting
| (as I can paste images/text to discuss too) where I've got
| someone else to bounce ideas off of.
|
| I love it.
| miningape wrote:
| Same, 3 is the only use case I've found that works well enough.
| But I'll still usually take a look on google / reddit /
| stackoverflow / books first just because the information is
| more reliable.
|
| But it's usually an iterative process, I find pattern A and B
| on google, I'll ask the LLM and it gives A, B and C. I'll
| google a bit more about C. Find out C isn't real. Go back and
| try other people commenting on it on reddit, go back to the LLM
| to sniff out BS, so on and so on.
| plandis wrote:
| The main questions I have with using LLMs for this purpose in a
| business setting are:
|
| 1. Is the company providing the model willing to indemnify _your_
| company when using code generation? I know GitHub Copilot will do
| this with the models they provide on their hardware, but if
| you're using Claude Code or Cursor with random models do they
| provide equal guarantees? If not I wonder if it's only a matter
| of time before that landmine explodes.
|
| 2. In the US, AFAICT, software that is mostly generated by non-
| humans is not copyrightable. This is not an issue if you're
| creating code snippets from an LLM, but if you're generating an
| entire project this way then none or only small parts of the code
| base you generate would then be copyrightable. Do you still own
| the IP if it's not copyrightable? What if someone exfiltrates
| your software? Do you have no or little remedy?
| sivaragavan wrote:
| Aah.. I have been repeating this to my team for several days now.
| Thanks for putting this together.
|
| I start every piece of work, green or brown, with a markdown file
| that often contains my plan, task breakdown, data models
| (including key fields), API / function details, and sample
| responses.
|
| For the tool part, though, I took a slightly different approach.
| I decided to use Rust primarily for all my projects, as the
| compile-time checks are a great way to ensure the correctness of
| the generated code. I have noticed many more errors are detected
| in AI-generated Rust code than in any other language. I am happy
| about it because these are errors that I would have missed in
| other languages.
| manmal wrote:
| > I have noticed many more errors are detected in AI-generated
| Rust code than in any other language.
|
| Is that because the Rust compiler is just a very strong
| guardrail? Sounds like it could work well for Swift too. If
| only xcodebuild were less of a pain for big projects.
| sivaragavan wrote:
| Yes, exactly. AI isn't gonna check things like memory
| management, data type overflows and such, during generation.
| It would be great if we catch them at compile time.
|
| Regarding swift, totally hear you :) Also I haven't tried
| generating swift code - wondering how well that would be
| trained as there are fewer open source codebases for that.
| justanotherunit wrote:
| Interesting post, but this perspective seems to be the main
| focus, like all the time. I find this statement to be completely
| wrong usage of AI:
|
| "This is especially noteworthy because I don't actually know
| Python. Yes, with 25+ years of software development experience, I
| could probably write a few lines of working Python code if
| pressed -- but I don't truly know the language. I lack the muscle
| memory and intimate knowledge of its conventions and best
| practices."
|
| You should not use AI to just "do" the hard job, since as many
| have mentioned, it does it poorly and sloppy. Use AI to quickly
| learn the advantages and disadvantages of the language, then you
| do not have to navigate through documentation to learn
| everything, just validate what the AI outputs. All is contextual,
| and since you know what you want in high level, use AI to help
| you understand the language.
|
| This costs speed yes, but I have more control and gain knowledge
| about the language I chose.
| ManuelKiessling wrote:
| I agree 100%, but in this very specific case, I really just
| wanted a working one-off solution that I'm not going to spend
| much time on going forward, AND I wanted to use it as an excuse
| to see how far I can go with AI tooling in a tech stack I don't
| know.
|
| That being said, using AI as a teacher can be a wonderful
| experience. For us seniors, but also and probably more
| importantly, for eager and non-lazy juniors.
|
| I have one such junior on my team who currently speed-runs
| through the craft because he uses AI to explain EVERYTHING to
| him: What is this pattern? Why should I use it? What are the
| downsides? And so on.
|
| Of course I also still tutor him, as this is a main part of my
| job, but the availability of an AI that knows so much and
| always has time for him and never gets tired etc is just
| fantastic.
| switch007 wrote:
| As much as I am still sceptical about AI tools, the past month
| has been a revolution as a senior dev myself.
|
| I'm blasting through tickets, leaving more time to tutor and help
| junior colleagues and do refactoring. Guiding them has then been
| a multiplier, and also a bit of an eye opener about how little
| real guidance they've been getting up until now. I didn't realise
| how resource constrained we'd been as a team leading to not
| enough time guiding and helping them.
|
| I don't trust the tools with writing code very often but they are
| very good at architecture questions, outputting sample code etc.
| Supercharged google
|
| As a generalist, I feel less overwhelmed
|
| It's probably been the most enjoyable month at this job.
| quantadev wrote:
| As of 2025, it's no longer the case that older developers (like
| me at 57) are at a disadvantage just due to potentially lessened
| sheer brain power, as we had in our 20s. The reason is simple: We
| know what all the terminologies are, how to ask for things with
| proper and sufficient levels of detail and context, we know what
| the pitfalls and common error patterns are, and on and on, from
| decades of experience. Working with AI has similarities to
| management positions. You need to be a generalist. You need to
| know a little about everything, more so than a lot about one
| thing. All this can ONLY come with age, just like wisdom can only
| come thru experience.
|
| I just hope that most hiring managers now realize this. With AI
| the productivity of younger developers has gone up by a factor of
| 10x, but the productivity of us "Seasoned" developers has gone up
| 100x. This now evens the playing field, I hope, where us
| experienced guys will be given a fair shake in the hiring process
| rather than what's been happening for decades where the
| 20-somethings pretend to be interviewing the older guys, because
| some boss told them to, but they never had any actual intentions
| of hiring anyone over 40, just on the bases of age alone, even if
| some older guy aces the interview.
| ManuelKiessling wrote:
| This is a great observation and a beautiful perspective.
|
| Would it be okay for you if I quote this on a revised version
| of the article (with proper attribution, of course)?
| quantadev wrote:
| Sure, you have my permission. However, in my own case, I'm
| not currently looking for a job, and never in my life did it
| take more than 2 weeks to FIND a job, when looking. Been
| coding 24x7x365 since 1981. :)
|
| ...however I'm absolutely certain there are many interviews
| I've aced and not gotten the job, primarily for being in an
| several undesirable categories that Silicon Valley
| progressive wokesters despise with a passion: 1) Over 40. 2)
| White 3) Male 4) Straight 5) Handsome 6) Rich 7) American,
| speaking perfect English.
|
| All 7 of those demographic stats about me are absolutely
| despicable to Silicon Valley hiring managers, and any one
| alone is enough for them to pass by a candidate over a
| different one lacking the trait.
|
| You can also quote, this post about Silicon Valley too, and
| there was even a time when such rants like this would've been
| CENSORED immediately by HackerNews. Thankfully they've
| lightened up a bit.
| blatantly wrote:
| I think with AI my productivity had increased 1% at most. If I
| measure time saved per week.
| quantadev wrote:
| If your productivity is up 1% due to AI, that only means one
| of two things: 1) You're not a developer, or 2) Your prompts
| are 99% poorly written, so that's a you problem not an AI
| problem.
| austin-cheney wrote:
| I didn't see anything of substance in the article. For example if
| they benefited from AI just how beneficial was it? Did it shrink
| their code by any amount or reduce execution time?
|
| No, the article was just something about enjoying AI. This is
| hardly anything related to _senior_ software developer skills.
| Olreich wrote:
| You know what's more fun than having a bad junior write crap code
| while you point out their mistakes? Writing good code yourself.
| yapyap wrote:
| > Context on Code Quality (via HackerNews): The HackerNews
| discussion included valid critiques regarding the code quality in
| this specific Python project example (e.g., logger configuration,
| custom config parsing, potential race conditions). It's a fair
| point, especially given I'm not a Python expert. For this
| particular green-field project, my primary goal was rapid
| prototyping and achieving a working solution in an unfamiliar
| stack, prioritizing the functional outcome over idiomatic code
| perfection or optimizing for long-term maintainability in this
| specific instance. It served as an experiment to see how far AI
| could bridge a knowledge gap. In brown-field projects within my
| areas of expertise, or projects demanding higher long-term
| maintainability, the human review, refinement, and testing
| process (using the guardrails discussed later) is necessarily
| much more rigorous. The critiques highlight the crucial role of
| experienced oversight in evaluating and refining AI-generated
| code to meet specific quality standards.
|
| We all know how big companies handle software, if it works ship
| it. Basically once this shit starts becoming very mainstream
| companies will want to shift into their 5x modes (for their oh so
| holy investors that need to see stock go up, obviously.)
|
| So once this sloppy prototype is seen as working they will just
| ship the shit sandwich prototype. And the developers won't know
| what the hell it means so when something breaks in the future,
| and that is when not if. They will need AI to fix it for them,
| cause once again they do not understand what is going on.
|
| What I'm seeing here is you proposing replacing one of your legs
| with AI and letting it do all the heavy lifting, just so you can
| lift heavier things for the moment.
|
| Once this bubble crumbles the technical debt will be big enough
| to sink companies, I won't feel sorry for any of the AI boosties
| but do for their families that will go into poverty
| hansmayer wrote:
| Skimming through the article, it lacks clear structure and I am
| not sure what the author is attempting to show. Some new skills
| he had to use? Formulating requirements? That he can write
| extremely long prompts? The Conclusion section looks just like
| something a GenAI tool would produce. Or a first-year student in
| an essay.
| ManuelKiessling wrote:
| I've taken this feedback to heart and tried to improve the
| structure a bit.
|
| The main message I want to bring across is two-fold:
|
| 1. Senior developers are in a great position to make productive
| use of AI coding tools
|
| 2. I have (so far) identified three measures that make AI
| coding sessions much more successful
|
| I hope the reworked version makes these more central and clear.
| I'm not a native English speaker, thus it's probably not
| possible for me to end up with an optimal version.
|
| Still, I hope the new approach works a bit better for you --
| would love to receive another round of feedback.
| hansmayer wrote:
| Hey, thanks for not being offended about an honest comment,
| not a lot of folks could take it as maturely as you did :) May
| I suggest to add a short summary, half page max, of the
| important results that you consider to have achieved at the
| beginning of the article? Kind of like how scientific
| articles prepend the content with an abstract. I think it
| would help a lot in building the expectations for reading the
| rest of it.
| bsdimp wrote:
| Yesterday I set ChatGPT to a coding task. It utterly failed. Its
| error handling was extensive, but wrong. It didn't know file
| formats. It couldn't write the code when I told it the format.
| The structure of the code sucked. The style was worse. I've never
| had to work so hard for such garbage. I could have knocked it out
| from scratch faster with higher quality.
| vessenes wrote:
| This post is pretty much my exact experience with the coding
| tools.
|
| Basically, the state of the art right now can turn me into an
| architect/CTO who spends a lot of time complaining about poor
| architectural choices. Crucially, Claude does not quite
| understand how to greenfield implement good architectures. 3.7
| is also JUST. SO. CHATTY. It's better than 3.5, but more
| annoying.
|
| Gemini 2.5 needs one more round of coding tuning; it's excellent,
| has longer context and is much better at arch, but still
| occasionally misformats or forgets things.
|
| Upshot -- my hobby coding can now be 'hobby startup making' if
| I'm willing to complain a lot, or write out the scaffolding and
| requirements docs. It provides nearly no serotonin boost from
| getting into flow and delivering something awesome, but it does
| let me watch YouTube on the side while it codes.
|
| Decisions..
| g8oz wrote:
| The bit about being able to get something workable going in an
| unfamiliar tech stack hits home. In a similar vein I was able to
| configure a VyOS router, a nushell based api client and some
| MSOffice automation in Powershell with AI assistance. Not a big
| deal in and of itself but still very useful.
| JohnMakin wrote:
| I'm not quite 40 but am starting to feel the effects of age.
| AI has been a great tool, if only because it saves my hands. I
| don't have it write the logic for me; it's mostly just stuff
| like smart autocomplete, etc. I battle really severe
| tendonitis, and I've noticed a definite improvement since I
| started using code completion.
|
| As far as knowledge/experience, I worry about a day where "vibe
| coding" takes over the world and it's only the greybeards that
| have any clue WTF is going on. Probably profitable, but also
| sounds like a hellscape to me.
|
| I would hate to be a junior right now.
| spoonfeeder006 wrote:
| Well, what's the difference between vibe coding 5-10 years from
| now vs. coding in C 5-10 years after compilers came out?
| curiousllama wrote:
| I would love to be a junior right now. I would just hate
| becoming a senior, after having been a junior right now.
| JohnMakin wrote:
| well said
| amflare wrote:
| > The very experience and accumulated know-how in software
| engineering and project management -- which might seem obsolete
| in the age of AI -- are precisely what enable the most effective
| use of these tools.
|
| I agree with the author here, but my worry is that by leaning on
| the LLMs, the very experience that allows me to uniquely leverage
| the LLMs now will start to atrophy and in a few years time I'll
| be relying on them just to keep up.
| scelerat wrote:
| I find myself spending so much time correcting bad -- or perhaps,
| more appropriately, misguided -- code that I constantly wonder if
| I'm saving time. I think I am, but a much higher percentage of
| my time now goes to the hard work of evaluating and thinking
| about things, rather than to the mentally easy things the AI is
| good at, the things that used to give me a little bit of a
| break.
|
| Sometimes I liken the promise of AI to my experience with
| stereoscopic images (I have never been able to perceive them) --
| I know there's something there but I frequently don't get it.
| hinkley wrote:
| The biggest thing I worry about with AI is that its current
| incarnation is anathema to the directions I think software needs
| to go next, and I'm at a loss to see what the judo-throw will
| look like that achieves that.
|
| Rob Pike has the right idea but the wrong execution. As the
| amount of second- and third-party code we use increases, the
| search time goes up, and we need better facilities to reduce the
| amount of time you need to spend looking at the internals of one
| package, because you need that time to look at three others. So
| clarity and discoverability both need to matter, and AI has no
| answers here, only more problems.
|
| IMO, a lot of the success of Java comes from having provided 80%
| of the source code with the JDK. You could spend so much time
| single-stepping into code that was not yours to figure out why
| your inputs didn't cause the outputs you expected. But those are
| table stakes now.
| lowsong wrote:
| > ... I believe our community should embrace it sooner rather
| than later -- but like all tools and practices, with the right
| perspective and a measured approach.
|
| There is no such thing as a measured approach. You can either use
| LLM agents to abdicate your intellectual honesty and produce
| slop, or you can refuse their use.
___________________________________________________________________
(page generated 2025-04-04 23:02 UTC)