[HN Gopher] GitHub CEO: manual coding remains key despite AI boom
       ___________________________________________________________________
        
       GitHub CEO: manual coding remains key despite AI boom
        
       Author : andrewstetsenko
       Score  : 74 points
       Date   : 2025-06-23 20:50 UTC (2 hours ago)
        
 (HTM) web link (www.techinasia.com)
 (TXT) w3m dump (www.techinasia.com)
        
       | CoffeeOnWrite wrote:
       | "Manual" has a negative connotation. If I understand the article
        | correctly, they mean "_human_ coding remains key". It's not
        | clear to me that the GitHub CEO actually used the word "manual";
        | that would surprise me. Is there another source on this that's either
       | more neutral or better at choosing accurate words? The last thing
       | we need is to put down human coding as "manual"; human coders
       | have a large toolbox of non-AI tools to automate their coding.
       | 
        | (Wow, I sound triggered! _sigh_)
        
         | upghost wrote:
         | It's almost as bad as "manual" thinking!
        
         | vram22 wrote:
          | > "_Man_ual" has a negative connotation. If I understand the
         | article correctly, they mean "human coding remains key".
         | 
         | A _man_ is a human.
        
           | layer8 wrote:
           | Humanual coding? ;)
           | 
           | "Manual" comes from Latin _manus_ , meaning "hand":
           | https://en.wiktionary.org/wiki/manus. It literally means "by
           | hand".
        
         | anamexis wrote:
         | What is the distinction between manual coding and human coding?
        
         | GuinansEyebrows wrote:
         | > Wow I sound triggered! sigh
         | 
         | this is okay! it's a sign of your humanity :)
        
         | layer8 wrote:
         | How about "organic coding"? ;)
        
         | dalyons wrote:
         | Acoustic coding
        
       | FirmwareBurner wrote:
        | I wonder how much coding he does, and how he knows which code
        | is human-written and which machine-written.
        
       | treefarmer wrote:
       | I get a 403 forbidden error when trying to view the page. Anyone
       | else get that?
        
       | strict9 wrote:
        | It's interesting to see a CEO express thoughts on AI and coding
        | that go in a slightly different direction.
       | 
       | Usually the CEO or investor says 30% (or some other made up
       | number) of all code is written by AI and the number will only
       | increase, implying that developers will soon be obsolete.
       | 
        | It's implied that 30% of all code submitted and shipped to
        | production is from AI agents with zero human interaction. But of
        | course this is not the case; it's the same developers as before,
        | using tools to write code more rapidly.
       | 
       | And writing code is only one part of a developer's job in
       | building software.
        
         | heisenbit wrote:
          | Well, I suspect GitHub's income is a function of the number of
          | developers using it, so it is not surprising that he takes
          | this position.
        
         | madeofpalk wrote:
         | He's probably more right than not. But he also has a vested
         | interest in this (just like the other CEOs who say the
         | opposite), being in the business of human-mediated code.
        
           | yodon wrote:
           | Presumably you're aware that the full name of Microsoft's
           | Copilot AI code authoring tool is "GitHub Copilot", that
           | GitHub developed it, and that he runs GitHub.
        
             | Imustaskforhelp wrote:
             | Yea, which is why I was surprised too when he said this.
        
             | madeofpalk wrote:
              | _Co_pilot. Not Pilot.
        
       | p2detar wrote:
       | I use Cline with 3.7-sonnet to code a side gig Go web app. It
        | does help a lot, even in complex scenarios, but 90% of the time I
        | still need to make some "manual" adjustments to the code it
        | produces. Even as a Go novice, I still know what I'm doing, but I
        | can't imagine doing this as a newbie.
       | 
        | I vibed probably around 40-50% of the code so far; the rest I
       | wrote myself. $30 spent up to this point and the app is 60% done.
       | I'm curious what my total expenses will be at the end of all
       | this.
       | 
       | edit: typos
        
       | jstummbillig wrote:
       | Going by the content of the linked post, this is very much a
       | misleading headline. There is nothing in the quotes that I would
       | read as an endorsement of "manual coding", at least not in the
       | sense that we have used the term "coding" for the past decades.
        
       | jasonthorsness wrote:
       | "He warned that depending solely on automated agents could lead
       | to inefficiencies. For instance, spending too much time
       | explaining simple changes in natural language instead of editing
       | the code directly."
       | 
        | For lots of changes, describing them in English takes longer
        | than just performing the change. I think the most effective
        | workflow with AI agents will be a sort of active back-and-forth.
        
         | sodality2 wrote:
         | Yeah, I can't count the number of times I've thought about a
         | change, explained it in natural language, pressed enter, then
         | realized I've already arrived at the exact change I need to
         | apply just by thinking it through. Oftentimes I even beat the
         | agent at editing it, if it's a context-heavy change.
        
           | dgfitz wrote:
           | Rubber duck. I've kept one on my desk for over a decade. It
           | was also like a dollar, which is more than I've spent on
           | LLMs. :)
           | 
           | https://en.m.wikipedia.org/wiki/Rubber_duck_debugging
        
         | neom wrote:
          | How much back-and-forth are you OK with, or actually want?
          | I've just joined an agent-tooling startup (yesh... I wrote
          | that, huh) - and it's something we talk about a lot
          | internally. We think it's fine to go back and forth, to tell
          | it frankly that it's not doing it right, etc., but some of
          | that might get annoying? Do you have a sense of how this
          | might work, to your mind? Thanks! :)
        
       | sysmax wrote:
       | AI can very efficiently apply common patterns to vast amounts of
       | code, but it has no inherent "idea" of what it's doing.
       | 
       | Here's a fresh example that I stumbled upon just a few hours ago.
       | I needed to refactor some code that first computes the size of a
       | popup, and then separately, the top left corner.
       | 
        | For brevity, one part used an "if", while the other one had a
        | "switch":
        | 
        |     if (orientation == Dock.Left || orientation == Dock.Right)
        |         size = /* horizontal placement */
        |     else
        |         size = /* vertical placement */
        | 
        |     var point = orientation switch
        |     {
        |         Dock.Left => ...
        |         Dock.Right => ...
        |         Dock.Top => ...
        |         Dock.Bottom => ...
        |     };
       | 
        | I wanted the LLM to refactor it to store the position rather than
        | applying it immediately. Turns out, it just could not handle two
        | different constructs (if vs. switch) doing a similar thing. I
        | tried several variations of prompts, but it leaned very strongly
        | toward either two ifs or two switches, despite rather explicit
        | instructions not to do so.
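        | 
        | The shape I was after kept the mixed if/switch but stored both
        | results instead of applying them right away, roughly like this
        | (just a sketch; the placement logic is elided as above, and the
        | m_StateStorage member names are only illustrative):
        | 
        |     if (orientation == Dock.Left || orientation == Dock.Right)
        |         size = /* horizontal placement */
        |     else
        |         size = /* vertical placement */
        | 
        |     var point = orientation switch
        |     {
        |         Dock.Left => ...
        |         Dock.Right => ...
        |         Dock.Top => ...
        |         Dock.Bottom => ...
        |     };
        | 
        |     // store instead of applying immediately; applied on render
        |     m_StateStorage.PopupSize = size;
        |     m_StateStorage.PopupPosition = point;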
       | 
       | It sort of makes sense: once the model has "completed" an if, and
       | then encounters the need for a similar thing, it will pick an
       | "if" again, because, well, it is completing the previous tokens.
       | 
       | Harmless here, but in many slightly less trivial examples, it
       | would just steamroll over nuance and produce code that appears
       | good, but fails in weird ways.
       | 
       | That said, splitting tasks into smaller parts devoid of such
       | ambiguities works really well. Way easier to say "store size in
       | m_StateStorage and apply on render" than manually editing 5
       | different points in the code. Especially with stuff like
        | Cerebras, which can chew through complex code at several kilobytes
       | per second, expanding simple thoughts faster than you could
       | physically type them.
        
         | gametorch wrote:
         | Yeah that's one model that you happen to be using in June 2025.
         | 
         | Give it to o3 and it could definitely handle that today.
         | 
         | Sweeping generalizations about how LLMs will never be able to
          | do _coding task_ X, Y, or Z will all be proven wrong with time,
         | imo.
        
           | npinsker wrote:
           | Sweeping generalizations about how LLMs will always (someday)
           | be able to do arbitrary X, Y, and Z don't really capture me
           | either
        
             | gametorch wrote:
             | In response to your sweeping generalization, I posit a
             | sweeping generalization of my own, said the bard:
             | 
             |  _Whatever can be statistically predicted_
             | 
             |  _by the human brain_
             | 
             |  _Will one day also be_
             | 
             |  _statistically predicted by melted sand_
        
               | agentultra wrote:
               | Until the day that thermodynamics kicks in.
               | 
               | Or the current strategies to scale across boards instead
                | of chips get too expensive in terms of cost, capital,
               | and externalities.
        
               | gametorch wrote:
               | I mean fair enough, I probably don't know as much about
               | hardware and physics as you
        
               | agentultra wrote:
               | Just pointing out that there are limits and there's no
               | reason to believe that models will improve indefinitely
               | at the rates we've seen these last couple of years.
        
               | soulofmischief wrote:
               | There is reason to believe that humans will keep trying
               | to push the limitations of computation and computer
               | science, and that recent advancements will greatly
               | accelerate our ability to research and develop new
               | paradigms.
               | 
               | Look at how well Deepseek performed with the limited,
               | outdated hardware available to its researchers. And look
               | at what demoscene practitioners have accomplished on much
               | older hardware. Even if physical breakthroughs ceased or
               | slowed down considerably, there is still a ton left on
               | the table in terms of software optimization and theory
               | advancement.
               | 
               | And remember just how _young_ computer science is as a
               | field, compared to other human practices that have been
               | around for hundreds of thousands of years. We have so
               | much to figure out, and as knowledge begets more
               | knowledge, we will continue to figure out more things at
               | an increasing pace, even if it requires increasingly
               | large amounts of energy and human capital to make a
               | discovery.
               | 
               | I am confident that if it is _at all_ possible to reach
               | human-level intelligence at least in specific categories
                | of tasks, we're gonna figure it out. The only real
               | question is whether access to energy and resources
               | becomes a bigger problem in the future, given humanity's
               | currently extraordinarily unsustainable path and the risk
               | of nuclear conflict or sustained supply chain disruption.
        
           | DataDaoDe wrote:
           | The interesting questions happen when you define X, Y and Z
            | and time. For example, will LLMs be able to solve the P=NP
           | problem in two weeks, 6 months, 5 years, a century? And then
           | exploring why or why not
        
           | guappa wrote:
           | If you need a model per task, we're very far from AGI.
        
           | sysmax wrote:
           | I am working on a GUI for delegating coding tasks to LLMs, so
           | I routinely experiment with a bunch of models doing all kinds
           | of things. In this case, Claude Sonnet 3.7 handled it just
           | fine, while Llama-3.3-70B just couldn't get it. But that is
           | literally the simplest example that illustrates the problem.
           | 
           | When I tried giving top-notch LLMs harder tasks (scan an
           | abstract syntax tree coming from a parser in a particular
           | way, and generate nodes for particular things) they
            | completely blew it. The output didn't even compile, never
            | mind the logical errors and missed points. But once I broke
            | the problem down into making lists of relevant parsing
            | contexts and generating one wrapper class at a time, it
            | saved me a whole ton of work.
           | It took me a day to accomplish what would normally take a
           | week.
           | 
           | Maybe they will figure it out eventually, maybe not. The
           | point is, right now the technology has fundamental
           | limitations, and you are better off knowing how to work
           | around them, rather than blindly trusting the black box.
        
             | gametorch wrote:
             | Yeah exactly.
             | 
             | I think it's a combination of
             | 
             | 1) wrong level of granularity in prompting
             | 
             | 2) lack of engineering experience
             | 
             | 3) autistic rigidity regarding a single hallucination
             | throwing the whole experience off
             | 
             | 4) subconscious anxiety over the threat to their jerbs
             | 
             | 5) unnecessary guilt over going against the tide; anything
             | pro AI gets heavily downvoted on Reddit and is, at best,
             | controversial as hell here
             | 
             | I, for one, have shipped like literally a product per day
             | for the last month and it's amazing. Literally 2,000,000+
             | impressions, paying users, almost 100 sign ups across the
             | various products. I am fucking flying. Hit the front page
             | of Reddit and HN countless times in the last month.
             | 
             | Idk if I break down the prompts better or what. But this is
             | production grade shit and I don't even remember the last
             | time I wrote more than two consecutive lines of code.
        
               | sysmax wrote:
               | If you are launching one product per day, you are using
               | LLMs to convert unrefined ideas into proof-of-concept
                | prototypes. That works really well; that's the kind of
               | work that nobody should be doing by hand anymore.
               | 
                | Except, not all work is like that. Fast-forward to
                | product version 2.34, where a particular customer needs a
                | change that could break 5000 other customers because of
                | non-trivial dependencies between different parts of the
                | design, and you will either have humans rewrite the
                | entire thing or watch it collapse under its own weight.
               | 
               | But out of 100 products launched on the market, only 1 or
               | 2 will ever reach that stage, and having 100 LLM
               | prototypes followed by 2 thoughtful redesigns is way
               | better than seeing 98 human-made products die.
        
         | soulofmischief wrote:
         | > AI can very efficiently apply common patterns to vast amounts
         | of code, but it has no inherent "idea" of what it's doing.
         | 
         | AI stands for Artificial Intelligence. There are no inherent
         | limits around what AI can and can't do or comprehend. What you
          | are specifically critiquing is the capability of _today's_
          | popular models, specifically _transformer models_, and
         | accompanying tooling. This is a rapidly evolving landscape, and
         | your assertions might no longer be relevant in a month, much
         | less a year or five years. In fact, your criticism might not
          | even be relevant between _current models_. It's one thing to
         | speak about idiosyncrasies between models, but any broad
         | conclusions drawn outside of a comprehensive multi-model review
          | with strict procedure and controls are to be taken with a
         | massive grain of salt, and one should be careful to avoid
         | authoritative language about capabilities.
         | 
         | It would be useful to be precise in what you are critiquing, so
         | that the critique actually has merit and applicability. Even
         | saying "LLM" is a misnomer, as modern transformer models are
         | multi-modal and trained on much more than just textual
         | language.
        
       | hnthrow90348765 wrote:
        | My guess is they will settle on 2x the productivity of a
        | before-AI developer as the new skill floor, but then _not_ take
        | a look at how long meetings and other processes take.
       | 
       | Why not look at Bob who takes like 2 weeks to write tickets on
       | what they actually want in a feature? Or Alice who's really slow
        | getting Figma designs done and validated? How nice would it be
        | to have a "someone's bothered a developer" metric, with the
        | business seeking to get that to zero and talking as loudly about
        | it as they have about developers?
        
       | mycocola wrote:
       | I think most programmers would agree that thinking represents the
       | majority of our time. Writing code is no different than writing
       | down your thoughts, and that process in itself can be immensely
       | productive -- it can spark new ideas, grant epiphanies, or take
       | you in an entirely new direction altogether. Writing is thinking.
       | 
       | I think an over-reliance, or perhaps any reliance, on AI tools
       | will turn good programmers into slop factories, as they
       | consistently skip over a vital part of creating high-quality
       | software.
       | 
       | You could argue that the prompt == code, but then you are adding
       | an intermediary step between you and the code, and something will
       | always be lost in translation.
       | 
       | I'd say just write the code.
        
         | sothatsit wrote:
          | I think this misses the point. You're right that programmers
          | still need to think. But you're wrong in thinking that AI does
          | not help with that.
         | 
         | With AI, instead of starting with zero and building up, you can
         | start with a result and iterate on it straight away. This
         | process really shines when you have a good idea of what you
         | want to do, and how you want it implemented. In these cases, it
         | is really easy to review the code, because you knew what you
         | wanted it to look like. And so, it lets me implement some basic
         | features in 15 minutes instead of an hour. This is awesome.
         | 
         | For more complex ideas, AI can also be a great idea sparring
         | partner. Claude Code can take a paragraph or two from me, and
         | then generate a 200-800 line planning document fleshing out all
         | the details. That document: 1) helps me to quickly spot
         | roadblocks using my own knowledge, and 2) helps me iterate
         | quickly in the design space. This lets me spend more time
         | thinking about the design of the system. And Claude 4 Opus is
         | near-perfect at taking one of these big planning specifications
         | and implementing it, because the feature is so well specified.
         | 
         | So, the reality is that AI opens up new possible workflows.
         | They aren't always appropriate. Sometimes the process of
         | writing the code yourself and iterating on it is important to
         | helping you build your mental model of a piece of
         | functionality. But a lot of the time, there's no mystery in
         | what I want to write. And in these cases, AI is brilliant at
         | speeding up design and implementation.
        
           | mycocola wrote:
           | Based on your workflow, I think there is considerable risk of
           | you being wooed by AI into believing what you are doing is
           | worthwhile. The plan AI offers is coherent, specific, it
           | sounds good. It's validation. Sugar.
        
       | lunarboy wrote:
        | It was only 2 years ago that we were still talking about GPTs
        | making up complete nonsense, and now hallucinations are almost
        | gone from
       | the discussions. I assume it will get even better, but I also
       | think there is an inherent plateau. Just like how machines solved
       | mass manufacturing work, but we still have factory workers and
        | overseers. Also, "manually" hand-crafted pieces like fashion and
        | watches continue to be the most expensive luxury goods. So I
        | don't believe good design, architecture, and consulting will
        | ever be fully replaced.
        
         | jashmatthews wrote:
          | Hallucinations are now plausibly wrong, which is in some ways
          | harder to deal with. GPT-4.1 still generates Rust with imaginary
          | crates and says "your tests passed, we can now move on" about a
          | completely failed test run.
        
       | OJFord wrote:
       | This seems to be an AI summary of a (not linked) podcast.
        
       | agentultra wrote:
       | ... because programming languages are the right level of
       | precision for specifying a program you want. Natural language
       | isn't it. Of course you need to review and edit what it
       | generates. Of course it's often easier to make the change
       | yourself instead of describing how to make the change.
       | 
       | I wonder if the independent studies that show Copilot increasing
       | the rate of errors in software have anything to do with this less
       | bold attitude. Most people selling AI are predicting the
       | obsolescence of human authors.
        
         | soulofmischief wrote:
         | Transformers can be used to automate testing, create deeper and
          | broader specifications, accelerate greenfield projects, rapidly
         | and precisely expand a developer's knowledge as needed,
         | navigate unfamiliar APIs without relying on reference, build
         | out initial features, do code review and so much more.
         | 
         | Even if code is the right medium for specifying a program,
         | transformers act as an automated interface between that medium
         | and natural language. Modern high-end transformers have no
         | problem producing code, while benefiting from a wealth of
         | knowledge that far surpasses any individual.
         | 
         | > Most people selling AI are predicting the obsolescence of
         | human authors.
         | 
         | It's entirely possible that we do become obsolete for a wide
         | variety of programming domains. That's simply a reality, just
         | as weavers saw massive layoffs in the wake of the automated
         | loom, or scribes lost work after the printing press, or human
         | calculators became pointless after high-precision calculators
         | became commonplace.
         | 
         | This replacement might not happen tomorrow, or next year, or
         | even in the next decade, but it's clear that we are able to
         | build capable models. What remains to be done is R&D around
         | things like hallucinations, accuracy, affordability, etc. as
         | well as tooling and infrastructure built around this new
         | paradigm. But the cat's out of the bag, and we are not
         | returning to a paradigm that doesn't involve intelligent
         | automation in our daily work; programming is _literally_ about
         | automating things and transformers are a massive forward step.
         | 
          | That doesn't really mean anything, though; you can still be as
         | involved in your programming work as you'd like. Whether you
         | can find paid, professional work depends on your domain, skill
         | level and compensation preferences. But you can always program
         | for fun or personal projects, and decide how much or how little
         | automation you use. But I will recommend that you take these
         | tools seriously, and that you aren't too dismissive, or you
         | could find yourself left behind in a rapidly evolving
         | landscape, similarly to the advent of personal computing and
         | the internet.
        
         | JoeOfTexas wrote:
          | Doesn't AI have diminishing returns on its pseudo-creativity?
          | Throw all the training output of LLMs into a circle. If all
         | input comes from other LLM output, the circle never grows.
         | Humans constantly step outside the circle.
         | 
          | Perhaps LLMs can be modified to step outside the circle, but as
         | of today, it would be akin to monkeys typing.
        
       | randomNumber7 wrote:
        | Code monkeys who don't understand the limits of LLMs and can't
        | solve problems where the LLM fails are not needed in the world of
       | tomorrow.
       | 
       | Why wouldn't your boss ask ChatGPT directly?
        
       | exabrial wrote:
       | Amazingly, so does air and water. What AI salesman could have
       | predicted this?
        
       | guluarte wrote:
       | AI is good for boilerplate, suggestions, nothing more.
        
         | johnisgood wrote:
         | For you, perhaps.
        
       | layer8 wrote:
       | One of the most useful properties of computers is that they
       | enable reliable, eminently reproducible automation. Formal
        | languages (like programming languages) not only allow one to
        | specify the desired automation unambiguously, to the utmost level
       | of precision, they also allow humans to reason about the
       | automation with precision and confidence. Natural language is a
       | poor substitute for that. The ground truth of programs will
       | always be the code, and if humans want to precisely control what
       | a program does, they'll be best served by understanding,
       | manipulating, and reasoning about the code.
        
       | swyx wrote:
       | > In an appearance on "The MAD Podcast with Matt Turck," Dohmke
       | said that
       | 
       | > Source: The Times of India
       | 
       | what in the recycled content is this trash?
        
       | dboreham wrote:
       | That's him out the Illuminati then.
        
       | lawgimenez wrote:
       | Not gonna lie, first time I've heard of manual coding.
        
       | another_twist wrote:
        | I think these are coordinated posts by Microsoft execs. First
        | their director of product, now this. It's like they're trying to
        | calm the auto-coding hype until they catch up, and thus keep
        | OpenAI from running away.
        
       | taysix wrote:
       | I had a fun result the other day from Claude. I opened a script
       | in Zed and asked it to "fix the error on line 71". Claude happily
       | went and fixed the error on line 91....
       | 
        | 1. There was no error on line 91; it did some inconsequential
        | formatting on that line.
        | 
        | 2. More importantly, it just ignored the very specific line I
        | told it to go to. It's like I was playing telephone with the
        | LLM, which felt so strange with text-based communication.
       | 
        | This was me trying to get better at using the LLM while coding
        | and seeing if I could "one-shot" some very simple things. Of
        | course, doing this _very_ tiny fix myself would have been
        | faster. It just felt weird, and it reinforces the idea that the
        | LLM isn't actually thinking at all.
        
         | klysm wrote:
         | LLMs probably have bad awareness of line numbers
        
           | mcintyre1994 wrote:
           | I suspect if OP highlighted line 71 and added it to chat and
           | said fix the error, they'd get a much better response. I
           | assume Cursor could create a tool to help it interpret line
           | numbers, but that's not how they expect you to use it really.
        
         | senko wrote:
         | > This was me trying to get better at using the LLM while
         | coding
         | 
         | And now you've learned that LLMs can't count lines. Next time,
         | try asking it to "fix the error in function XYZ" or copy/paste
         | the line in question, and see if you get better results.
         | 
         | > reinforces this idea that the LLM isn't actually thinking at
         | all.
         | 
         | Of course it's not thinking, how could it? It's just a (rather
         | big) equation.
        
         | toephu2 wrote:
         | Sounds like operator error to me.
         | 
         | You need to give LLMs context. Line number isn't good context.
        
       | klysm wrote:
        | CEOs are possibly the last people you should listen to on any
        | given subject.
        
       ___________________________________________________________________
       (page generated 2025-06-23 23:00 UTC)