[HN Gopher] Tips for programmers to stay ahead of generative AI
       ___________________________________________________________________
        
       Tips for programmers to stay ahead of generative AI
        
       Author : mfiguiere
       Score  : 104 points
       Date   : 2023-07-04 13:17 UTC (9 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | FpUser wrote:
        | I do not need to "survive" the ChatGPT world. It actually
        | helps me. Also I am not sure what "coder" means exactly.
        | Personally I design and implement software products (and
        | sometimes other types), and "coding" as in writing the actual
        | code is the least of my worries.
        
       | OwenFM wrote:
       | If people get left behind, it mightn't be such a bad thing - the
       | cheaper software will lower the cost of living, making UBI more
       | feasible.
       | 
        | The glut of unemployed people will hopefully popularise UBI
        | rather than a scramble back onto the treadmill of GDP
        | maximisation, exploiting each other and destroying our planet.
       | 
       | https://nonhuman.party/post/replacing_capitalism/
        
         | skeeter2020 wrote:
          | Interesting take, ignoring the fact that there will be far
          | fewer people to pay into your magical UBI pool. How do you
          | reconcile a huge glut of unemployed people with the resources
          | to hand out extensive welfare benefits?
        
       | boringuser2 wrote:
       | GPT-4 does a lot of my work at this point.
       | 
       | I instruct it what to do, and it writes the code.
       | 
       | Is anyone else brave enough to admit it?
        
       | manicennui wrote:
       | It is difficult to take pieces like this seriously. "AI" is not
       | ahead of people who are competent. And you should be worried if
       | something that produces semi-correct boilerplate is ahead of you.
        
       | ape4 wrote:
       | It seems similar to off-shoring (a while ago). People in another
       | country would bang together tons of inelegant code that somebody
       | else had to check.
        
         | discreteevent wrote:
         | > that somebody else had to check
         | 
         | Or eventually just throw out and re-write:
         | 
         | -------------------------
         | 
         | blibble 3 months ago
         | 
          | the result of this will be similar to hiring infosys:
          | hundreds of thousands of lines of buggy, incomprehensible
          | boilerplate that doesn't work on anything but the easy cases
         | 
         | then you have to rip the entire thing apart and start again
         | with people that know what they're doing
         | 
         | --------------------------
         | 
         | https://news.ycombinator.com/item?id=35265921
        
         | rqtwteye wrote:
         | That comparison makes sense. And a lot of management will
         | probably use AI because it's "cheaper" although the results
         | aren't very good.
        
         | antifa wrote:
         | It'll at least look a lot nicer though
        
       | aksss wrote:
        | This title sounds like "How writers can survive - and thrive -
        | in a spell-check world". I imagine it will sound absurd to you
        | in time, if it doesn't already.
        
       | oofsa wrote:
       | The current AI may improve coder performance by only 5%, but it
       | can improve non-coders' learning speed by 1000%.
       | 
       | Learning to code has become significantly easier because of
       | ChatGPT, and many university students are already using it for
       | learning. Not only can they let ChatGPT write boilerplate code,
       | but they can also let ChatGPT write comments for code snippets
       | they don't understand and explain unfamiliar syntax.
       | 
       | I wonder if coders can survive in a world where more and more
       | people have coding skills. Edit: "majority" was not a good
       | wording
        
         | dt3ft wrote:
          | The majority having "coding" skills is not happening. Writing
          | code is boring for the majority. Why write code when you
          | could be playing games and having fun on TikTok? There is
          | your answer. We love writing code because we are nerds who
          | love to solve complex problems. New kids who are not
          | interested in writing code, but use shortcuts to get code
          | written for them by ChatGPT without understanding it, are the
          | least of our concerns. Let them have at it, but once they see
          | the monoliths we tackle at work, they will burn out on the
          | spot. Cobol? Still alive and well. For a reason.
        
         | zerodensity wrote:
          | I don't know about this. From my point of view, to learn how
          | to program you need to actually... program.
          | 
          | Trying stuff over and over again in different variations,
          | maybe using different languages, dealing with all the errors
          | and the frustration, and overcoming them.
          | 
          | I think that using ChatGPT or similar LLMs to learn how to
          | code is like using Midjourney to learn how to draw.
          | 
          | Don't get me wrong, you might be able to produce results
          | fast, but taking shortcuts is not going to speed up
          | understanding.
        
         | analog31 wrote:
         | Imagine an organization that once depended on 10 million lines
         | of code, now depends on 10 billion lines of code.
        
       | fknorangesite wrote:
       | > as they tend to hallucinate and produce inaccurate or incorrect
       | code.
       | 
       | Jesus, these things really _are_ coming for my job.
        
       | ewuhic wrote:
        | This article touches on LLMs mostly for code generation; I,
        | however, would be more interested in visuals.
       | 
       | What are the good resources to learn about image editing AI
       | tools, prompts and techniques?
       | 
       | My understanding is pretty limited, and correct me if I'm wrong,
       | but like one would be using Stable Diffusion or Midjourney, and
       | for a "professional" tool - Photoshop with official AI plug-ins?
        
         | ilaksh wrote:
         | Check out Playground.ai, Leonardo.ai, Adobe Firefly.
        
       | samsk wrote:
        | I'll start to be afraid when ChatGPT or anything similar can
        | take a vague Jira issue, simulate the described bug and then
        | make a fix for it ;-)
        | 
        | But seriously, what developers do most of the time is
        | maintenance: they spend a day searching for a bug, just to
        | write maybe one line of code to fix it.
        
         | onion2k wrote:
         | If GPT can improve the code that's written in the first place
         | then a lot of that work would just go away.
         | 
         | I _strongly_ suspect that it will. There are whole classes of
         | bugs that occur because some work is boring  'not quite copy
         | paste' work that devs just don't like doing, and don't pay any
         | attention to when they're doing it. Linters and syntax
         | highlighters already catch a ton of those issues before they
         | make it to production, and GPT will make the rest much less
         | likely to happen.
         | 
          | Maintenance is one area where GPT will also _shine_, because
         | it's 'just' updating some code to do the same thing, so using
         | the existing code as a set of tokens with a prompt like 'update
         | this code to work with v2 of library X' will be extremely
         | effective. It'll be like having something write a codemod for
         | you.
         | 
         | The future is bright. We'll get a lot more productive stuff
         | done, and spend a lot less time on boring grunt work.
        
         | pessimizer wrote:
         | It will do that now, it will just be buggy and wrong. But
         | that's obviously just a stage that you don't even need more AI
         | to fix. Tell it how to write tests, tell it how to justify
         | those tests and how to make sure they make sense, tell it how
         | to look for the bug, tell it how to attempt a fix for the bug,
         | tell it to evaluate whether that fix satisfies the bug report,
         | tell it how to read the errors after it tries the fix, and to
         | iterate on that fix or to abandon it and try something else,
         | when it reaches all standards for success, tell it how to
         | report what it did on the ticket.
         | 
         | If one doubts that all this can be done by an LLM, use a
         | different LLM for each step. Use committees of LLMs that vote
         | on proposals made by other LLMs.
         | 
         | I don't know, I feel like the sky's the limit, especially if
         | they can be made significantly more power efficient. I think
         | that if they never get any better than they are now, and they
          | _just_ get more power efficient, they'll be useful for almost
         | anything.
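          | 
          | A minimal sketch of that kind of loop, assuming the OpenAI
          | Python client and a hypothetical run_tests() harness you
          | supply yourself:
          | 
          |     import openai  # assumes OPENAI_API_KEY is set
          | 
          |     def ask(prompt):
          |         # one step = one LLM call; a different model
          |         # could be used per step, as suggested above
          |         r = openai.ChatCompletion.create(
          |             model="gpt-4",
          |             messages=[{"role": "user", "content": prompt}])
          |         return r["choices"][0]["message"]["content"]
          | 
          |     def fix_bug(report, code, run_tests, tries=5):
          |         tests = ask("Write tests reproducing this bug:\n"
          |                     + report + "\n\nCode:\n" + code)
          |         feedback = ""
          |         for _ in range(tries):
          |             code = ask("Make these tests pass:\n" + tests
          |                        + feedback + "\n\nCode:\n" + code)
          |             ok, errors = run_tests(code, tests)
          |             if ok:  # report back on the ticket
          |                 return code, ask("Summarize the fix:\n" + code)
          |             # feed the errors back in and iterate
          |             feedback = "\n\nLast try failed with:\n" + errors
          |         return None, "abandoned - escalate to a human"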
        
       | impissedoff1 wrote:
       | I sure picked the worst time to be unemployed and look at a
       | coding career again after a decade
        
         | ilaksh wrote:
          | Well, the thing is that if you focus on it for a week or two
          | you can pick up useful skills. Just play around with ChatGPT,
          | for example for generating SQL for a particular table, or
          | follow a tutorial using llamaindex for a "chat with
          | documents" thing. Try out a Stable Diffusion API or something
          | using replicate.com.
         | 
         | There are a ton of people looking for help with generative AI
         | and you can be useful if you just play around with it for a few
         | weeks, because a lot of them have no idea about the basics. If
         | you are willing to be underpaid there is no need to be
         | unemployed -- just spend a few weeks studying and then go on
         | Upwork.
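          | 
          | For instance, the SQL experiment can be as small as this
          | (a sketch; the table is made up):
          | 
          |     import openai  # pip install openai, set OPENAI_API_KEY
          | 
          |     schema = "orders(id, customer_id, total, created_at)"
          |     q = ("Given the table " + schema + ", write SQL for "
          |          "revenue per customer over the last 30 days.")
          |     r = openai.ChatCompletion.create(
          |         model="gpt-3.5-turbo",
          |         messages=[{"role": "user", "content": q}])
          |     print(r["choices"][0]["message"]["content"])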
        
       | oldstrangers wrote:
       | My fear isn't that I'll be replaced, it's the technology becoming
       | so good that it'll be kept far out of reach of the common person.
       | I genuinely believe OpenAI knows what a GPT 5+ type world looks
       | like, and they're probably having a lot of debate on how best to
       | monetize it. They could practically charge anything in the world
       | for it assuming it still undercuts the cost of hiring a human.
       | One Nvidia super cluster running a local instance of GPT 5 at
       | $250k a year doing the work of 30 humans.
        
         | firebirdn99 wrote:
          | This is an underrated point. Altman talks about democratizing
          | this technology. But if the leading LLMs concentrate at a few
          | companies then, unless governmentally mandated, they could
          | keep guardrails that prevent regular folks from accessing it,
          | and there is also the risk of regulatory capture.
        
           | pessimizer wrote:
           | "democratizing this technology" = working closely with
           | government
           | 
           | > unless governmentally mandated, they could keep guardrails
            | that prevent regular folks from accessing it
           | 
           | That's not what's going to happen. Government will mandate
           | that regular folks can't access it. Government will also do
           | its best to make sure LLMs concentrate at a few companies,
           | which it will often refer to as "partners."
        
       | irrational wrote:
       | I work for a Fortune 100 company. Recently an email was sent to
       | all 100,000 employees saying that nobody was allowed to use
       | DALL-E 2, ChatGPT, Codex, Stable Diffusion, Midjourney,
        | Microsoft's Copilot, GitHub Copilot, etc., due to concerns
       | about those tools using other people's IP (meaning our company
       | might end up illegally using their IP) or the potential that the
       | tools might get a hold of our IP through our use of the tools and
       | share it with others. I'm not terribly worried about generative
       | AI taking my job when none of the thousands of programmers at my
       | company are allowed to use it.
        
       | Swizec wrote:
       | How to survive in an AI world? Shift your mindset from "I write
       | code" to "I deliver value".
       | 
       | Now instead of AI replacing you, it's helping you get more done
       | (in theory). Everyone wins.
       | 
       | Staking your career on being a pair of hired hands that executes
       | somebody else's exacting specifications was always a long-term
       | losing proposition. And we are very _very_ far away from AI being
       | able to ask the right questions to help business stakeholders and
       | customers clearly express what they need.
        
         | varispeed wrote:
         | I also find it bizarre that so many people feel precious about
          | the code. They have too much ego attached to what they type
          | in, and the AI kind of makes them feel insecure.
          | 
          | I couldn't care less whether the code is written by me, the
          | AI or someone I told to write it.
         | 
         | What matters is the value it provides.
        
           | laputan_machine wrote:
           | > I also find it bizarre that so many people feel precious
           | about the code.
           | 
           | Would you find it bizarre that a joiner feels precious about
           | not _just_ the cabinet they made (value), but _how_ they made
           | it? The joints they used, the process they went through, the
            | wood (i.e., the _code_)?
           | 
           | A plumber, electrician, architect, designer, programmer -- we
           | take pride in our skills.
           | 
            | Craftsmanship is a virtue, not a vice.
        
             | varispeed wrote:
             | Your analogy suggests that the code is the final product,
             | akin to a cabinet or a building, something tangible that
             | can be appreciated for its craftsmanship. In some
             | instances, like open-source software, the code might indeed
             | be viewed this way, but in most cases, it's not the code
             | itself that end-users appreciate, it's the functionality it
             | provides.
             | 
             | To refine your analogy, the code isn't the cabinet - it's
             | more like the blueprint or the process used to create the
             | cabinet. The user doesn't care if a hand saw or a power saw
             | was used, as long as the cabinet is well-crafted and
             | functional. Similarly, end-users of software don't see or
             | appreciate the code. They only interact with the user
             | interface and the functionality it provides. As a result,
             | being "precious" about the code can sometimes be more about
             | personal ego and less about delivering value to the end-
             | user.
             | 
             | In terms of pride in craftsmanship, of course, it's crucial
             | to take pride in one's work. However, this doesn't mean
             | that one should be resistant to using better tools when
             | they become available. The introduction of AI in coding
             | doesn't negate craftsmanship - instead, it's an opportunity
             | to refine it and make it more efficient. It's like a
             | carpenter transitioning from using manual tools to using
             | power tools. The carpenter still needs knowledge, skill,
             | and an eye for detail to create a good product, but now
             | they can do it more efficiently.
        
               | [deleted]
        
             | ramesh31 wrote:
              | >Craftsmanship is a virtue, not a vice.
             | 
              | Exactly, ultimately we are _craftsmen_, not artisans. They
              | are two very distinct things. The difference being that the
              | value of our output is directly tied to its functional
             | utility, not any sense of aesthetic or artistic expression.
             | You can take pride in the means used to achieve an end, but
             | they ultimately must be superseded by more efficient
             | techniques or you just become an artisan using traditional
             | tools, and not a craftsman who uses the industry standard.
        
               | dragonwriter wrote:
               | > Exactly, ultimately we are craftsmen, not artisans.
               | They are two very distinct things. The difference being
                | that the value of our output is directly tied to its
               | functional utility, not any sense of aesthetic or
               | artistic expression.
               | 
               | That's the usual definition of an _artisan_ as opposed to
               | an _artist_. (Artisan vs. craftsman is a fuzzier
               | distinction.)
        
               | ramesh31 wrote:
               | An artisan uses artistic techniques and their own
               | intuitive judgement to produce a practical good. Think
               | bakers. No one would be upset that their local bakery was
               | using tools and techniques from a thousand years ago to
               | make their bread. But a carpenter, for example, is a
               | craftsman. He may take an aesthetic pride in the finished
               | result of his work, but it _must_ match all technical
               | specifications and building codes. And the customer would
               | be pretty upset to see them using wood planes and hand
               | saws to frame their house.
        
               | randcraw wrote:
               | Craftsmen are individual contributors. When you
               | coordinate others, you're no longer an IC, you're a
               | foreman, a boss. Coding with LLMs is about managing the
               | contributions of other ICs. It's no different from
               | coordinating low-level human coders who simply implement
               | the design that was given them.
               | 
               | If that's the kind of 'craftsmanship' you enjoy, great.
               | To me, this new model of 'bionic coding' feels a lot like
               | factory work, where my job is to keep my team from
               | falling behind the assembly line.
               | 
               | BTW, I've worked factory lines as both IC and foreman. In
               | either role, that life sucks.
        
           | currant_ wrote:
           | I feel like there's probably an unspoken division between
           | people who enjoy building systems (providing value in your
           | terms) and people who enjoy more of the detailed coding. The
           | later group had a good thing going where they were extremely
           | well compensated for doing an activity they enjoyed, so I
            | think it makes sense for them to be a bit distressed.
        
           | AnimalMuppet wrote:
           | When assemblers were introduced, there were programmers who
           | complained, because it ruined the intimacy of their
           | communication with the computer that they had with octal.
           | 
           | They meant it, too. Noticing that the "or" instruction
           | differed only by one bit from the "subtract" instruction told
           | you something about the probable inner workings of the CPU.
           | It just turned out that it didn't matter - knowing that level
           | of detail didn't help you write code nearly as much as it
           | helped to be able to say "OR" instead of "032".
        
         | sublinear wrote:
         | > AI being able to ask the right questions to help business
         | stakeholders and customers clearly express what they need
         | 
         | This is where the biggest impact will be: better requirements.
         | I see no effect on writing code because humans already know how
         | to write code just fine, but so many of the people running the
         | business are absolutely clueless as to what it needs.
        
       | oofta-boofta wrote:
       | [dead]
        
       | sublinear wrote:
       | "You will have to worry about people who are using AI replacing
       | you"
       | 
       | Oh yes. I should be _very very_ afraid of the flying copypasta
       | monster. As if my productivity is reduced to the mere rate at
        | which I can write code! What's even project planning? Why even
       | have meetings if it's all down to "is it done yet"? Who works at
       | these coding sweatshops that are so afraid of AI? If they get
       | fired and find a better place to work, that's a win.
        
       | medler wrote:
       | I use LLM-based autocomplete in my IDE, and it's not taking away
       | my job unless/until it improves by multiple orders of magnitude.
       | It's good at filling in boilerplate, but even for that I have to
       | carefully check its output because it can make little errors even
       | when I feel like what I want should be obvious. The article is
       | absolutely correct in saying you have to be critical of its
       | output.
       | 
       | I would say it improves my productivity by maybe 5%, which is an
       | incredible achievement. I'm already getting to where coding
       | without it feels very tedious.
        
         | fnordpiglet wrote:
          | Given that AI (today) has no direct agency and can't create
          | anything unless directly prompted, and that engineering is
          | largely domain discovery and resolving unforeseen edges in a
          | domain, I don't think we are going to see a time where
          | generative AI alone is able to be more than an assistant.
          | It'll likely improve, but since it can only react to what it
          | is given/told/fed and inherently can't innovate, create or
          | discover - despite the illusion that it might, born of the
          | user's ignorance of the details the AI can produce - it'll
          | be an increasingly powerful adjunct to increasingly capable
          | engineers. The problems we solve will be more interesting
          | and we will produce better software faster, but I've never
          | seen the world as lacking in problems to solve, rather as
          | lacking the capacity to solve them well or quickly enough
          | given the iterative time it takes to develop software. I
          | think this current trend of generative AI will help improve
          | that situation, but it will likely make software engineers
          | _even more in demand_ as the possible uses of software
          | become more ubiquitous and the per unit cost of development
          | goes down.
        
         | dave333 wrote:
          | The best way to check LLM output is to make it write its own
          | tests and do TDD. Obviously someone has to check the tests,
          | but that is a 1%-of-the-effort problem.
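          | 
          | A sketch of what that flow looks like (the spec and file
          | names are made up):
          | 
          |     import subprocess
          |     import openai
          | 
          |     def ask(prompt):
          |         r = openai.ChatCompletion.create(
          |             model="gpt-4",
          |             messages=[{"role": "user", "content": prompt}])
          |         return r["choices"][0]["message"]["content"]
          | 
          |     spec = "slugify(title): return a URL-safe slug"
          |     tests = ask("Write pytest tests for: " + spec +
          |                 "\nReply with code only.")
          |     # the human review happens here, on the tests
          |     open("test_slugify.py", "w").write(tests)
          |     code = ask("Write code passing these tests:\n" + tests +
          |                "\nReply with code only.")
          |     open("slugify.py", "w").write(code)
          |     subprocess.run(["pytest", "test_slugify.py"])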
        
         | jmiskovic wrote:
         | "Very tedious without it" doesn't sound like just 5%
         | improvement?
         | 
         | I've started developing in a new language and I can hardly do
         | any work without the LLM assistance, the friction is just too
         | high. Even when auto-competitions are completely wrong they
         | still get the ball rolling, it's so much easier to fix the
         | nicely formatted code than to write from scratch. In my case
         | the improvement is vast, a difference from slacking off and
         | actually being productive.
        
         | firebirdn99 wrote:
          | Agreed 100%. It's helpful at filling out some functions,
          | maybe, if you name them correctly, and at boilerplate code.
          | Eventually they will get better, because these things get
          | orders of magnitude better with orders of magnitude more
          | scale. Society has to do something about all the jobs at
          | that point, but we'll hopefully get a sense of how close or
          | far that is, with ChatGPT 5 and the next versions coming up.
        
           | Filligree wrote:
           | The biggest benefit, I've found, is it makes me comment my
           | code. If I can make the AI understand what I want, then it
           | turns out that three months later I'll also be able to
           | understand the code.
        
             | moonchrome wrote:
             | That's the worst part about generative AI IMO - it makes
             | writing new code faster - it barely helps with editing
             | existing code. So when someone eventually updates the code
             | and forgets to update the comments I wouldn't be surprised
             | if the misleading comments made AI hallucinate.
        
               | coliveira wrote:
               | I believe that AI will get so good at creating new code
                | that a lot of existing libraries will be left unused. What
               | is the point of using lots of libraries if AI can
               | generate the code we need directly? The AI will be the
               | library itself, and the generated code will embed the
               | knowledge about doing lots of things for which we used
               | libraries.
        
               | pydry wrote:
               | >What is the point of using lots of libraries if AI can
               | generate the code we need directly?
               | 
                | They've been debugged.
        
         | arrowsmith wrote:
         | I find it increases my productivity about 5-10% when working
         | with the technologies I'm the most familiar with and use
         | regularly (Elixir, Phoenix, JavaScript, general web dev.) But
         | when I'm doing something unfamiliar and new, it's more like
         | 90%. It's incredible.
         | 
         | Recently at work, for example, I've been setting up a bunch of
         | stuff with some new technologies and libraries that I'd never
         | really used before. Without ChatGPT I'd have spent hours if not
         | days poring through tedious documentation and outdated
         | tutorials while trying to hack something together in an
         | agonising process of trial and error. But ChatGPT gave me a
         | fantastic proof-of-concept app that has everything I needed to
         | get started. It's been enormously helpful and I'm convinced it
         | saved me days of work. This technology is miraculous.
         | 
         | As for my job security... well, I think I'm safe for now;
         | ChatGPT sped me up in this instance but the generated app still
         | needs a skilled programmer to edit it, test it and deploy it.
         | 
         | On the other hand I _am_ slightly concerned that ChatGPT will
         | destroy my side income from selling programming courses... so
          | if you're a Rails developer who wants to learn Elixir and
         | Phoenix, please check out my course _Phoenix on Rails_ before
          | we're both replaced by robots: PhoenixOnRails.com
         | 
          | (Sorry for the self-promotion, but the code ELIXIRFORUM will
         | give a $10 discount.)
        
           | wouldbecouldbe wrote:
            | The thing is the hallucinations. I also wasted a few hours
            | trying to work on solutions with GPT where it just kept
            | making up parameters and random functions.
        
             | tehwebguy wrote:
              | It referenced a made-up function I needed (that should
              | probably exist lol) in BrightScript. The letdown after
              | realizing as much was painful.
        
               | pydry wrote:
               | It did the same thing to me with docker compose the other
               | day.
               | 
                | For features that probably should exist but don't, it does
               | a really good job of sending you on a wild goose chase.
        
             | yieldcrv wrote:
             | Yeah it can give you something that works out of the box,
             | but fixing it requires even more effort
             | 
             | Better to ask it for a bunch of small things and piece them
             | together
        
               | mattkevan wrote:
               | Yes. Start small and build up.
               | 
               | I've found it to be very forgetful and have to work
               | function-by-function, giving it the current code as part
               | of the next prompt. Otherwise it randomly changes class
               | names, invents new bits that weren't there before or
               | forgets entire chunks of functionality.
               | 
               | It's a good discipline as I have to work out exactly what
               | I want to achieve first and then build it up piece by
               | piece. A great way to learn a new framework or language.
               | 
               | It also sometimes picks convoluted ways of doing things,
               | so regularly asking whether there's a simpler way of
               | doing things can be useful.
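                | 
                | That workflow is essentially a loop that pastes the
                | whole current version back into every prompt (a
                | sketch; the file name is made up):
                | 
                |     import openai
                | 
                |     def refine(code, instruction):
                |         # resend all the current code so it can't
                |         # forget class names or drop functionality
                |         msg = ("Current code:\n" + code + "\n\n" +
                |                instruction + " Reply with code only.")
                |         r = openai.ChatCompletion.create(
                |             model="gpt-3.5-turbo",
                |             messages=[{"role": "user",
                |                        "content": msg}])
                |         return r["choices"][0]["message"]["content"]
                | 
                |     code = open("app.js").read()
                |     code = refine(code, "Add a search box.")
                |     code = refine(code, "Any simpler way? If so, "
                |                         "simplify.")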
        
         | yieldcrv wrote:
         | It writes my unit tests super fast, and my method comments
         | 
          | It's hard to say if it improves _my_ productivity because I just
         | wouldn't have done those things
         | 
          | But for the overall application I think it's improved a lot
         | because we can implement best practices more consistently and
         | catch regressions due to the aforementioned unit tests and
         | documentation
        
         | AlotOfReading wrote:
         | I've also been using an LLM autocomplete for a few months, and
         | yeah, it's pretty nice. My spouse was able to use it to write
         | an Easter egg into a game while I was doing housework the other
         | day.
        
         | jiggawatts wrote:
         | The only widely available LLM-based autocomplete is GitHub
         | Copilot, which is based on GPT 3.
         | 
         | Notably, it's not GPT 3.5, it's 3.0, which is pretty stupid as
         | far as the state of the art goes.
         | 
         | The upcoming Copilot X will be based on GPT 4, which has
         | "sparks of AGI".
         | 
         | In my experience there is no comparison. GPT 3 is barely good
         | enough for some trivial tab-complete tasks. GPT 4 can do quite
         | complex tasks like generating documentation, useful tests,
         | finding obscure bugs, etc...
        
           | fnordpiglet wrote:
            | LLMs, no matter how clever, have no agency or creativity
            | or ability to innovate or anticipate beyond what they're
            | prompted, etc. It's crucial to realize that LLM chat interfaces
           | disguise the fact they're still completing a prompt. This
           | isn't AGI as AGI requires agency. GPT4/5 or whatever
           | successor might be a key building block, and I suspect we've
           | already discovered the missing elements in classical AI and
           | the challenge will be integration, constraint, feedback, etc,
           | but _nothing_ will make LLMs alone AGI. That shouldn't be
           | surprising. Our brains are composed of many models, some
           | heuristic, some optimizers, some solvers, some constrainers,
           | and some generative. The answer won't be a single magic thing
           | operating in a black box. It'll be an ensemble. We already
           | see this effort beginning with plugins and things like
           | langchain. This is the path forward.
        
             | jiggawatts wrote:
             | > "No agency or creativity or ability to innovate,
             | anticipate beyond what they're prompted, etc."
             | 
             | Sadly, you've just described the majority of the developers
             | I've had to work with recently.
             | 
             | Most have no agency, write boilerplate code with no
             | creativity, need their hand held every step of they way,
             | and won't do anything they're not explicitly ordered to do.
             | 
             | You probably work in an SV startup with a highly skilled
             | workforce. Out there in the real world there are armies of
             | low-skill H1Bs and outsourcers that will soon be replaced
             | with automation.
             | 
             | It's a recurring theme in economics. Outsource to low cost
             | labour, insource with automation, repeat.
        
         | yayitswei wrote:
         | I agree with 5%. That said, I've found rubber duck debugging to
         | be an exceptionally effective use case for ChatGPT. Often it
         | will surprise me by pinpointing the solution outright, but I'll
         | always be making progress by clarifying my own thinking.
        
       | penjelly wrote:
        | The next 5+ years will see software proliferate rapidly as a
        | result of things like ChatGPT and LLM-augmented documentation
        | pages. Building things end to end just got easier and will
        | become more so soon.
        
         | pavel_lishin wrote:
         | > a result of things like chatgpt and llm augmented
         | documentation pages.
         | 
         | You mean like this?
         | 
         | https://github.com/mdn/yari/issues/9208
         | 
         | https://news.ycombinator.com/item?id=36542267
        
         | substation13 wrote:
         | I would love LLMs to write documentation for me, even though I
         | don't trust them with the code.
        
       | Animats wrote:
       | Survival may require getting out of the mainstream. LLMs are
       | going to get really good at stuff that's been done thousands of
       | times and they can train on that data. Like web front end work.
       | 
       | If you're doing industrial embedded work and have an oscilloscope
       | and a logic analyzer on your desk, and spend part of your time
       | going into the plant and working directly with the machinery,
       | you're in better shape.
        
       | al_be_back wrote:
        | With every new comp-language advance or compiler/transpiler
        | etc., the coder's life gets easier - but an engineer's life is
        | broader; it's about Solving Problems that involve: Tech,
        | Design, User and Business. These problems will only get more
        | complex.
        | 
        | I think the Coder's role will become a very niche market,
        | highly expert/specialist. The Engineer's role will grow, very
        | much needing AI to help out, especially with tasks around:
        | Discovery, Mapping/Relating, Projecting/Simulating.
        
       | pedalpete wrote:
       | Perhaps the title of the post has changed as it now reads "How
       | Coders can Survive - and Thrive - in a ChatGPT world".
       | 
        | My initial reaction on seeing the HN post title "Tips for
        | programmers to stay ahead of generative AI" was to think "we
       | don't want to stay ahead, we want to leverage the new
       | capabilities".
       | 
       | Can you imagine a weaver thinking "how can I stay ahead of the
       | loom?" It's crazy to try. Instead, figure out what the new
       | technology enables you to do which you couldn't do before, and
       | leverage that.
        
       | thumbuddy wrote:
       | I'm surviving just fine by not using any LLMs.
        
         | weevil wrote:
         | Post this comment on LinkedIn and see how long it takes for
         | some unqualified tech 'speaker' to tell you you're about to be
         | left in the dirt!
        
           | isanjay wrote:
           | They are called AI bros I think.
        
             | pavel_lishin wrote:
             | aiholes.
        
           | 1270018080 wrote:
           | 12 months ago, they were web3 experts
        
             | impissedoff1 wrote:
             | Hey now, connecting a wallet to your browser by JavaScript
             | and approving limitless permissions you can't see is the
             | future!
        
         | claytongulick wrote:
         | Me too.
         | 
         | I'm not an expert, but I'd say I'm a strongly average vim user.
         | 
         | But even at that skill level with vim, I haven't seen an area
         | where LLMs would increase my velocity.
         | 
         | Quite the opposite. It would completely interrupt my flow to
         | have to constantly stop and do a code review while I'm writing.
         | 
         | With good plugins, templates and macros in vim/vscode -
         | velocity writing code isn't the issue.
         | 
         | The stuff that takes all the time is UX tweaks and reasoning
         | about architecture, business constraints, and the correct level
         | of optimization for the company's maturity.
        
           | gfodor wrote:
           | Have you actually tried it? Copilot + vim for me is faster,
           | and less frustrating tbh, than vim without Copilot. Typing
           | obvious things is a PITA, and in code we type a lot of
           | obvious things.
        
             | lumb63 wrote:
             | The hype around AI coding assistants has recently inspired
             | me to improve my efficiency in writing code. I started
             | using NeoVim instead of vim to get access to LSP, for
             | autocomplete. I've found it actually slows me down because
             | I can type nearly anything faster than the LSP can
             | synthesize a response and I'm able to visually process the
             | options, select one, and input it into the computer. I've
             | never compared against an AI coding assistant, so maybe
             | that's different, but my experience has been that fast
             | typing speed combined with understanding of the task at
             | hand and what code must be written nullifies almost all
             | benefit of a coding assistant.
        
               | gfodor wrote:
               | An AI coding assistant is different, it will write an
               | entire loop, including the body, with proper guards, and
               | so on, in 1-2 seconds.
        
         | relativeMeans wrote:
         | [dead]
        
       | carride wrote:
       | > One of the most integral programming skills continues to be the
       | domain of human coders: problem solving. Analyzing a problem and
       | finding an elegant solution for it is still a highly regarded
       | coding expertise.
        
       | ehutch79 wrote:
       | I mean, right now the only thing at risk is the todo.app
       | industry...
       | 
       | I've yet to see anything maintaining legacy apps, or generating
       | line of business apps with requirements... even simple stuff,
       | like departure needs to be before arrival, etc.
       | 
       | I do see a whole bunch of youtube videos about generating a whole
       | codebase, but it's the kind of stuff that there's a hundred
       | tutorials covering.
        
         | anotherpaulg wrote:
         | My open source command line tool aider [0] is specifically for
         | this use case of working with GPT on an existing code base.
         | 
         | Let me know if you have a chance to try it out.
         | 
         | Here is a chat transcript [1] that illustrates how you can use
         | aider to explore an existing git repo, understand it and then
         | make changes. As another example I needed a new feature in the
         | glow tool and was able to make a PR [2] for it, even though I
         | don't know anything about that codebase or even how to write
         | golang.
         | 
         | [0] https://github.com/paul-gauthier/aider
         | 
         | [1] https://aider.chat/examples/2048-game.html
         | 
         | [2] https://github.com/charmbracelet/glow/pull/502
        
           | amusingimpala75 wrote:
            | This is an instance of the dangers of LLMs. Because you (self-
           | admittedly) know nothing about the language or codebase, you
           | have no idea the semantically correct way to do things, so if
           | GPT tells you to metaphorically jump off a cliff, you won't
           | know that it isn't the right thing to do.
        
       | omgmajk wrote:
       | > Software engineers should be critical of the outputs of large
       | language models, as they tend to hallucinate and produce
       | inaccurate or incorrect code.
       | 
        | This seems to be the case more often than not for certain
        | tasks; anything assembler or C I have ever asked it for has
        | turned out to be at least somewhat wrong, mixing styles and
        | syntax all over the place. I am not afraid that some
        | generative AI will take my job anytime soon.
        
         | intelVISA wrote:
         | Thankfully, as a human I never produce inaccurate or incorrect
         | code.
        
           | tiffanyg wrote:
           | Brings to mind the excellent team behind REAPER - "cockos":
           | 
            | John Schwartz, in the role of "Italics", is reportedly "so
           | highly trained that he can type code that compiles correctly
           | almost 3% of the time"!
           | 
           | Remarkable!
           | 
           | https://www.cockos.com/team.php
           | 
           |  _(I think just about anyone 'serious' learns pretty quickly
           | that compiler errors are in the 'our Lord and Saviour'
            | category. Unusually distributed, generally quite rare, but
           | easily catastrophic runtime 'Heisenbugs' are the fruit of the
           | devil!)_
        
         | varispeed wrote:
         | These things are a matter of writing a correct prompt. Like
         | "use this and that naming convention for variables and keep
         | this and that style".
         | 
         | You can also ask it to write tests for the code so you can
         | verify it is working or add your own additional tests.
         | 
         | Even if the produced code is wrong, it usually takes a few
         | steps to correct it and still saves time.
         | 
         | It is an amazing tool and time saver if you know what you are
         | doing, but helps with research as well. For instance if you
         | want to code something in the domain you know little about, it
         | can give you ideas where to look and then improve your prompts
         | based on that.
         | 
          | In the context of taking anyone's job, it's like saying that a
         | spreadsheet is going to replace accountants.
         | 
         | It's just a tool.
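          | 
          | Concretely, the kind of prompt I mean (a sketch; the
          | function and conventions are just examples):
          | 
          |     import openai
          | 
          |     prompt = """Write a Python function
          |     parse_invoice(text) extracting the invoice number
          |     and the total.
          |     - snake_case names, PEP 8, type hints everywhere
          |     - also write pytest tests covering edge cases"""
          |     r = openai.ChatCompletion.create(
          |         model="gpt-4",
          |         messages=[{"role": "user", "content": prompt}])
          |     print(r["choices"][0]["message"]["content"])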
        
           | weevil wrote:
           | My first step when using an LLM is asking it to produce a
           | test suite for a function, with a load of example inputs and
           | outputs. 9 times out of 10, I've been presented with
           | something incorrect which I need to correct first.
           | 
           | I'm reasonably good at being specific and clear in my
           | directions, but I quickly arrived at the conclusion that LLMs
           | are simply not good at producing accurate code in a way that
           | saves me time.
        
           | DonaldPShimoda wrote:
           | > These things are a matter of writing a correct prompt.
           | 
           | No, they aren't.
           | 
            | ChatGPT doesn't _know_ things. It's just a very fancy
           | predictive text engine. For any given prompt, it will provide
           | a response that is engineered to sound authoritative,
           | regardless of whether any information is correct.
           | 
           | It will summon case law out of the aether when prompted by a
           | lawyer; it will conjure paper titles and author names from
           | thin air when prompted by a researcher; it will certainly
           | generate semantically meaningless code very often. It's
           | absolutely ludicrous to assert that you just need a "better
           | prompt" to counteract these kinds of responses because this
           | is not a bug -- it's literally just how it works.
        
             | Kiro wrote:
             | Read the next sentence after your quote. The point is that
             | you should include code and examples in your prompt
             | (Copilot is so good since it includes the surrounding code
             | and open files in the prompt to understand your specific
             | context), not that you should craft an exceptional "act as
             | rockstar engineer" prompt.
        
               | DonaldPShimoda wrote:
               | I did read it, but the whole premise is flawed due to an
               | apparently incomplete understanding of how LLMs work.
               | Including code samples in your prompt won't have the
               | effect you think it will.
               | 
               | LLMs are trained to produce results that are
               | statistically likely to be syntactically well-formed
               | according to assumptions made about how "language" works.
               | So when you provide code samples, the model incorporates
                | those into the response. But it doesn't have any actual
               | comprehension of what's going on in those code samples,
               | or what any code "means"; it's all just pushing syntax
               | around. So what happens is you end up with responses that
                | are more likely to _look_ like what you want, but there's
                | no guarantee or even necessarily a correlation that
               | the tuned responses will actually produce _meaningfully_
               | good code. This increases the odds of a bug slipping by
               | because, at a glance, it _looked_ correct.
               | 
               | Until LLMs can generate code with proofs of semantic
               | meaning, I don't think it's a good idea to trust them.
               | You're welcome to do as you please, of course, but I
               | would never use them for anything I work on.
        
           | omgmajk wrote:
           | The question is, do you actually save time by coaxing the
            | language model into an answer, or would you save more time
            | by just writing it yourself?
           | 
           | I have this friend who gets obsessed with things very easily
           | and ChatGPT got to him quite a bit. He spent about two months
           | perfecting his AI persona and starts every chat with several
           | hundred words of directions before asking any questions. I
           | find that this also produces the wrong answers many times.
        
       | wiradikusuma wrote:
        | I've been thinking about this... most programmers use
        | frameworks/libraries, e.g. Spring/Hibernate in Java or React in
        | JavaScript. Is there a way to train an LLM to "specialize" in
        | our frameworks/libraries of choice? I assume it would result in
        | faster/smaller/more accurate results?
        
         | ilaksh wrote:
         | Things like Falcon 40B are trainable with something like a LoRA
         | technique but the coding ability is weak. In the near future we
         | will have better open source models. But it is possible to do
         | for certain narrow domains.
         | 
         | Normally with the ChatGPT API you just feed API information or
         | examples into the prompt. One version of GPT-4 has 32k context.
         | The other has 8k and 3.5 has 16k now. So you can give it a lot
         | of useful information and make it work quite a lot better for
         | some specific task. When you pick something like React or
          | Spring in general, depending on what you mean, that might be a
          | huge amount of info to keep them current on. But if you narrow
         | it down to a few modules then you can give them the latest API
         | info etc.
         | 
          | Another option now is to feed ChatGPT a list of functions it
          | can call, with their arguments. It generally won't screw up
          | the actual function call part, even with 3.5.
         | 
          | With ChatGPT Plugins you can give it an OpenAPI spec.
         | 
         | Then you implement the functions/API you give it. So they could
         | be a wrapper for an existing library.
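          | 
          | The function list part looks roughly like this (a sketch
          | with the usual toy weather function; you supply the real
          | implementation):
          | 
          |     import json
          |     import openai
          | 
          |     fns = [{
          |         "name": "get_weather",  # your own wrapper
          |         "description": "Current weather for a city",
          |         "parameters": {
          |             "type": "object",
          |             "properties": {"city": {"type": "string"}},
          |             "required": ["city"]}}]
          |     r = openai.ChatCompletion.create(
          |         model="gpt-3.5-turbo-0613",
          |         messages=[{"role": "user",
          |                    "content": "Weather in Oslo?"}],
          |         functions=fns)
          |     call = r["choices"][0]["message"].get("function_call")
          |     if call:
          |         args = json.loads(call["arguments"])
          |         # dispatch to your implementation and send the
          |         # result back in a "function" role message
          |         print(args["city"])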
        
       | dave333 wrote:
       | Occurs to me as a retired 69 year old former coder that AI makes
       | us old geezers and geezesses somewhat competitive again with our
       | younger colleagues. Need to learn yet another new framework? Let
        | AI do the nitty-gritty bit. Capitalize on your experience and
        | higher-level know-how.
        
         | coliveira wrote:
         | Yes, this is my thinking too. No need to learn tons of new
         | frameworks, just ask ChatGPT what framework we can use to do a
         | particular task and ask for sample code. You can learn from
         | that much more easily.
        
           | mattkevan wrote:
           | I built an ebook reader in Vue with ChatGPT the other day,
           | never having used Vue before. Took about a day.
           | 
           | Learned absolutely loads - far more than sitting down with a
           | book and trying to learn from that. Not least because I've
           | tried before and quickly lost interest.
           | 
           | Instead I've learned the basics and made a working web app,
           | which I'm pretty pleased with.
        
         | riwsky wrote:
         | I'm reminded of those pro StarCraft players who retire at like
         | 30 because, while their strategy is perfect, their fingers
          | simply can't click or hotkey as fast as the twenty-somethings.
        
       | nickjj wrote:
        | I like how the emphasis is on coders here, because one of the
        | article's headers, "Clear and Precise Conversations Are Key",
        | is important.
       | 
       | I think this is why it will be a long time before the general
       | masses will be able to take advantage of AI to solve general
        | problems. Most people haven't even built up the skill of
        | explaining their problem in a clear way to another human.
       | 
       | Imagine if you have no other context about the problem below
       | other than these 2 prompts. Both of them are describing the same
       | problem which is related to entering in orders with a point of
       | sale system. Assume that you're talking to a human doing phone
       | support for the company that provided you the hardware:
       | 
       | - My orders aren't coming up at the register
       | 
       | - I have 2 devices to take orders, when I manually place orders
       | into the one hanging on the wall (ID: "Wall") it doesn't show up
       | in the list of orders at the register (ID: "Register") but when I
       | manually place an order at the register it does sync up at the
       | wall
       | 
       | The first prompt is typically what a non-technical business owner
       | may say over the phone when trying to get support. The second
       | prompt is what someone who has experience describing problems
       | might say even if they have no experience with the hardware other
       | than spending 2 minutes identifying what each device is and
        | chatting with the business owner to understand that the real
        | root problem is that one of the devices isn't pushing its
        | orders to the other device.
       | 
       | The 2nd one could become more precise too, but the context here
       | is you're speaking with another human who works for the company
        | that provides you the hardware and service, so there's a lot of
       | information you can expect they have on hand which can be left
       | unsaid. They also have various technical specs about each device
       | since they know your account.
       | 
        | It would take many follow-up questions from a human to get the
        | same information if you only provided the first prompt. I wish
        | a general AI tool good luck extracting that information when
        | the person with the problem can barely type on their phone
        | and doesn't have a laptop or personal computer.
        
       ___________________________________________________________________
       (page generated 2023-07-04 23:00 UTC)