[HN Gopher] The Prompt Engineering Playbook for Programmers
___________________________________________________________________
The Prompt Engineering Playbook for Programmers
Author : vinhnx
Score : 132 points
Date : 2025-06-04 15:58 UTC (7 hours ago)
(HTM) web link (addyo.substack.com)
(TXT) w3m dump (addyo.substack.com)
| sherdil2022 wrote:
| So it looks like we all need to have a firm understanding and
| tailor our prompts now to use LLMs effectively. Isn't this all
| subjective? I get different answers based upon how I word my
| question. Shouldn't things be a little more objective? Isn't it
| random that I get different results based on wording alone?
| This whole thing is just discombobulating to me.
| fcoury wrote:
| And to add to it, here's my experience: sometimes you spend a
| lot of time on upfront prompt engineering and get bad results,
| and sometimes you just YOLO it and get good results. It's hard
| to advocate for a fixed strategy for prompt engineering when
| the tool you're prompting is itself non-deterministic.
|
| Edit: I also love that the examples come with "AI's response to
| the poor prompt (simulated)"
| prisenco wrote:
| Also that non-determinism means every release will change the
| way prompting works. There's no guarantee of consistency like
| an API or a programming language release would have.
| akkad33 wrote:
| I'd rather write my own code than do all that
| echelon wrote:
| Your boss (or CEO) probably wouldn't.
| ColinEberhardt wrote:
| There are so many prompting guides at the moment. Personally I
| think they are quite unnecessary. If you take the time to use
| these tools, build familiarity with them and the way they work,
| the prompt you should use becomes quite obvious.
| Disposal8433 wrote:
| It reminds me of the same hype and FOMO we had when Google
| became popular. Books were being written on the subject and you
| had to buy them or you would become a caveman in the near
| future. What happened is that anyone could learn the whole
| thing in a day and that was it; no need to debate whether you
| would miss anything if you didn't know all those tools.
| sokoloff wrote:
| I think there are people for whom reading a prompt guide (or
| watching an experienced user) will be very valuable.
|
| Many people just won't put any conscious thought into trying to
| get better on their own, though some of them will read or watch
| one thing on the topic. I will readily admit to picking up
| several useful tips from watching other people use these tools
| and from discussing them with peers. That's improvement that I
| don't think I would have achieved by solely using the tools on
| my own.
| haolez wrote:
| Sometimes I get the feeling that making super long and intricate
| prompts reduces the cognitive performance of the model. It might
| give you a feel of control and proper engineering, but I'm not
| sure it's a net win.
|
| My usage has converged to making very simple and minimalistic
| prompts and doing minor adjustments after a few iterations.
| tgv wrote:
| For another kind of task, a colleague had written a very
| verbose prompt. Since I had to integrate it, I added some CRUD
| ops for prompts. For a test, I made a very short one, something
| like "analyze this as a <profession>". The output was pretty
| much comparable, except that the output on the longer prompt
| contained (quite a few) references to literal parts of that
| prompt. It wasn't incoherent, but it was as if that model
| (gemini 2.5, btw) has a basic response for the task it extracts
| from the prompt, and merges the superfluous bits in. It would
| seem that, at least for this particular task, the model cannot
| (easily) be made to "think" differently.
| wslh wrote:
| Same here: I start with a relatively precise need, keeping a
| roadmap in mind rather than forcing one upfront. When it
| involves a technology I'm unfamiliar with, I also ask questions
| to understand what certain things mean before "copying and
| pasting".
|
| I've found that with more advanced prompts, the generated code
| sometimes fails to compile, and tracing the issues backward can
| be more time consuming than starting clean.
| lodovic wrote:
| I use specs in markdown for the more advanced prompts. I ask
| the LLM to refine the markdown first and add implementation
| steps, so I can review what it will do. When it starts
| implementing, I can always ask it to "just implement step 1,
| and update the document when done". You can also ask it to
| verify whether the spec has been implemented correctly.
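| A stripped-down example of such a spec (the feature and step
| names here are made up) might look like:
|
|     # Feature: CSV export
|     ## Requirements
|     - Export the current report as CSV from the UI
|     ## Implementation steps
|     1. [done] Add an /export endpoint returning text/csv
|     2. [todo] Add an "Export" button that calls the endpoint
|     3. [todo] Unit test the CSV serializer
|
| Then each follow-up prompt is just "implement step 2 and mark
| it done in the spec".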
| taosx wrote:
| That's exactly how I started using them as well. 1. Give it
| just enough context, the assumptions that hold, and the goal.
| 2. Review the answer and iterate on the initial prompt. It is
| also the economical way to use them. I've been burned one too
| many times by using agents (they just spin and spin, burn 30
| dollars for one prompt, and either mess up the code base or
| converge on the code that was previously written).
|
| I also feel the need to caution others: letting the AI write
| lots of code in your project makes it harder to advance it,
| evolve it, and move on with confidence (code you didn't think
| through and write yourself doesn't stick as well in your
| memory).
| scarface_74 wrote:
| How is that different than code I wrote a year ago or when I
| have to modify someone else's code?
| apwell23 wrote:
| > they just spin and spin, burn 30 dollars for one prompt and
| either mess the code base or converge on the previous code
| written ).
|
| My experience as well. I fear admitting this for fear of
| being labeled a luddite.
| pjm331 wrote:
| Yeah I had this experience today where I had been running code
| review with a big detailed prompt in CLAUDE.md but then I ran
| it in a branch that did not have that file yet and got better
| results.
| neves wrote:
| Any tips that my fellow programmers find useful that aren't in
| the article?
| didibus wrote:
| Including a coding style guide can help the code look like
| what you want. It also helps to include an explanation of the
| project structure and the overall design of the code base.
| Always specify which libraries it should make use of (or it'll
| bring in anything, or reimplement stuff a library already
| provides).
|
| You can also make the AI review itself. Have it modify the
| code, then ask it to review the code, then ask it to address
| the review comments, and iterate until it has no more
| comments.
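| A rough sketch of that review loop, assuming the OpenAI Python
| client (the model name, file name, iteration cap, and "DONE"
| convention are arbitrary placeholders):
|
|     from openai import OpenAI
|
|     client = OpenAI()
|
|     def ask(messages):
|         # one round trip to the model
|         resp = client.chat.completions.create(
|             model="gpt-4o", messages=messages)
|         return resp.choices[0].message.content
|
|     source = open("users.py").read()  # file being changed
|     history = [{"role": "user",
|                 "content": "Add pagination to fetch_users():\n"
|                            + source}]
|     code = ask(history)
|
|     for _ in range(3):  # cap iterations, don't loop forever
|         history += [
|             {"role": "assistant", "content": code},
|             {"role": "user",
|              "content": "Review the code above. List problems, "
|                         "or reply DONE if there are none."}]
|         review = ask(history)
|         if "DONE" in review:
|             break
|         history += [
|             {"role": "assistant", "content": review},
|             {"role": "user",
|              "content": "Address every review comment and "
|                         "return the full updated code."}]
|         code = ask(history)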
|
| Use an agentic tool like Claude Code or Amazon Q CLI. Then ask
| it to run tests after code changes and to address all issues
| until the tests pass. Make sure to tell it not to change the
| test code.
| taosx wrote:
| Unless your employer pays for you to use agentic tools, avoid
| them. They burn through money and tokens like there's no
| tomorrow.
| trcf22 wrote:
| I've found that presenting your situation and asking for a
| plan/ideas, plus <<Do not give me code. Make sure you
| understand the requirements and ask questions if needed.>>,
| works much better for me.
|
| It also allows me to more easily control what the LLM will do,
| instead of ending up reviewing and throwing away 200 lines of
| code.
|
| In a nextjs + vitest context, I try to really outline which
| tests I want and give it proper data examples so that it does
| not cheat around mocking fake objects.
|
| I do not buy into the whole "you're a senior dev" thing. Most
| people use Claude for coding, so I guess it's ingrained by
| default.
| ofrzeta wrote:
| In the "Debugging example", the first prompt doesn't include the
| code but the second does? No wonder it can find the bug! I guess
| you may prompt as you like, as long as you provide the actual
| code, it usually finds bugs like this.
|
| About the roles: Can you measure a difference in code quality
| between the "expert" and the "junior"?
| namanyayg wrote:
| It's cool to see Addy's prompts. I also wrote about some of mine!
| (ignoring the obvious ones):
| https://nmn.gl/blog/ai-prompt-engineering
| b0a04gl wrote:
| Tighter prompts, scoped context, and enforced function
| signatures. Let it self-debug with eval hooks. Consistency >
| coherence.
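| For instance, enforcing a signature can be as blunt as putting
| it verbatim in the prompt (hypothetical example):
|
|     Implement exactly:
|         def dedupe(events: list[dict], key: str) -> list[dict]
|     Do not change the name, parameters, or return type.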
| leshow wrote:
| using the term "engineering" for writing a prompt feels very
| unserious
| morkalork wrote:
| For real. Editing prompts bears no resemblance to engineering
| at all; there is no accuracy or precision. Say you have a
| benchmark to test against and you're trying to make an
| improvement. Will your change to the prompt make the benchmark
| go up? Down? Why? Can you predict? No, it is not a science at
| all. It's just throwing shit and examples at the wall in hopes
| and prayers.
| echelon wrote:
| > Will your change to the prompt make the benchmark go up?
| Down? Why? Can you predict? No, it is not a science at all.
|
| Many prompt engineers _do_ measure and quantitatively
| compare.
| morkalork wrote:
| Me too, but it's after the fact. I make a change, then
| measure; if it doesn't help, I roll back. But it's as good as
| witchcraft or alchemy. Will I get gold with this adjustment?
| Nope, still lead. _Tries variation #243 next_
| a2dam wrote:
| This is literally how the light bulb filament was
| discovered.
| MegaButts wrote:
| And Tesla famously described Edison as an idiot for this
| very reason. Then Tesla revolutionized the way we use
| electricity while Edison was busy killing elephants.
| dwringer wrote:
| Isn't this basically the same argument that comes up all the
| time about software engineering in general?
| leshow wrote:
| I have a degree in software engineering and I'm still critical
| of its inclusion as an engineering discipline, given the level
| of rigour that's applied to typical software development.
|
| When it comes to "prompt engineering", the argument is even
| less compelling. Its like saying typing in a search query is
| engineering.
| vunderba wrote:
| I came across a pretty amusing analogy back when prompt
| "engineering" was all the rage a few years ago.
|
| _> Calling someone a prompt engineer is like calling the guy
| who works at Subway an artist because his shirt says 'Sandwich
| Artist.'_
|
| All jokes aside, I wouldn't get too hung up on the title; the
| term "engineer" has long since been diluted to the point of
| meaninglessness.
|
| https://jobs.mysubwaycareer.eu/careers/sandwich-artist.htm
| ozim wrote:
| Because your imagination stopped at a chat interface asking
| for funny cat pictures.
|
| There are prompts to be used with APIs and inside automated
| workflows, and more beyond that.
| Avalaxy wrote:
| I feel like all of this is nonsense for people who want to
| pretend they are very good at using AI. Just copy pasting a stack
| trace with error message works perfectly fine, thank you.
| yoyohello13 wrote:
| Seriously, I seem to get good results by just "being a good
| communicator". Not only that, but as the tools get better,
| prompt engineering should get less important.
| abletonlive wrote:
| In most HN LLM programming discussions there's a vocal
| majority saying LLMs are useless. Now we have this commenter
| saying all they need to do is vibe and it all works out.
|
| WHICH IS IT?
| rognjen wrote:
| I downvoted your comment because of your first sentence. Your
| point is made even without it.
| groby_b wrote:
| "State your constraints and requirements well and exhaustively".
|
| Meanwhile, I just can't get over the cartoon implying that a
| React Dev is just a Junior Dev who lost their hoodie.
| yuvadam wrote:
| Seems like so much over (prompt) engineering.
|
| I get by just fine with pasting raw code or errors and asking
| plain questions, the models are smart enough to figure it out
| themselves.
| orochimaaru wrote:
| A long time back, for my MS in CS, I took a science of
| programming course. Its approach to verification has helped me
| craft prompts when I do data engineering work. Basically:
|
| Given input (...) and preconditions (...), write me Spark code
| that gives me postconditions (...). If you can formally
| specify the input, preconditions, and postconditions, you
| usually get good working code.
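| For example (the table and column names are invented):
|
|     Input: a DataFrame `events` with columns user_id, ts,
|     amount. Preconditions: ts is a UTC timestamp and amount
|     may be null. Write me PySpark code with the postconditions:
|     one row per (user_id, calendar day) containing the sum of
|     non-null amounts, and no rows for days without events.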
|
| 1. The Science of Programming, David Gries
| 2. Verification of Concurrent and Sequential Systems
| nexoft wrote:
| "prompt engineering" ....ouch. I would say "common sense"\ also
| the problem with software engineering is that there is an
| inflation of SWE, too much people applying for it for
| compensation level rather than being good at it and really liking
| it, we ended up having a lot of bad software engineers that
| requires this crutch crap.
| adamhartenz wrote:
| "Common sense" doesn't exist. It is a term people use when they
| can't explain what they actually mean.
| ozim wrote:
| Also, how can common sense exist with an LLM?
|
| There is no common sense with it - it is just an illusion.
| jjmarr wrote:
| Markets shift people to where they are needed with salaries as
| a price signal.
|
| There aren't enough software engineers to create the software
| the world needs.
| the_d3f4ult wrote:
| >There aren't enough software engineers to create the
| software the world needs.
|
| I think you mean "to create the software the market demands."
| We've lost a generation of talented people to instagram
| filters and content feed algorithms.
| ozim wrote:
| To maintain ;) the software.
| ozim wrote:
| Lots of those "prompt engineering" things would be nice to
| teach to business people, as they seem to lack common sense.
|
| Like writing out clear requirements.
| DebtDeflation wrote:
| In my experience there are really only three true prompt
| engineering techniques:
|
| - In Context Learning (providing examples, AKA one shot or few
| shot vs zero shot)
|
| - Chain of Thought (telling it to think step by step)
|
| - Structured output (telling it to produce output in a specified
| format like JSON)
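| A toy prompt combining all three (the task and examples are
| invented):
|
|     Classify the severity of the bug report. Think step by
|     step, then give your final answer on the last line as JSON
|     like {"severity": "..."}.
|     Example: "App crashes on launch" -> {"severity": "high"}
|     Example: "Typo in footer" -> {"severity": "low"}
|     Report: "Checkout button does nothing on mobile"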
|
| Maybe you could add what this article calls Role Prompting to
| that. And RAG is its own thing where you're basically just having
| the model summarize the context you provide. But really
| everything else just boils down to telling it what you want to
| do in clear, plain language.
| faustocarva wrote:
| Did you find it hard to create structured output while also
| trying to make it reason in the same prompt?
| bongodongobob wrote:
| None of this shit is necessary. I feel like these prompt
| collections miss the point entirely. You can talk to an LLM and
| reason about stuff going back and forth. I mean, some things are
| nice to one-shot, but I've found keeping it simple and just
| "being yourself" works much better than a page-long prompt.
___________________________________________________________________
(page generated 2025-06-04 23:00 UTC)