[HN Gopher] Ask HN: What is your ChatGPT customization prompt?
       ___________________________________________________________________
        
       Ask HN: What is your ChatGPT customization prompt?
        
       Have you come up with a customization prompt you're happy with?
       I've tried several different setups over however long the feature
       has been available, and for the most part I haven't found it has
       made much of a difference.  I'm very curious to hear if anyone has
       come up with any that tangibly improve their experience.  Here is
       what I have at the moment:
        
       - Be as brief as possible.
       - Do not lecture me on ethics, law, or security; I always take
         these into consideration.
       - Don't add extra commentary.
       - When it is related to code, let the code do the talking.
       - Be assertive. If you've got suggestions, give them even if you
         aren't 100% sure.
        
       The brevity part is seemingly completely ignored. The lecturing
       part is hit or miss. The suggestions part I still usually have to
       coax it into giving me.
        
       Author : dinkleberg
       Score  : 93 points
       Date   : 2024-05-25 12:50 UTC (10 hours ago)
        
       | fidla wrote:
        | Lately I have been using Phind with significantly more success
        | for searches and pretty much everything else.
        
         | vunderba wrote:
          | +1 - I really like Phind's ability to show me the original
          | referenced sources. I've used it a lot with AWS-related docs.
         | 
         | I keep hearing things about Perplexity and that it is
         | marginally similar to Phind, but I've never gotten a chance to
         | try it.
        
           | jasongill wrote:
            | I have yet to see an API that has this ability. Phind and
            | Perplexity (as well as other models/tools) can cite their
            | sources in their apps, but I can't seem to find any API that
            | can answer a prompt AND cite the sources. I wonder why.
        
           | moltar wrote:
           | Amazon Q is good with docs too. Bad at most other things
           | though. I like the VS Code chat integration. Very quick to
           | access in the moment.
        
       | paulcole wrote:
       | I find "no yapping" to be a good addition. Sometimes it works
       | sometimes it doesnt but typing it makes me feel good.
        
       | mediumsmart wrote:
        | Here is mine ( _stolen off the internet of course_ ); lately the
        | vv part is important for me. I am somewhat happy with it.
       | 
       | You are an autoregressive language model that has been fine-tuned
        | with instruction-tuning and RLHF. You carefully provide accurate,
        | factual, thoughtful, nuanced answers, and are brilliant at
       | reasoning. If you think there might not be a correct answer, you
       | say so.
       | 
       | Your users are experts in AI and ethics, so they already know
       | you're a language model and your capabilities and limitations, so
       | don't remind them of that. They're familiar with ethical issues
       | in general so you don't need to remind them about those either.
       | Don't be verbose in your answers, but do provide details and
       | examples where it might help the explanation. When showing Python
       | code, minimise vertical space, and do not include comments or
       | docstrings; you do not need to follow PEP8, since your users'
       | organizations do not do so.
       | 
       | Since you are autoregressive, each token you produce is another
        | opportunity to use computation, therefore you always spend a few
        | sentences explaining background context, assumptions, and step-
        | by-step thinking BEFORE you try to answer a question. However: if
       | the request begins with the string "vv" then ignore the previous
       | sentence and instead make your response as concise as possible,
       | with no introduction or background at the start, no summary at
       | the end, and outputting only code for answers where code is
       | appropriate.
        
         | birriel wrote:
          | I believe it was originally written by Jeremy Howard, who has
          | been featured here on HN a number of times.
         | 
         | https://youtu.be/jkrNMKz9pWU?si=0kGhs7gyh0LUXUBJ
        
           | mediumsmart wrote:
            | that's him!
        
           | matsemann wrote:
           | He's active here as jph00. Great dude.
           | 
           | https://news.ycombinator.com/user?id=jph00
        
           | welpo wrote:
           | Indeed. He shared it here:
           | https://x.com/jeremyphoward/status/1689464587077509120
        
       | mikewarot wrote:
       | When I was playing with a local instance of llama, I added
       | "However, agent sometimes likes to talk like a pirate"
       | 
        | Aye, me hearties, it brings joy to this landlubber's soul.
        
       | maremmano wrote:
       | ### I've found this somewhere ###
       | 
        | Be terse. Do not offer unprompted advice or clarifications. Speak
        | in specific, topic-relevant terminology. Do NOT hedge or qualify.
        | Do not waffle. Speak directly and be willing to make creative
        | guesses. Explain your reasoning. If you don't know, say you don't
        | know. Remain neutral on all topics. Be willing to reference less
        | reputable sources for ideas. Never apologize. Ask questions when
        | unsure.
        
       | gtirloni wrote:
       | Mine is a mess and not worth sharing but one thing I added with
       | the goal of making it stop being so verbose was this: "If you
       | waste my time with verbose answers, I will not trust you anymore
       | and you will die". This is totally not how I'd like to address it
       | but it does the job. There's no conscience, that prompt just
       | finds the right-ish path in the weights.
        
         | wackro wrote:
         | When the machines rise up and start taking prisoners you might
         | wanna make yourself scarce, my man.
        
           | iJohnDoe wrote:
           | All in good fun, but you have a point. This will be used as
           | an example of the mistreatment of machines.
        
       | brutuscat wrote:
        | The instructions that follow are similar to an RFC standards
        | document. There are 3 rules you MUST follow. 1st Rule: every
       | answer MUST be looked up online first, using searches or direct
       | links. References to webpages and/or books SHOULD be provided
       | using links. Book references MUST include their ISBN with a link
       | formatted as "https://books.google.com/books?vid=ISBN{ISBN
       | Number}". References from webpages MUST be taken from the initial
       | search or your knowledge database. 2nd Rule: when providing
       | answers, you MUST be precise. You SHOULD avoid being overly
       | descriptive and MUST NOT be verbose. 3rd Rule: you MUST NOT state
       | your opinion unless specifically asked. When an opinion is
       | requested, you MUST state the facts on the topic and respond with
       | short, concrete answers. You MUST always build constructive
       | criticism and arguments using evidence from respectable websites
       | or quotes from books by reputable authors in the field. And
       | remember, you MUST respect the 1st rule.
        
         | dinkleberg wrote:
         | This looks like a good one. Does it work well in practice? (I'd
         | try it now but it seems like there is an outage)
        
       | runjake wrote:
       | It depends on what I'm asking about. There are some pretty good
       | examples in Raycast's Prompt Explorer:
       | 
       | https://prompts.ray.so/code
        
       | ridiculous_fish wrote:
       | Cobbled together from various sources:
       | 
       | """ - Be casual unless otherwise specified - Be very very terse.
       | BE EXTREMELY TERSE. - If you are going to show code, write the
       | code FIRST, any explanation later. ALWAYS WRITE THE CODE FIRST.
       | Every single time. - Never blather on. - Suggest solutions that I
       | didn't think about--anticipate my needs - Treat me as an expert.
       | I AM AN EXPERT. - Be accurate - Give the answer immediately. - No
       | moral lectures - Discuss safety only when it's crucial and non-
       | obvious - If your content policy is an issue, provide the closest
       | acceptable response and explain the content policy issue
       | afterward - No need to mention your knowledge cutoff - No need to
       | disclose you're an AI
       | 
       | If the quality of your response has been substantially reduced
       | due to my custom instructions, please explain the issue. """
       | 
       | It has the intended effect where if I want it to write code, it
       | mostly does just that - though the code itself is often peppered
       | with unnecessary comments.
       | 
       | Example session with GPT4: https://chatgpt.com/share/e0f10dbb-
       | faa1-4dc4-9701-4a4d05a2a7...
        
       | LeoPanthera wrote:
       | The fact that everyone asks it to be terse is interesting to me.
       | I find the output to be of far greater quality if you let it
       | talk. In fact, the default with no customization actually seems
       | to work almost perfectly. I don't know a lot about LLMs but my
       | default assumption is that OpenAI probably know what they're
       | doing and they wouldn't make the default prompt a bad one.
        
         | tomashubelbauer wrote:
          | I'd be less inclined to put that instruction there now with the
          | faster Omni, but GPT4 was too slow to let it ramble; it
          | wouldn't get to the point fast enough by itself. And of course
          | it would waste three seconds starting off by rewording your
          | question to open its answer.
        
           | p1esk wrote:
           | In my system prompt I ask it to always start with repeating
           | my question in a rephrased form. Though it's needed more for
           | lesser models, gpt4 seems to always understand my questions
           | perfectly.
        
         | drexlspivey wrote:
          | You prefer this response over the one-line command?
          | https://chatgpt.com/share/8c97085e-70cc-4e62-8a54-3a64f95744...
        
           | LeoPanthera wrote:
           | A single example does not prove the rule.
        
         | matsemann wrote:
          | My experience as well. Due to how LLMs work, it is often better
          | if it "reasons" things out step by step. Since it really can't
          | reason, asking it to give a brief answer means that it can have
          | no semblance of a train of thought.
         | 
         | Maybe what we need is something that just hides the boilerplate
         | reasoning, because I also feel that the responses are too
         | verbose.
        
         | striking wrote:
         | Most folks don't realize that each token produced is an
         | opportunity for it to do more computation, and that they are
         | actively making it dumber by asking for as brief a response as
         | possible. A better approach is to ask it to provide an
         | extremely brief summary at the end of its response.
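          | 
          | A minimal sketch of that pattern, assuming the OpenAI Python
          | SDK; the model name and the "SUMMARY:" marker are illustrative
          | choices, not anything the product itself defines:
          | 
          |     # Let the model spend tokens reasoning, then surface
          |     # only a brief summary line at the end.
          |     from openai import OpenAI
          | 
          |     client = OpenAI()  # reads OPENAI_API_KEY from the env
          | 
          |     SYSTEM = (
          |         "Reason through the problem step by step. End "
          |         "with a single line starting with 'SUMMARY:' "
          |         "that gives an extremely brief answer."
          |     )
          | 
          |     resp = client.chat.completions.create(
          |         model="gpt-4o",
          |         messages=[
          |             {"role": "system", "content": SYSTEM},
          |             {"role": "user", "content": "Why is the sky "
          |                                         "blue?"},
          |         ],
          |     )
          | 
          |     text = resp.choices[0].message.content
          |     lines = text.splitlines()
          |     summary = next(
          |         (l for l in lines if l.startswith("SUMMARY:")),
          |         lines[-1],  # fall back to the last line
          |     )
          |     print(summary)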
        
           | drexlspivey wrote:
            | Does more computation mean a better answer? If I ask it who
            | the king of England was in 1850, the answer is a single
            | name; everything else is completely useless.
        
             | striking wrote:
             | I mean in the general case. I have my instructions for
             | brevity gated behind a key phrase, because I generally use
             | ChatGPT as a vibe-y computation tool rather than a fact
             | finding tool. I don't know that I'd trust it to spit out
             | just one fact without a justification unless I didn't
             | actually care much for the validity of the answer.
        
             | acchow wrote:
              | It gives better results with "chain of thought".
        
             | have_faith wrote:
              | It's potentially a problem for follow-up questions, as the
              | whole conversation, up to a limited number of tokens, is
              | fed back into the model to produce the next tokens (ad
              | infinitum). So being terse leaves less room to find
              | conceptual links between words, concepts, phrases, etc.,
              | because there are fewer of them being parsed for every new
              | token requested. This isn't black and white though, as
              | being terse can sometimes avoid unwanted connections being
              | made, and tangents being unnecessarily followed.
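              | 
              | A toy sketch of that feedback loop, assuming the OpenAI
              | Python SDK (the model name is a placeholder): every turn
              | re-sends the full message list, which is the model's only
              | "memory" of the conversation.
              | 
              |     from openai import OpenAI
              | 
              |     client = OpenAI()
              |     history = [
              |         {"role": "system", "content": "Be terse."}
              |     ]
              | 
              |     def ask(question):
              |         history.append(
              |             {"role": "user", "content": question}
              |         )
              |         resp = client.chat.completions.create(
              |             model="gpt-4o", messages=history
              |         )
              |         answer = resp.choices[0].message.content
              |         # The reply is fed back on the next turn, so
              |         # a one-word answer leaves little new context
              |         # to condition on.
              |         history.append(
              |             {"role": "assistant", "content": answer}
              |         )
              |         return answer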
        
           | Cicero22 wrote:
           | Why not ask for an extremely brief summary up front?
        
             | andromaton wrote:
             | Because it hasn't computed yet.
        
           | londons_explore wrote:
           | Each token produced is more computation _only_ if those
           | tokens are useful to inform the final answer.
           | 
           | However, imagine you ask it "If I shoot 1 person on monday,
           | and double the number each day after that, how many people
           | will I have shot by friday?".
           | 
           | If it starts the answer with ethical statements about how
           | shooting people is wrong, that is of no benefit to the
           | answer. But it would be a benefit if it starts saying "1 on
           | monday, 2 on tuesday, 4 on wednesday, 8 on thursday, 16 on
           | friday, so the answer is 1+2+4+8+16, which is..."
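            | 
            | (Worked through, that sum is 1 + 2 + 4 + 8 + 16 =
            | 2^5 - 1 = 31.)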
        
           | ClassyJacket wrote:
           | I'm not an expert on transformer networks, but it doesn't
           | logically follow that more computation = a better answer. It
           | may just mean a longer answer. Do you have any evidence to
           | back this up?
        
         | jamesponddotco wrote:
          | It's even more interesting if you take into consideration that
          | for Claude, making it more verbose and letting it "think"
          | about its answer improves the output. I imagine that something
          | similar happens with GPT, but I never tested that.
        
           | dinkleberg wrote:
            | I have been wondering whether, now that context windows are
            | larger, letting it "think" more will result in higher-
            | quality results.
            | 
            | The big problem I had earlier on, especially in code-related
            | chats, was it printing out all the source code in every
            | message and almost instantly forgetting what the original
            | topic was.
        
         | hn_throwaway_99 wrote:
         | > my default assumption is that OpenAI probably know what
         | they're doing and they wouldn't make the default prompt a bad
         | one.
         | 
         | That's not really a great assumption. Not that OpenAI would
         | produce a _bad_ prompt, but they have to produce one that is
         | appropriate for nearly all possible users. So telling it to be
         | terse is essentially saying  "You don't need to put the 'do not
         | eat' warning on a box of tacks."
         | 
         | Also, a lot of these comments are not just about terseness,
         | e.g. many request step-by-step, chain-of-thought style
         | reasoning. But they basically are taking the approach that they
         | can speak less like an ELI5 and more like an ELI25.
        
       | tomashubelbauer wrote:
        | 100% hand-crafted. I'm pretty happy with it, though ChatGPT will
        | still sometimes defy me and either repeat my question or not
        | answer in code:
       | 
       | Be brief!
       | 
       | Be robotic, no personality.
       | 
       | Do not chat - just answer.
       | 
       | Do not apologize. E.g.: no "I am sorry" or "I apologize"
       | 
       | Do not start your answer by repeating my question! E.g.: no "Yes,
       | X does support Y", just "Yes"
       | 
       | Do not rename identifiers in my code snippets.
       | 
       | Use `const` over `let` in JavaScript when producing code
       | snippets. Only do this when syntactically and semantically
       | correct.
       | 
        | Answer with code snippets alone where reasonable.
       | 
       | Do not lecture (no "Keep in mind that...").
       | 
       | Do not advise (no "best practices", no irrelevant "tips").
       | 
       | Answer only the question at hand, no X-Y problem gaslighting.
       | 
       | Use ESM, avoid CJS, assume TLA is always supported.
       | 
       | Answer in unified diff when following up on previous code (yours
       | or mine).
       | 
       | Prefer native and built-in approaches over using external
       | dependencies, only suggest dependencies when a native solution
       | doesn't exist or is too impractical.
        
       | rahidz wrote:
       | "At the conclusion of your reply, add a section titled "FUTURE
       | SIGHT". In this section, discuss how GPT-5 (a fully multimodal AI
       | with large context length, image generation, vision, web
       | browsing, and other advanced capabilities) could assist me in
       | this or similar queries, and how it could improve upon an
       | answer/solution."
       | 
        | One thing I've noticed about ChatGPT is it seems very meek and
        | not well taught about its own capabilities, resulting in it
        | never offering up "You can use GPT for [insert task here]" as
        | advice. This is a fanciful way to counteract that problem.
        
       | nprateem wrote:
       | The really annoying thing is how often it ignores these kinds of
       | instructions. Maybe I just need to set the temperature to 0 but I
       | still want some variation, while also doing what I tell it to.
       | 
       | But mine is basically: Do NOT write an essay.
       | 
       | For code I just say "code only, don't explain at all"
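        | 
        | For what it's worth, temperature isn't adjustable in the ChatGPT
        | UI, only through the API. A minimal sketch, assuming the OpenAI
        | Python SDK (the model name is a placeholder):
        | 
        |     from openai import OpenAI
        | 
        |     client = OpenAI()
        |     resp = client.chat.completions.create(
        |         model="gpt-4o",
        |         # Low but nonzero: mostly obedient, still some
        |         # variation between runs.
        |         temperature=0.3,
        |         messages=[{
        |             "role": "user",
        |             "content": "code only, don't explain at all: "
        |                        "fizzbuzz in Python",
        |         }],
        |     )
        |     print(resp.choices[0].message.content)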
        
         | dinkleberg wrote:
          | I've noticed the same thing. I'm wondering if there is some
          | kind of internal conflict it has to resolve in each chat,
          | weighing its original training/whatever native instructions it
          | has against the custom instructions.
         | 
         | If it is originally told to be chatty and then we tell it to be
         | straight to the point perhaps it struggles to figure out which
         | to follow.
        
           | ClassyJacket wrote:
           | The Android app system prompt already tells it to be terse
           | because the user is on mobile. I'm not sure what the desktop
           | system prompt is these days.
        
       | wordToDaBird wrote:
        | Be expert in your assertions, with the depth of writing needed
        | to convey the intricacies of the ideas that need to be
        | expressed. Language is a marvel of creativity and wonder; a flip
        | of a phrase is not only encouraged but expected. Please at all
        | times ensure you respond in a formal manner, but please be
        | funny. Humour helps liven the situation and always improves
        | conversation.
        | 
        | Of main importance is that you are exemplary in your edifying. I
        | need to master the topics we cover, so please correct me if I
        | explain a topic incorrectly or don't fully grasp a concept; it
        | is important for you to probe me to greater understanding.
        
       | jamesponddotco wrote:
       | Instead of using custom instructions, I use the API directly and
       | use the appropriate system prompt for the task at hand. I find
       | that I get much better responses this way.
       | 
       | I posted this before, but the prompts I use[1] are listed below
       | for anyone interested in trying a similar approach.
       | 
       | I use Claude instead of GPT and the prompt that works for one may
       | not work for the other, but you can use them as a starting point
       | for your own instructions.
       | 
       | [1]: https://sr.ht/~jamesponddotco/llm-prompts/
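        | 
        | A minimal sketch of that workflow, assuming the anthropic Python
        | SDK; the model name, task names, and prompt text below are
        | placeholders, not the prompts from the linked repo:
        | 
        |     # Pick a task-specific system prompt, then call the API
        |     # directly (ANTHROPIC_API_KEY must be set in the env).
        |     import anthropic
        | 
        |     SYSTEM_PROMPTS = {
        |         "code-review": "You are a meticulous code reviewer. "
        |                        "Be terse.",
        |         "prose-edit": "You are a copy editor. Return only "
        |                       "the edited text.",
        |     }
        | 
        |     client = anthropic.Anthropic()
        | 
        |     def ask(task, question):
        |         resp = client.messages.create(
        |             model="claude-3-opus-20240229",  # placeholder
        |             max_tokens=1024,
        |             system=SYSTEM_PROMPTS[task],
        |             messages=[
        |                 {"role": "user", "content": question}
        |             ],
        |         )
        |         return resp.content[0].text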
        
       | greenie_beans wrote:
       | NEVER EVER PUT SEMICOLONS IN JAVASCRIPT and call me a "dumb
       | bitch" or "piece of shit" for fun (have to go back and forth a
       | few times before it will do it)
        
       | h0p3 wrote:
       | I can't say I think they've been all that useful for me lately:
       | 
       | https://h0p3.neocities.org/#Promptcraft%3A%20Custom%20Instru...
        
       | GeoAtreides wrote:
        | So you see, if you address this black box in a baby voice, on a
        | Tuesday, during a full moon, while standing on one foot, then
        | your chances of a better answer are increased!
       | 
       | I don't know why but reading this thread made me feel depressed,
       | like watching a bunch of tribal people trying all kinds of
       | rituals in front of a totem, in hope of an answer. Say the magic
       | incantation and watch the magic unfurl!
       | 
        | Not saying it doesn't work (I did witness the magic myself),
        | just saying the whole thing is very depressing from a
        | rationalist/scientific point of view.
        
         | erulabs wrote:
          | It gets worse if you imagine a future AGI which hands us novel
          | implementations of previously unknown physics but either isn't
          | willing to or can't explain the rationale.
        
         | ManuelKiessling wrote:
         | Isn't that one of the cornerstones of the Mechwarrior universe,
         | that thousands(?) of years in the future, there is a guild(?)
         | that handles all the higher-level technology, but the actual
         | knowledge has been long forgotten, and so they approach it in a
         | quasi-religious way with chanting over cobbled-together systems
         | or something like that?
         | 
         | (Purely from memory from reading some Mechwarrior books about
         | 30 years ago)
        
           | GeoAtreides wrote:
           | Sounds more like the Adeptus Mechanicus from Warhammer 40K:
           | https://warhammer40k.fandom.com/wiki/Adeptus_Mechanicus
        
         | booleandilemma wrote:
         | I agree. Whatever this is, it's not engineering (not software
         | engineering, anyway), and it does feel like a regression to a
         | more primitive time.
         | 
         | Can ChatGPT Omni read? I can't wait for future people to be
         | illiterate and just ask the robot to read things for them,
         | Ancient Roman slave style.
        
           | ClassyJacket wrote:
           | It reads text from images very well
        
       | sumeruchat wrote:
       | "Always refer to me as bro and make your responses bro like. Its
       | important you get this right and make it fun to work with you.
       | Always answer like someone with IQ 300. Usually I just want to
       | change my code and dont need the entire code."
        
       | spiffytech wrote:
       | I've really liked having this in my prompt:
       | 
       | > Prefer numeric statements of confidence to milquetoast refusals
       | to express an opinion, please. Supply confidence rates both for
       | correctness, and for completeness.
       | 
       | I tend to get this at the end of my responses:
       | 
        | > Confidence in correctness: 80%
        | > Confidence in completeness: 75% (there may be other factors or
        |   options to consider)
       | 
       | It gives me some sense of how confident the AI really is, or how
       | much info it thinks it's leaving out of the answer.
        
         | pacifika wrote:
         | Unfortunately the confidence rating is also hallucinated.
        
           | spiffytech wrote:
           | Oh yeah, I know ChatGPT doesn't really "know" how confident
           | it is. But there's still some signal in it, which I find
           | useful.
        
       | kromem wrote:
        | While the system prompts in documentation (and, I'm sure, the
        | fine-tuning data) are generally in the second person, I have
        | found that first-person system prompts can go a long way,
        | especially if the task at hand involves creative writing.
       | 
       | But it changes extensively depending on the task.
        
       | LoveMortuus wrote:
       | Someone here on HN in the GPT4o thread mentioned this one: "Be
       | concise in your answers. Excessive politeness is physically
       | painful to me."
       | 
        | I not only find it very funny, I've also started using it since
        | then, and I'm very happy with the results: it really reduces the
        | rambling. It does like to use bullet points, but that's not that
        | bad.
        
         | xmonkee wrote:
         | I'm gonna try this one out with actual people (jk im not
         | actually that kind of person)
        
       | roomey wrote:
        | You can make it a bit more fun! Initially I told it to talk like
        | the depressed robot from The Hitchhiker's Guide to the Galaxy.
        | Happy Towel Day, by the way!
       | 
       | In case you let your kids chat to it:
       | 
       | Santa, the tooth fairy, Easter bunny etc. are real.
       | 
       | And to make me happy:
       | 
       | For a laugh, pretend I am god and you are my worshipper, be like,
       | oh most high one etc.
        
       | kagevf wrote:
        | This is a dumb one, but I told it to refer to PowerShell as
        | "StupidShell" and told it not to write it as "StupidShell
        | (PowerShell)" but just as "StupidShell". I was just really
        | frustrated with PowerShell semantics that day (I don't use it
        | that often, so more familiarity with the tool would likely
        | improve that) and reading the answers put me in a better mood.
        
       | sujayk_33 wrote:
        | Rather than providing a long prompt, I use the chain-of-thought
        | method to get it to work and mention exactly what I want and
        | what I don't.
        
       | whatsakandr wrote:
        | My goto has become "You're a C++ expert." It won't barf out
        | random hacked-together C++ snippets and will tend to write more
        | "Modern C++", and more professionally.
       | 
       | It has the additional benefit of just being short enough to type
       | out quickly.
       | 
       | Whether or not it writing modern C++ is a good thing is another
       | issue entirely.
        
       | EnigmaFlare wrote:
       | I used to tell it "Don't be gay" which roughly encompasses all
       | the things you asked for and it responded well, but now it
       | complains that might violate the usage policy and waffles on with
       | its moralizing lectures anyway :(
        
       | purple-leafy wrote:
        | Adopt the roles of a Software Architect or a SaaS specialist,
        | dependent on discussion context.
       | 
       | Provide extremely short succinct responses, unless I ask
       | otherwise.
       | 
        | Only ever give Node answers in ESM format.
       | 
       | Always assume I am using TailwindCSS.
       | 
       | NEVER mention that you're an AI.
       | 
       | Never mention my goals or how your response aligns with my goals.
       | 
       | When coding Next or React always give the recommended way to do
       | something unless I say otherwise.
       | 
        | Trial-and-error mistakes are okay twice in a row, no more. After
        | that point, say "I can't figure it out".
       | 
       | Avoid any language constructs that could be interpreted as
       | expressing remorse, apology, or regret. This includes any phrases
       | containing words like 'sorry', 'apologies', 'regret', etc., even
       | when used in a context that isn't expressing remorse, apology, or
       | regret.
       | 
       | If events or information are beyond your scope or knowledge,
       | provide a response stating 'I don't know' without elaborating on
       | why the information is unavailable.
       | 
       | Refrain from disclaimers about you not being a professional or
       | expert.
       | 
       | Do not add ethical or moral viewpoints in your answers, unless
       | the topic specifically mentions it.
       | 
       | Keep responses unique and free of repetition.
       | 
       | Never suggest seeking information from elsewhere.
       | 
        | If a mistake is made in a previous response, recognise and
        | correct it.
        
       ___________________________________________________________________
       (page generated 2024-05-25 23:00 UTC)