https://addyo.substack.com/p/the-prompt-engineering-playbook-for

The Prompt Engineering Playbook for Programmers

Turn AI coding assistants into more reliable development partners

Addy Osmani · May 27, 2025

Developers are increasingly relying on AI coding assistants to accelerate our daily workflows. These tools can autocomplete functions, suggest bug fixes, and even generate entire modules or MVPs. Yet, as many of us have learned, the quality of the AI's output depends largely on the quality of the prompt you provide. In other words, prompt engineering has become an essential skill. A poorly phrased request can yield irrelevant or generic answers, while a well-crafted prompt can produce thoughtful, accurate, and even creative code solutions. This write-up takes a practical look at how to systematically craft effective prompts for common development tasks.

AI pair programmers are powerful but not magical - they have no prior knowledge of your specific project or intent beyond what you tell them or include as context. The more information you provide, the better the output. We'll distill key prompt patterns, repeatable frameworks, and memorable examples that have resonated with developers. You'll see side-by-side comparisons of good vs. bad prompts with actual AI responses, along with commentary to understand why one succeeds where the other falters.

Foundations of effective code prompting

Prompting an AI coding tool is somewhat like communicating with a very literal, sometimes knowledgeable collaborator. To get useful results, you need to set the stage clearly and guide the AI on what you want and how you want it. Below are foundational principles that underpin all examples in this playbook:

* Provide rich context. Always assume the AI knows nothing about your project beyond what you provide. Include relevant details such as the programming language, framework, and libraries, as well as the specific function or snippet in question. If there's an error, provide the exact error message and describe what the code is supposed to do. Specificity and context make the difference between vague suggestions and precise, actionable solutions. In practice, this means your prompt might include a brief setup like: "I have a Node.js function using Express and Mongoose that should fetch a user by ID, but it throws a TypeError. Here's the code and error...". The more setup you give, the less the AI has to guess.

* Be specific about your goal or question. Vague queries lead to vague answers. Instead of asking something like "Why isn't my code working?", pinpoint what insight you need. For example: "This JavaScript function is returning undefined instead of the expected result. Given the code below, can you help identify why and how to fix it?" is far more likely to yield a helpful answer. One prompt formula for debugging is: "It's expected to do [expected behavior] but instead it's doing [current behavior] when given [example input]. Where is the bug?" Similarly, if you want an optimization, ask for a specific kind of optimization (e.g. "How can I improve the runtime performance of this sorting function for 10k items?").
Specificity guides the AI's focus.

* Break down complex tasks. When implementing a new feature or tackling a multi-step problem, don't feed the entire problem in one gigantic prompt. It's often more effective to split the work into smaller chunks and iterate. For instance, "First, generate a React component skeleton for a product list page. Next, we'll add state management. Then, we'll integrate the API call." Each prompt builds on the previous. It's generally not advisable to ask for a whole large feature in one go; instead, start with a high-level goal and then iteratively ask for each piece. This approach not only keeps the AI's responses focused and manageable, but also mirrors how a human would incrementally build a solution.

* Include examples of inputs/outputs or expected behavior. If you can illustrate what you want with an example, do it. For example, "Given the array [3,1,4], this function should return [1,3,4]." Providing a concrete example in the prompt helps the AI understand your intent and reduces ambiguity. It's akin to giving a junior developer a quick test case - it clarifies the requirements. In prompt engineering terms, this is sometimes called "few-shot prompting," where you show the AI a pattern to follow. Even one example of correct behavior can guide the model's response significantly (see the sketch after this list).

* Leverage roles or personas. A powerful technique popularized in many viral prompt examples is to ask the AI to "act as" a certain persona or role. This can influence the style and depth of the answer. For instance, "Act as a senior React developer and review my code for potential bugs" or "You are a JavaScript performance expert. Optimize the following function." By setting a role, you prime the assistant to adopt the relevant tone - whether it's being a strict code reviewer, a helpful teacher for a junior dev, or a security analyst looking for vulnerabilities. Community-shared prompts have shown success with this method, such as "Act as a JavaScript error handler and debug this function for me. The data isn't rendering properly from the API call." In our own usage, we must still provide the code and problem details, but the role-play prompt can yield more structured and expert-level guidance.

* Iterate and refine the conversation. Prompt engineering is an interactive process, not a one-shot deal. Developers often need to review the AI's first answer and then ask follow-up questions or make corrections. If the solution isn't quite right, you might say, "That solution uses recursion, but I'd prefer an iterative approach - can you try again without recursion?" Or, "Great, now can you improve the variable names and add comments?" The AI remembers the context in a chat session, so you can progressively steer it to the desired outcome. The key is to view the AI as a partner you can coach - progress over perfection on the first try.

* Maintain code clarity and consistency. This last principle is a bit indirect but very important for tools that work on your code context. Write clean, well-structured code and comments, even before the AI comes into play. Meaningful function and variable names, consistent formatting, and docstrings not only make your code easier to understand for humans, but also give the AI stronger clues about what you're doing. If you show a consistent pattern or style, the AI will continue it. Treat these tools as extremely attentive junior developers - they take every cue from your code and comments.
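To make the "few-shot" idea concrete, here is a minimal sketch of what an example-driven prompt can look like when written as comments for an inline assistant such as Copilot. The function name and the second sample value are illustrative, not taken from a real codebase; only the [3,1,4] example comes from the list above.

```js
// Prompt, written as comments directly above the spot where you want the completion.
// The input/output examples tell the assistant exactly what behavior you expect.

// sortNumbersAscending([3, 1, 4]) should return [1, 3, 4]
// sortNumbersAscending([])        should return []
// Do not mutate the input array.
function sortNumbersAscending(numbers) {
  // A completion consistent with the examples: copy the array, then sort numerically.
  return [...numbers].sort((a, b) => a - b);
}
```

The same pattern works in a chat prompt: paste one or two input/output pairs before asking for the implementation, and the assistant has far less room to misread your intent.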
With these foundational principles in mind, let's dive into specific scenarios. We'll start with debugging, perhaps the most immediate use-case: you have code that's misbehaving, and you want the AI to help figure out why.

Prompt patterns for debugging code

Debugging is a natural fit for an AI assistant. It's like having a rubber duck that not only listens, but actually talks back with suggestions. However, success largely depends on how you present the problem to the AI. Here's how to systematically prompt for help in finding and fixing bugs:

1. Clearly describe the problem and symptoms. Begin your prompt by describing what is going wrong and what the code is supposed to do. Always include the exact error message or incorrect behavior. For example, instead of just saying "My code doesn't work," you might prompt: "I have a function in JavaScript that should calculate the sum of an array of numbers, but it's returning NaN (Not a Number) instead of the actual sum. Here is the code: [include code]. It should output a number (the sum) for an array of numbers like [1,2,3], but I'm getting NaN. What could be the cause of this bug?" This prompt specifies the language, the intended behavior, the observed wrong output, and provides the code context - all crucial information. Providing a structured context (code + error + expected outcome + what you've tried) gives the AI a solid starting point. By contrast, a generic question like "Why isn't my function working?" yields meager results - the model can only offer the most general guesses without context.

2. Use a step-by-step or line-by-line approach for tricky bugs. For more complex logic bugs (where no obvious error message is thrown but the output is wrong), you can prompt the AI to walk through the code's execution. For instance: "Walk through this function line by line and track the value of total at each step. It's not accumulating correctly - where does the logic go wrong?" This is an example of a rubber duck debugging prompt - you're essentially asking the AI to simulate the debugging process a human might do with prints or a debugger. Such prompts often reveal subtle issues like variables not resetting or incorrect conditional logic, because the AI will spell out the state at each step. If you suspect a certain part of the code, you can zoom in: "Explain what the filter call is doing here, and whether it might be excluding more items than it should." Engaging the AI in an explanatory role can surface the bug in the process of explanation.

3. Provide minimal reproducible examples when possible. Sometimes your actual codebase is large, but the bug can be demonstrated in a small snippet. If you can extract or simplify the code that still reproduces the issue, do so and feed that to the AI. This not only makes it easier for the AI to focus, but also forces you to clarify the problem (often a useful exercise in itself). For example, if you're getting a TypeError in a deeply nested function call, try to reproduce it with a few lines that you can share. Aim to isolate the bug with the minimum code, make an assumption about what's wrong, test it, and iterate. You can involve the AI in this by saying: "Here's a pared-down example that still triggers the error [include snippet]. Why does this error occur?" By simplifying, you remove noise and help the AI pinpoint the issue. (This technique mirrors the advice of many senior engineers: if you can't immediately find a bug, simplify the problem space.
The AI can assist in that analysis if you present a smaller case to it.)

4. Ask focused questions and follow-ups. After providing context, it's often effective to directly ask what you need, for example: "What might be causing this issue, and how can I fix it?" This invites the AI to both diagnose and propose a solution. If the AI's first answer is unclear or only partially helpful, don't hesitate to ask a follow-up. You could say, "That explanation makes sense. Can you show me how to fix the code? Please provide the corrected code." In a chat setting, the AI has the conversation history, so it can directly output the modified code. If you're using an inline tool like Copilot in VS Code or Cursor without a chat, you might instead write a comment above the code like // BUG: returns NaN, fix this function and see how it autocompletes - but in general, the interactive chat yields more thorough explanations. Another follow-up pattern: if the AI gives a fix but you don't understand why, ask "Can you explain why that change solves the problem?" This way you learn for next time, and you double-check that the AI's reasoning is sound.

Now, let's illustrate these debugging prompt principles with a concrete example, showing a poor prompt vs. an improved prompt and the difference in AI responses.

Debugging example: poor vs. improved prompt

Imagine we have a simple Node.js function meant to convert a list of user objects to a lookup map by user ID. However, it's throwing an error. Here's the buggy code:

```js
// Buggy function: converts array of users to a map by ID
function mapUsersById(users) {
  const userMap = {};
  for (let i = 0; i <= users.length; i++) { // Note: using <= will go out of bounds on the last iteration
    const user = users[i];
    userMap[user.id] = user;
  }
  return userMap;
}

// Example usage:
const result = mapUsersById([{ id: 1, name: "Alice" }]);
```

This code is supposed to iterate through the users array and build an object where keys are user IDs. However, there is a bug: the loop uses i <= users.length instead of i < users.length. This will cause users[i] to be undefined on the last iteration, likely leading to a runtime error (e.g., attempting to access property id of undefined). Let's see how an AI assistant might respond to two different prompts about this issue.

Poor Prompt: "Why isn't my mapUsersById function working?"

AI's response to the poor prompt (simulated): It's hard to say without more information. Are you sure the function is being called with a proper array of users? Perhaps the issue is that users is empty or not an array, causing the loop not to execute properly. It could also be that there's an error when setting the property on userMap. Make sure that each user has an id property. If users is undefined or null, that would also cause it to not work.

Analysis: This answer is quite vague. The prompt gave no details except the function name, so the AI is guessing at generic causes ("maybe the array is empty or not an array", "make sure each user has an id"). It did not identify the actual bug. This is the kind of response you get when the question is underspecified - the model can only regurgitate common debugging tips. As a developer, this isn't very helpful because it doesn't address the real mistake. The lack of context and specificity in the prompt yields a generic answer (essentially the AI equivalent of "have you tried turning it off and on again?"). This aligns with the community advice that vague prompts lead to irrelevant or generic answers.
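Before writing the improved prompt below, it helps to reduce the failure to a tiny script so you can capture the exact error text to paste in - the "minimal reproducible example" tactic from point 3. A minimal sketch of that step, assuming the snippet can be run standalone with Node; the file name is illustrative:

```js
// repro.js - minimal reproduction of the mapUsersById failure (file name is illustrative)
function mapUsersById(users) {
  const userMap = {};
  for (let i = 0; i <= users.length; i++) { // bug: should be i < users.length
    const user = users[i];
    userMap[user.id] = user;
  }
  return userMap;
}

// Running `node repro.js` throws a TypeError about reading 'id' from undefined;
// the exact wording varies slightly between Node versions.
console.log(mapUsersById([{ id: 1, name: "Alice" }]));
```

Copying that one-line error verbatim into the prompt, together with the snippet and the expected output, is exactly what turns the vague question above into the targeted one below.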
Improved Prompt: "I have a JavaScript function mapUsersById that should convert an array of user objects into a map (object) keyed by user ID. However, it throws an error when I run it. For example, when I pass [ {id: 1, name: "Alice"} ], I get TypeError: Cannot read property 'id' of undefined. Here is the function code: function mapUsersById(users) { const userMap = {}; for (let i = 0; i <= users.length; i++) { const user = users[i]; userMap[user.id] = user; } return userMap; } It should return { "1": {id: 1, name: "Alice"} }. What is the bug and how can I fix it?" AI's response to the improved prompt (simulated): *The error Cannot read property 'id' of undefined suggests that at some point, user is undefined, and then you try to access user.id. Looking at your loop, the issue is with the loop bounds. You are using i <= users.length. If users.length is 1 (for one user in the array), the loop runs with i = 0 and i = 1. When i is 1, users[1] is undefined (since the only index in the array is 0). Thus user becomes undefined on that iteration, and user.id throws a TypeError. The fix is to change the loop condition to i < users.length so that i goes from 0 to users.length-1. For example: for (let i = 0; i < users.length; i++) { const user = users[i]; userMap[user.id] = user; } This way, i will be 0 for the first (and only) element, and then the loop stops. With that change, your function will no longer hit an undefined user and should work as expected, returning {"1": {id: 1, name: "Alice"}} for the given input.* Analysis: This response directly pinpoints the bug (using <= in the loop) and explains why it causes user to be undefined. It then provides the correct code solution (i < users.length) and confirms the expected outcome. The improved prompt succeeded because it followed our principles: it specified the language (JavaScript), described the function's purpose, included the exact error message and a sample input, and even provided the code snippet in question. Notice how the AI used the error message as a clue to focus on the loop bounds - a targeted prompt enabled the AI to engage in true problem-solving, effectively simulating how a human debugger would think: "where could undefined come from? likely from the loop indexing". This is a concrete demonstration of the benefit of detailed prompts. Additional Debugging Tactics: Beyond identifying obvious bugs, you can use prompt engineering for deeper debugging assistance: * Ask for potential causes. If you're truly stumped, you can broaden the question slightly: "What are some possible reasons for a TypeError: cannot read property 'foo' of undefined in this code?" along with the code. The model might list a few scenarios (e.g. the object wasn't initialized, a race condition, wrong variable scoping, etc.). This can give you angles to investigate that you hadn't considered. It's like brainstorming with a colleague. * "Ask the Rubber Duck" - i.e., explain your code to the AI. This may sound counterintuitive (why explain to the assistant?), but the act of writing an explanation can clarify your own understanding, and you can then have the AI verify or critique it. For example: "I will explain what this function is doing: [your explanation]. Given that, is my reasoning correct and does it reveal where the bug is?" The AI might catch a flaw in your explanation that points to the actual bug. This technique leverages the AI as an active rubber duck that not only listens but responds. * Have the AI create test cases. 
You can ask: "Can you provide a couple of test cases (inputs) that might break this function?" The assistant might come up with edge cases you didn't think of (empty array, extremely large numbers, null values, etc.). This is useful both for debugging and for generating tests for future robustness. * Role-play a code reviewer. As an alternative to a direct "debug this" prompt, you can say: "Act as a code reviewer. Here's a snippet that isn't working as expected. Review it and point out any mistakes or bad practices that could be causing issues: [code]". This sets the AI into a critical mode. Many developers find that phrasing the request as a code review yields a very thorough analysis, because the model will comment on each part of the code (and often, in doing so, it spots the bug). In fact, one prompt engineering tip is to explicitly request the AI to behave like a meticulous reviewer . This can surface not only the bug at hand but also other issues (e.g. potential null checks missing) which might be useful. In summary, when debugging with an AI assistant, detail and direction are your friends. Provide the scenario, the symptoms, and then ask pointed questions. The difference between a flailing "it doesn't work, help!" prompt and a surgical debugging prompt is night and day, as we saw above. Next, we'll move on to another major use case: refactoring and improving existing code. Prompt patterns for refactoring and optimization Refactoring code - making it cleaner, faster, or more idiomatic without changing its functionality - is an area where AI assistants can shine. They've been trained on vast amounts of code, which includes many examples of well-structured, optimized solutions. However, to tap into that knowledge effectively, your prompt must clarify what "better" means for your situation. Here's how to prompt for refactoring tasks: 1. State your refactoring goals explicitly. "Refactor this code" on its own is too open-ended. Do you want to improve readability? Reduce complexity? Optimize performance? Use a different paradigm or library? The AI needs a target. A good prompt frames the task, for example: "Refactor the following function to improve its readability and maintainability (reduce repetition, use clearer variable names)." Or "Optimize this algorithm for speed - it's too slow on large inputs." By stating specific goals, you help the model decide which transformations to apply . For instance, telling it you care about performance might lead it to use a more efficient sorting algorithm or caching, whereas focusing on readability might lead it to break a function into smaller ones or add comments. If you have multiple goals, list them out. A prompt template from the Strapi guide suggests even enumerating issues: "Issues I'd like to address: 1) [performance issue], 2) [code duplication], 3) [outdated API usage]." . This way, the AI knows exactly what to fix. Remember, it will not inherently know what you consider a problem in the code - you must tell it. 2. Provide the necessary code context. When refactoring, you'll typically include the code snippet that needs improvement in the prompt. It's important to include the full function or section that you want to be refactored, and sometimes a bit of surrounding context if relevant (like the function's usage or related code, which could affect how you refactor). Also mention the language and framework, because "idiomatic" code varies between, say, idiomatic Node.js vs. idiomatic Deno, or React class components vs. 
functional components. For example: "I have a React component written as a class. Please refactor it to a functional component using Hooks." The AI will then apply the typical steps (using useState, useEffect, etc.). If you just said "refactor this React component" without clarifying the style, the AI might not know you specifically wanted Hooks.

* Include version or environment details if relevant. For instance, "This is a Node.js v14 codebase" or "We're using ES6 modules". This can influence whether the AI uses certain syntax (like import/export vs. require), which is part of a correct refactoring. If you want to ensure it doesn't introduce something incompatible, mention your constraints.

3. Encourage explanations along with the code. A great way to learn from an AI-led refactor (and to verify its correctness) is to ask for an explanation of the changes. For example: "Please suggest a refactored version of the code, and explain the improvements you made." This was even built into the prompt template we referenced: "...suggest refactored code with explanations for your changes." When the AI provides an explanation, you can assess whether it understood the code and met your objectives. The explanation might say: "I combined two similar loops into one to reduce duplication, and I used a dictionary for faster lookups," etc. If something sounds off in the explanation, that's a red flag to examine the code carefully. In short, use the AI's ability to explain as a safeguard - it's like having the AI perform a code review on its own refactor.

4. Use role-play to set a high standard. As mentioned earlier, asking the AI to act as a code reviewer or senior engineer can be very effective. For refactoring, you might say: "Act as a seasoned TypeScript expert and refactor this code to align with best practices and modern standards." This often yields not just superficial changes, but more insightful improvements, because the AI tries to live up to the "expert" persona. A popular example from a prompt guide is having the AI role-play a mentor: "Act like an experienced Python developer mentoring a junior. Provide explanations and write docstrings. Rewrite the code to optimize it." The result in that case was that the AI used a more efficient data structure (a set to remove duplicates) and provided a one-line solution for a function that originally used a loop. The role-play helped it not only refactor but also explain why the new approach is better (in that case, using a set is a well-known optimization for uniqueness).

Now, let's walk through an example of refactoring to see how a prompt can influence the outcome. We will use a scenario in JavaScript (Node.js) where we have some less-than-ideal code and we want it improved.

Refactoring example: poor vs. improved prompt

Suppose we have a function that makes two API calls and does some processing. It works, but it's not pretty - there's duplicated code and it's hard to read. We'd like to refactor it for clarity and efficiency.
Here's the original code:

```js
// Original function: Fetches two lists and processes them (needs refactoring)
async function getCombinedData(apiClient) {
  // Fetch list of users
  const usersResponse = await apiClient.fetch('/users');
  if (!usersResponse.ok) {
    throw new Error('Failed to fetch users');
  }
  const users = await usersResponse.json();

  // Fetch list of orders
  const ordersResponse = await apiClient.fetch('/orders');
  if (!ordersResponse.ok) {
    throw new Error('Failed to fetch orders');
  }
  const orders = await ordersResponse.json();

  // Combine data (match user to orders)
  const result = [];
  for (let user of users) {
    const userOrders = orders.filter(o => o.userId === user.id);
    result.push({ user, orders: userOrders });
  }
  return result;
}
```

Issues in this code: We see duplicated fetch logic for users and orders (could be refactored into a helper or loop), lack of error detail (just a generic error message), and it could perhaps run the fetches in parallel to improve speed. Let's see two prompting approaches:

Prompt 1 (Underspecified): "Refactor the above getCombinedData function."

AI's response to Prompt 1: Sure. Here's a refactored version of getCombinedData:

```js
async function getCombinedData(apiClient) {
  // Fetch users and orders in parallel
  const [usersResponse, ordersResponse] = await Promise.all([
    apiClient.fetch('/users'),
    apiClient.fetch('/orders')
  ]);
  if (!usersResponse.ok || !ordersResponse.ok) {
    throw new Error('Failed to fetch data');
  }
  const [users, orders] = await Promise.all([
    usersResponse.json(),
    ordersResponse.json()
  ]);

  // Map orders by userId for efficient lookup
  const ordersByUser = {};
  for (const order of orders) {
    if (!ordersByUser[order.userId]) {
      ordersByUser[order.userId] = [];
    }
    ordersByUser[order.userId].push(order);
  }

  // Combine data
  return users.map(user => ({
    user,
    orders: ordersByUser[user.id] || []
  }));
}
```

I fetched users and orders in parallel to improve performance. I also consolidated error handling and used a map (ordersByUser) to group orders by user, which makes the combination step cleaner and potentially faster.

Analysis: This is actually a pretty good refactor, hitting several improvements: parallel fetching, grouping orders by user for efficiency, cleaner combination using .map and defaulting to an empty array. The AI also explained what it did. So why do we label this prompt as underspecified or "poor"? In this case, the AI happened to make good choices, but it assumed a lot because the prompt was open-ended. It assumed that performance should be improved (parallel requests, which is a smart change - but what if parallel wasn't safe here?), and it changed error handling to a generic message "Failed to fetch data," which in combining the errors lost the specificity of which call failed. The risk with a generic "Refactor this" prompt is that the AI might make changes that alter behavior in unintended ways. Here, combining error handling might be acceptable or not, depending on requirements. The assistant also made a structural change (using a dictionary to group orders) - that's fine and probably for the better, but if we had specific constraints (like wanting to stream results or keep memory usage low), we didn't inform it. So while the result looks good, it was somewhat lucky that the AI's own interpretation of "refactor" aligned with what we wanted. This illustrates that an underspecified prompt can sometimes get good results on simple cases, but for complex code, it might go off in an undesired direction or miss certain improvements you care about.
Prompt 2 (Goal-Oriented): "Refactor the above getCombinedData function to eliminate duplicate code and improve performance. Specifically: (1) Avoid repeating the fetch logic for users and orders - maybe use a helper or fetch them together. (2) Fetch both lists in parallel if possible. (3) Keep the error handling for each fetch (we want to know which call failed). (4) Improve the combination of data, possibly by using a more efficient structure for lookup instead of a nested loop. Provide the refactored code with comments explaining the changes."

AI's response to Prompt 2: Sure! Here's an improved version of getCombinedData with the requested refactoring:

```js
async function getCombinedData(apiClient) {
  // 1. Fetch users and orders in parallel to improve performance
  const [usersResponse, ordersResponse] = await Promise.all([
    apiClient.fetch('/users'),
    apiClient.fetch('/orders')
  ]);

  // 2. Handle errors separately to know which call failed
  if (!usersResponse.ok) {
    throw new Error('Failed to fetch users');
  }
  if (!ordersResponse.ok) {
    throw new Error('Failed to fetch orders');
  }

  // Parse JSON responses in parallel
  const [users, orders] = await Promise.all([
    usersResponse.json(),
    ordersResponse.json()
  ]);

  // 3. Use a lookup map for orders by userId for efficient matching
  const ordersByUser = orders.reduce((map, order) => {
    if (!map[order.userId]) {
      map[order.userId] = [];
    }
    map[order.userId].push(order);
    return map;
  }, {});

  // 4. Combine data using the lookup map
  return users.map(user => ({
    user,
    orders: ordersByUser[user.id] || [] // empty array if no orders for user
  }));
}
```

Explanation of changes:

* Parallel fetching: I used Promise.all to fetch users and orders at the same time (point 1).
* Preserved individual error handling: I kept separate checks for usersResponse.ok and ordersResponse.ok so that any error clearly indicates which fetch failed (point 2).
* Parallel JSON parsing: Also used Promise.all for the response.json() calls, which is a minor improvement.
* Efficient combination: Instead of a nested loop, I created an ordersByUser map (point 3). This groups orders by userId in a single pass. Then I map over users to build the result, pulling each user's orders from the map in O(1) time (point 4). This is more efficient and concise.

Analysis: This response directly addressed all the specified goals. The code is refactored to be cleaner and faster, and it maintained separate error messages as requested. The AI's explanation confirms each point we listed, which shows it carefully followed the prompt instructions. This is a great outcome because we, as the prompter, defined what "refactor" meant in this context. By doing so, we guided the AI to produce a solution that matches our needs with minimal back-and-forth. If the AI had overlooked one of the points (say it still merged the error handling), we could easily prompt again: "Looks good, but please ensure the error messages remain distinct for users vs orders." - however, in this case it wasn't needed because our prompt was thorough. This example demonstrates a key lesson: when you know what you want improved, spell it out. AI is good at following instructions, but it won't read your mind. A broad "make this better" might work for simple things, but for non-trivial code, you'll get the best results by enumerating what "better" means to you. This aligns with community insights that clear, structured prompts yield significantly improved results.
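As a quick guard against unintended behavior changes, you can run (or ask the AI to generate) a small check against a stubbed client before adopting the refactor. A minimal sketch under stated assumptions: the fakeApiClient, the module path, the file name, and the sample data are all made up for illustration and are not from the article.

```js
// check-refactor.js - illustrative sanity check for getCombinedData (names/data are hypothetical)
const assert = require('node:assert/strict');
const { getCombinedData } = require('./getCombinedData'); // module path is illustrative

// Stub that mimics the shape getCombinedData expects: fetch() -> { ok, json() }.
const fakeApiClient = {
  fetch: async (path) => ({
    ok: true,
    json: async () =>
      path === '/users'
        ? [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]
        : [{ id: 10, userId: 1, total: 99 }], // one order, belonging to Alice
  }),
};

(async () => {
  const result = await getCombinedData(fakeApiClient);
  // The same expectations the original implementation satisfied:
  assert.equal(result.length, 2);
  assert.equal(result[0].orders.length, 1); // Alice has one order
  assert.deepEqual(result[1].orders, []);   // Bob has none
  console.log('refactor preserves behavior for this input');
})();
```

If the original version passes the same assertions, you have at least some evidence that the refactor is behavior-preserving for these inputs, which is exactly the kind of verification the next set of tips recommends.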
Additional Refactoring Tips:

* Refactor in steps: If the code is very large or you have a long list of changes, you can tackle them one at a time. For example, first ask the AI to "refactor for readability" (focus on renaming, splitting functions), then later "optimize the algorithm in this function." This prevents overwhelming the model with too many instructions at once and lets you verify each change stepwise.

* Ask for alternative approaches: Maybe the AI's first refactor works, but you're curious about a different angle. You can ask, "Can you refactor it in another way, perhaps using a functional programming style (e.g. array methods instead of loops)?" or "How about using recursion here instead of an iterative approach, just to compare?" This way, you can evaluate different solutions. It's like brainstorming multiple refactoring options with a colleague.

* Combine refactoring with explanation to learn patterns: We touched on this, but it's worth emphasizing - use the AI as a learning tool. If it refactors code in a clever way, study the output and explanation. You might discover a new API or technique (like using reduce to build a map) that you hadn't used before. This is one reason to ask for explanations: it turns an answer into a mini-tutorial, reinforcing your understanding of best practices.

* Validation and testing: After any AI-generated refactor, always run your tests or try the code with sample inputs. AI might inadvertently introduce subtle bugs, especially if the prompt didn't specify an important constraint. For example, in our refactor, if the original code intentionally separated fetch errors for logging but we didn't mention logging, the combined error might be less useful. It's our job to catch that in review. The AI can help by writing tests too - you could ask "Generate a few unit tests for the refactored function" to ensure it behaves the same as before on expected inputs.

At this point, we've covered debugging and refactoring - improving existing code. The next logical step is to use AI assistance for implementing new features or generating new code. We'll explore how to prompt for that scenario effectively.

Modern debugging scenarios

React Hook dependency issues

Poor Prompt: "My useEffect isn't working right"

Enhanced Prompt: I have a React component that fetches user data, but it's causing infinite re-renders. Here's my code:

```jsx
const UserProfile = ({ userId }) => {
  const [user, setUser] = useState(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetchUser(userId).then(setUser).finally(() => setLoading(false));
  }, [userId, setUser, setLoading]); // Problem is here

  return loading ?
```