Developers - Stop Asking LLMs (GenAI) to Write Code!!
Instead - Here are 7 ways to get the most out of Generative AI pair programming tools like ChatGPT or Claude
It's no longer a question of whether to use GenAI tools, but how to use them effectively. Just as digital literacy was crucial for navigating the internet era, AI literacy is becoming essential for engineers to achieve equitable outcomes from AI tools.
For the beginner software engineer, it is very tempting to begin by simply asking the LLM to "write code to do X". In fact, the majority of tutorials on integrating LLMs into a software engineering or data science workflow begin with exactly this step.
I think that is probably the wrong thing to do.
This post is a reflection on some of my observations from heavily using LLMs as part of my day-to-day software engineering workflow, and the strategies that have emerged.
TLDR
Don't start by asking LLMs to write code directly; instead, analyze and provide context
Provide complete context upfront and verify what the LLM needs
Ask probing questions and challenge assumptions
Watch for subtle mistakes (outdated APIs, mixed syntax)
Checkpoint progress to avoid context pollution
Understand every line to maintain knowledge parity
Invest in upfront design
A useful mindset throughout all of this is to treat the model (or system, or agent) as a junior but competent pair-programming colleague, while also remembering that LLMs are autoregressive next-token generators.
Note: This post focuses on a chat workflow, e.g., using an interface like ChatGPT or Claude where the developer directly drives the interaction and context. This contrasts with workflows like GitHub Copilot or Cursor, where context is inferred.
1. Don't Write Code - Analyze First!
Experience has taught me that the best results come when I instruct the model to NOT WRITE CODE immediately. Instead, I start with a message like this:
```
I need help refactoring some code.
Please pay full attention.
Think deeply and confirm with me before you make any changes.
We might be working with code/libs where the API has changed so be mindful of that.
If there is any file you need to inspect to get a better sense, let me know.
As a rule, do not write code. Plan, reason and confirm first.

---

I refactored my db manager class, how should I refactor my tests to fit the changes?
```
Half the time, the LLM will make massive assumptions about your code and problem (e.g., about data types, about the behaviors of imported functions, about unnecessary optimizations, etc.). Instead, prime it to be upfront about those assumptions. More importantly, spend time correcting the plan and closing gaps before any code is written.
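If you drive the model through an API rather than a chat window, the same discipline can be baked into a system prompt. Here is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are my own illustrative choices, not a prescription.

```python
# Sketch: encode the "analyze first, don't write code" rule as a system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANALYZE_FIRST = (
    "You are a careful pair programmer. Do not write code yet. "
    "State your assumptions, list any files you need to inspect, "
    "and confirm a plan with me before making changes."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": ANALYZE_FIRST},
        {"role": "user", "content": "I refactored my db manager class. "
                                    "How should I refactor my tests to fit the changes?"},
    ],
)
print(response.choices[0].message.content)
```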
2. Focus on Providing Context
Context is absolutely critical. It's like trying to solve a puzzle with a colleague - you wouldn't want any pieces hidden from them, would you? This is also akin to design meetings that typically begin with the senior dev providing a "lay of the land" of the codebase and offering to answer questions.
I've learned to start by asking the LLM "what context do you need?" Sometimes, what seems obvious to us isn't visible to the AI at all. A component bug might exist simply because the LLM cannot see the JSON or types it's supposed to process, or because the JSON structure has changed.
```
What context do you need?
Here are some files ...
I use ant design for components, tailwindcss for layout and lucide-react for icons
```
All of this saves you time - time you would otherwise spend correcting the mistakes the LLM makes without that context.
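As a sketch of what providing context can look like outside a chat window, the snippet below concatenates the files the LLM asked for into a single prompt. The file paths are hypothetical placeholders for whatever your task touches.

```python
# Sketch: gather the files the LLM asked for into one context block.
from pathlib import Path

def build_context(paths: list[str]) -> str:
    """Concatenate file contents with headers so the model can cite them."""
    parts = []
    for p in paths:
        parts.append(f"--- {p} ---\n{Path(p).read_text()}")
    return "\n\n".join(parts)

# Hypothetical paths; substitute the files relevant to your task.
context = build_context(["src/db_manager.py", "tests/test_db_manager.py"])
prompt = (
    "Here is the relevant code.\n\n" + context +
    "\n\nWhat other context do you need before proposing changes?"
)
```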
3. Ask Many Questions, Learn
Sometimes your task is to implement new features, get up to speed with a new codebase, etc. In these scenarios you can think of your codebase as a game map with undiscovered locations, where the AI tool can help you uncover new sections - and, more importantly, learn. Experience has taught me to:
Push for deeper understanding of side effects
Challenge assumptions about implementation
Ask for pros and cons of different approaches
Probe for edge cases that might break the solution
Ask for options (e.g., I have found it insightful to ask for alternative algorithms to what I have, alternative libraries, etc.) - see the sketch after this list
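One way to make this a habit is to keep a standing battery of probing follow-ups. Below is a rough sketch, again using the OpenAI Python SDK; the probe wording and the placeholder first message are assumptions to adapt to your task.

```python
# Sketch: a standing battery of probing follow-ups, sent in one conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBES = [
    "What side effects does this approach have?",
    "What assumptions are you making about data types and imported functions?",
    "What are the pros and cons of at least two alternative approaches?",
    "Which edge cases could break this solution?",
    "What alternative algorithms or libraries should I consider?",
]

# Start from whatever plan or diff you are discussing (placeholder here).
messages = [{"role": "user", "content": "Here is my proposed change: ..."}]

for probe in PROBES:
    messages.append({"role": "user", "content": probe})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {probe}\nA: {answer}\n")
```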
Note that LLMs will often converge to the mean solution. This is particularly problematic because many niche use cases need formulas and accommodations that are at the long tail of possibilities. Your senior engineer instinct should kick in here - if you don't ask these questions, you'll end up with mistakes that are even more difficult to find and debug.
In many cases it is important to prompt the model to explore the space of ideas without presenting your own opinion, which would anchor its response. Assuming the goal is to build a visualization component, there is a right and a wrong way to frame the question:
Right: "What are good options, with pros and cons, for visualizing a multiline area chart?" This way the LLM can list out options and you can make the decision, while also learning about directions you probably have not considered.
Wrong: "Should I use ReChart?" The LLM is very likely to say yes even if the context and other factors make it a suboptimal choice. Simply mentioning ReChart fixes the direction of the LLM's response. Recall, LLMs are autoregressive machines.
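To see the anchoring effect for yourself, you can send both framings to the same model and compare the answers. A minimal sketch, assuming the OpenAI Python SDK and an arbitrary model choice:

```python
# Sketch: compare the open framing with the anchored framing side by side.
from openai import OpenAI

client = OpenAI()

OPEN = "What are good options, with pros and cons, for visualizing a multiline area chart?"
ANCHORED = "Should I use ReChart to visualize a multiline area chart?"

for prompt in (OPEN, ANCHORED):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"### {prompt}\n{response.choices[0].message.content}\n")
```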