Prompt Engineering for Code Reviews: Get Useful Feedback from ChatGPT

May 17, 2026 · 7 min read

You paste your function into ChatGPT, type "review this code", and get back a list of formatting suggestions and a reminder to add docstrings. That's not a code review; that's a linter with extra steps. The problem isn't ChatGPT; it's the prompt.

With the right framing, ChatGPT can act like a thorough senior reviewer: flagging edge cases, spotting security gaps, questioning your assumptions, and explaining trade-offs. This guide shows you exactly how to write those prompts.

What you'll learn

  • Why generic prompts produce generic feedback and what to do instead
  • How to give ChatGPT the context it needs to review intelligently
  • Prompt templates for security reviews, logic checks, and performance audits
  • How to iterate on feedback so the conversation stays useful
  • Common mistakes that tank the quality of AI code reviews

Why Generic Prompts Fail

Language models are pattern-completion engines. When your prompt is thin, the model fills in the gaps with the most statistically common response, which for code review means surface-level style comments. It has no idea what your code is supposed to do, who calls it, or what failure modes matter most to you.

Compare these two prompts:

  • Weak: "Review this Python function."
  • Strong: "You are a senior backend engineer reviewing a Python function that processes user-uploaded CSV files in a Django REST API. The function will run in a multi-tenant environment. Focus on: input validation, potential for path traversal, memory usage for large files, and error handling. Do not comment on formatting."

The second prompt constrains the model's output space dramatically. It tells ChatGPT the stack, the threat model, the performance concern, and explicitly excludes the noise you don't need.

The Four Ingredients of a Good Review Prompt

Every high-quality code review prompt contains four things. Miss any one of them and the feedback quality drops noticeably.

1. A Role

Assign ChatGPT a specific reviewer persona. "Senior security engineer", "Python performance expert", "API designer with REST experience": each of these narrows the vocabulary and priorities the model draws on. Don't use "expert" without qualification; it's too broad to shift the model's behavior.

2. Context About the System

Tell ChatGPT where this code lives. Is it a hot path called ten thousand times per second? A one-off migration script? A public API endpoint? The same code has very different review priorities depending on its role in your system.

3. A Focused Checklist

Ask ChatGPT to check for specific things rather than "everything". Pick three to five concerns per review session. You can always run a second pass for a different concern list. Unbounded scope produces unfocused feedback.

4. Output Format Instructions

Tell the model how to present its findings. "Use a numbered list. For each issue, state: the line or code block with the problem, why it's a problem, and a corrected version." Without this, you'll often get a wall of prose that's hard to act on.
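
If you find yourself writing these prompts often, the four ingredients can be assembled mechanically. Here is a minimal sketch in plain Python; the function name and parameters are illustrative, not any library's API:

```python
def build_review_prompt(role, context_lines, checklist, output_format, code):
    """Assemble a code-review prompt from the four ingredients.

    Each ingredient gets its own clearly labeled section: role,
    system context, a focused checklist, and output format rules.
    """
    context = "\n".join(f"- {line}" for line in context_lines)
    checks = "\n".join(f"{i}. {item}" for i, item in enumerate(checklist, 1))
    return (
        f"You are a {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Focus only on:\n{checks}\n\n"
        f"{output_format}\n\n"
        f"Code:\n{code}"
    )

prompt = build_review_prompt(
    role="senior backend engineer",
    context_lines=["This endpoint accepts JSON from unauthenticated users."],
    checklist=["Input validation", "Error handling"],
    output_format=(
        "Use a numbered list. For each issue: quote the code, "
        "explain the risk, show a fix."
    ),
    code="def handler(request): ...",
)
print(prompt)
```

A helper like this is also a natural seed for a team prompt library: hard-code your stack and common context once, and vary only the checklist per review.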

Prompt Templates You Can Use Today

The following templates are starting points. Swap in your language, framework, and concern list as needed.

Security Review

You are a senior application security engineer specializing in Python web applications.

Review the following Django view function for security vulnerabilities.

Context:
- This endpoint accepts JSON from unauthenticated users.
- It writes data to a PostgreSQL database.
- Input comes from a public mobile app.

Focus only on:
1. SQL injection risks (even via ORM misuse)
2. Authentication and authorization gaps
3. Input validation and type coercion issues
4. Sensitive data exposure in error responses

For each issue found:
- Quote the relevant code
- Explain the risk in one sentence
- Provide a corrected version

Code:
[paste your code here]
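
To make items 3 and 4 of that checklist concrete, here is the shape of fix a session with this template tends to produce. This is a framework-agnostic sketch in plain Python, not Django, and the function name is hypothetical:

```python
def parse_amount(payload: dict) -> int:
    """Validate an 'amount' field from untrusted JSON.

    Illustrates two common security-review fixes: reject wrong
    types instead of coercing them, and keep internal details
    out of client-facing error messages.
    """
    amount = payload.get("amount")
    # Strict type check; bool is a subclass of int, so exclude it explicitly.
    if not isinstance(amount, int) or isinstance(amount, bool):
        # Generic message: no stack trace, no echo of the bad value.
        raise ValueError("invalid request")
    if not (0 < amount <= 1_000_000):
        raise ValueError("invalid request")
    return amount
```

The bound of 1,000,000 is a placeholder; the point is that untrusted numeric input gets both a type check and a range check before it touches the database.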

Logic and Edge Case Review

You are a senior software engineer doing a logic review.

Context:
- This function calculates invoice totals for a billing system.
- It must handle: zero-quantity line items, negative adjustments (credits), and currency values with up to 4 decimal places.
- It is called from both the web app and a background job.

Check for:
1. Off-by-one errors or boundary condition bugs
2. Incorrect handling of edge-case inputs (None, empty list, negative values)
3. Floating-point precision issues
4. Any path where the function could silently return a wrong result

Do NOT comment on variable naming or formatting.

For each issue: quote the code, describe the bug, show a fix.

Code:
[paste your code here]
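
Item 3 in that checklist deserves a concrete illustration. Binary floats cannot represent most decimal currency values exactly, which is why a logic review of billing code usually recommends the standard library's `decimal.Decimal`. This snippet is a generic illustration, not the billing system from the template:

```python
from decimal import Decimal

# Float arithmetic drifts: 0.1 + 0.2 is not 0.3 in binary floating point.
float_total = 0.1 + 0.2
print(float_total == 0.3)  # False

# Decimal arithmetic on string inputs stays exact to 4 decimal places.
line_items = ["19.9950", "0.0050", "-5.0000"]  # includes a negative credit
decimal_total = sum(Decimal(x) for x in line_items)
print(decimal_total)  # 15.0000
```

Note that the `Decimal` values are constructed from strings; building them from floats (`Decimal(0.1)`) would bake the float's rounding error back in.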

Performance Review

You are a backend engineer with expertise in Python performance and database query optimization.

Context:
- This function runs on every page load for authenticated users.
- The database is PostgreSQL with roughly 500,000 rows in the relevant tables.
- We are using Django ORM. We cannot switch to raw SQL for this ticket.

Review for:
1. N+1 query problems
2. Missing select_related or prefetch_related calls
3. Unnecessary data loaded into memory
4. Any loop that could be replaced with a bulk operation

Ignore style and formatting issues entirely.

Code:
[paste your code here]
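
The N+1 pattern in item 1 is not ORM-specific; the same shape appears whenever a loop does a per-item lookup that could be batched. A framework-free sketch, where the dicts stand in for database tables and `fetch_authors_by_ids` stands in for a single bulk query:

```python
# Simulated tables: authors and posts that reference them.
AUTHORS = {1: "Ada", 2: "Grace"}
POSTS = [{"id": i, "author_id": 1 + i % 2} for i in range(6)]

def fetch_authors_by_ids(ids):
    # One bulk "query" instead of one query per post.
    return {i: AUTHORS[i] for i in set(ids)}

# N+1 shape: a lookup inside the loop, executed once per post.
# slow = [(p["id"], AUTHORS[p["author_id"]]) for p in POSTS]

# Batched shape: collect the ids, fetch once, then join in memory.
authors = fetch_authors_by_ids(p["author_id"] for p in POSTS)
result = [(p["id"], authors[p["author_id"]]) for p in POSTS]
print(len(result))  # 6
```

In Django terms, the batched shape is what `select_related` (for foreign keys) and `prefetch_related` (for reverse and many-to-many relations) do for you.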

How to Handle the Response

ChatGPT's first response is rarely its best. Treat it as a draft and iterate. Here are follow-up prompts that consistently produce better output:

  • "You flagged X as a problem. Show me a concrete example input that would trigger it." This forces the model to justify its concern with a real scenario rather than a theoretical one.
  • "Are there any issues you didn't mention because I didn't ask? Briefly list them." This opens a second pass without re-running the whole prompt.
  • "Rewrite the corrected version of the function with all your suggested fixes applied." This produces a diff-ready replacement you can compare against your original.
  • "Which of the issues you found is the most critical and why?" Useful when you have limited time and need to prioritize.

Each of these follow-ups costs only a few tokens and often uncovers something the first pass missed.

Structuring Long Code Reviews

ChatGPT has a context window limit, and pasting a 600-line module as one blob rarely works well. The model tends to give shallow feedback when overwhelmed with code it has no map for.

A better approach is to review function by function, or layer by layer. Start with the function that carries the most risk (authentication checks, data ingestion, payment processing) and work outward from there. You can summarize earlier feedback at the start of a new message to maintain continuity: "In the previous messages, you found an input validation gap in parse_upload(). Now review the function that calls it: handle_upload_request()."

This keeps the model's attention focused and the conversation history relevant.
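
If you want to automate the chunking, Python's standard `ast` module can split a module into per-function source blocks to paste one at a time. A minimal sketch; real modules with classes, nested functions, or decorators need more care:

```python
import ast

source = """\
def parse_upload(data):
    return data.decode()

def handle_upload_request(request):
    return parse_upload(request)
"""

tree = ast.parse(source)
# ast.get_source_segment recovers each function's exact source text.
chunks = {
    node.name: ast.get_source_segment(source, node)
    for node in tree.body
    if isinstance(node, ast.FunctionDef)
}
print(list(chunks))  # ['parse_upload', 'handle_upload_request']
```

Each value in `chunks` is a paste-ready block, and the keys give you a natural review order to work through.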

Specifying What You Don't Want

Exclusions are as important as inclusions. If you're doing a logic review and ChatGPT keeps drifting into naming conventions, your prompt needs a negative constraint. Add a line like:

Do not comment on: variable naming, code style, missing docstrings, or anything covered by a linter. Focus only on correctness and security.

This isn't about being rude to the model; it's about protecting your attention. Every style comment ChatGPT produces is a line you read, evaluate, and discard. Excluding them upfront means the signal-to-noise ratio of the response is much higher.

Common Pitfalls to Avoid

Even with good prompt discipline, a few habits will consistently produce poor results.

  • Pasting code without context and hoping for the best. ChatGPT doesn't know your codebase. Without context, it reviews the code as if it exists in isolation, which it never does.
  • Asking for a review and a rewrite in the same prompt. These are separate tasks with different goals. Run them as separate sessions so the model stays focused on each.
  • Accepting the first response as final. The first pass is always a starting point. Follow-up questions almost always surface something useful.
  • Reviewing too much code in one go. Large pastes produce generic feedback. Keep each review to a function or a small set of related functions.
  • Treating AI feedback as ground truth. ChatGPT can be confidently wrong, especially about framework-specific behavior or library internals. Verify any fix it suggests before merging.

Using ChatGPT Alongside Your Existing Review Process

AI code review works best as a pre-review step, not a replacement for human review. Before you open a pull request, run your changes through a focused ChatGPT session targeting your highest-risk concern. Fix what you find. Then open the PR for human review.

This pattern means your human reviewers spend less time catching obvious issues and more time on architecture, business logic, and team-specific concerns that ChatGPT can't access. It also means you arrive at the PR with more confidence in what you're shipping.

If your team uses a PR template, consider adding a short field: "AI review performed: yes/no, focus areas: ____". It creates a lightweight paper trail and nudges the habit without making it mandatory.

Wrapping Up

The gap between a useless AI code review and a useful one is almost entirely in how you ask. Here are the concrete steps to take from here:

  1. Pick one function from your current project that carries real risk and run it through the security review template above.
  2. Add a "do not comment on" line to every review prompt you write from now on.
  3. Run at least one follow-up prompt after the initial response, specifically the one asking for a concrete example input that triggers each flagged issue.
  4. Build a small prompt library for your team: a security template, a logic template, and a performance template that already include your stack and common context.
  5. Verify every suggested fix against your framework's documentation before applying it. Trust but verify.

Better prompts take thirty extra seconds to write and save you hours of back-and-forth. Start with one review today.
