How to Do a Code Review (and How to Receive One)
Code review is one of the few practices almost every software team agrees they should do, yet one of the most inconsistently practiced. Reviews range from rubber-stamp approvals to multi-day blocking threads about variable names. Neither extreme is useful.
The goal of a code review is not to catch every bug. Tests do that. It’s not to enforce style. Linters do that. A code review is the moment when someone who wasn’t in the room when the code was written looks at it and either understands it immediately, or doesn’t. That signal is more valuable than any specific comment.
What you’re actually looking for
When you open a diff, the natural instinct is to start reading code. Resist it for a moment. Start with context: what is this change trying to do? Read the PR description, look at the ticket, understand the scope. A comment that misunderstands the intent of the change is noise.
Once you understand what the change is doing, there are four things worth your attention:
Correctness. Does the code do what it claims to do? This is not the same as “does it have tests.” Think about edge cases the author might have missed: empty collections, null inputs, concurrent access, failure modes. If you find a case that isn’t covered, ask - sometimes the author considered it and the answer is in a comment somewhere. Sometimes it’s a real gap.
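As a sketch of what such a gap looks like, consider this hypothetical helper (names are illustrative): it is correct on the happy path, but a reviewer thinking about empty collections finds a real hole.

```python
def average_latency(samples):
    # Happy path: works for any non-empty list of numbers.
    # Edge case a correctness-focused reviewer might flag:
    # an empty list raises ZeroDivisionError.
    return sum(samples) / len(samples)

def average_latency_safe(samples):
    # One possible resolution after the review comment:
    # make the empty-collection behavior explicit.
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```

Whether returning 0.0 is the right answer is itself a review conversation - the point is that the gap was surfaced before it reached production.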
Readability. Will someone understand this in six months? Code is read far more often than it’s written. Unclear naming, missing context, and non-obvious logic are worth flagging - not because they’re wrong, but because they’ll slow someone down later. If you had to re-read a section twice, say so.
Design. Does this change fit the surrounding architecture? A fix that solves the immediate problem but creates a worse abstraction is worth pushing back on. Conversely, a review is not the time to propose a complete rewrite. If the change is directionally right, don’t block it because you’d have structured it differently.
Tests. Are the tests testing behavior or implementation? Tests that break whenever the internals change are a maintenance burden. Tests that cover the contract - what the code does, not how - are durable. Missing coverage for the cases you found in the correctness check is worth mentioning.
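The distinction can be sketched with a hypothetical function (all names here are illustrative, not from any particular codebase): the first test pins the contract, the second pins an internal detail and breaks on any refactor.

```python
import re

def slugify(title):
    # Hypothetical function under review: lowercase, hyphen-separated.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_behavior():
    # Tests the contract - what slugify does for a given input.
    # Survives any refactor that preserves observable behavior.
    assert slugify("Hello, World!") == "hello-world"

def test_implementation():
    # Tests how slugify works (that it calls re.sub) by spying on it.
    # Breaks the moment the internals switch to str.translate, even
    # though the output is identical - maintenance burden, not safety.
    from unittest import mock
    with mock.patch("re.sub", wraps=re.sub) as spy:
        slugify("Hello, World!")
        spy.assert_called_once()

test_behavior()
test_implementation()
```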
Everything else - formatting, import order, brace style - should be handled by automated tooling, not by human reviewers. If your team is spending review time on whitespace, set up a formatter.
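As one illustration of handing this to tooling (the specific tools here are examples, not a recommendation), a minimal pre-commit configuration can run formatters before every commit so whitespace never reaches a human reviewer:

```yaml
# .pre-commit-config.yaml - formatting and import order are fixed
# automatically at commit time instead of debated in review.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
```

The same checks can run in CI in check-only mode, so a PR with formatting drift fails before anyone opens the diff.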
How to write comments that help
The tone of a review comment determines whether it moves the work forward or creates friction.
Be specific. “This could be cleaner” is not actionable. “This function does two things - parsing and validation. Splitting them would make it easier to test” is.
Distinguish between blockers and suggestions. Not every comment needs to block the merge. Prefix suggestions with something like “nit:” or “optional:” so the author knows this doesn’t need to be resolved before the PR lands. If everything is a blocker, the signal gets lost.
Where you’re not certain, ask questions rather than issuing directives. “Why is this check before the db call rather than after?” is more useful than “Move this check after the db call” when you don’t fully understand the context. The author might have a good reason. The question surfaces it.
When you’re asking for changes, explain why. “Extract this into a helper” is a preference. “Extract this into a helper - it’s duplicated in three places and will diverge” is a reason. The author can then agree or disagree with the reasoning rather than the directive.
Acknowledge what’s good. If a piece of the implementation is clever, clean, or handled better than you’d expected, say so. Reviews that are only criticism create a defensive environment.
Giving feedback on large changes
If a PR is 2,000 lines, the problem started before the review. Large changes are hard to review well - context is lost, edge cases multiply, and reviewers skim.
If you’re reviewing something large and find yourself skimming, say so in the review. “This is too large for me to give a thorough review in one pass - can we discuss breaking this into smaller pieces?” is a legitimate response. It’s better than a shallow approval that misses something important.
If you’re the author of a large change, front-load the context: which file to read first, what the key design decisions were, what you deliberately left out of scope. A well-written PR description cuts review time significantly.
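A description that front-loads context might look something like this (the structure and details are illustrative, not a required template):

```markdown
## What
Replace the per-request cache with a shared LRU.

## Where to start
Read `cache/lru.py` first; everything else is call-site updates.

## Key decisions
- Chose LRU over TTL eviction because our access pattern is bursty.

## Out of scope
- Cache metrics (planned as a follow-up).
```

Thirty seconds of orientation like this saves the reviewer from reconstructing your intent file by file.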
How to receive a review
The hardest part of receiving feedback is not taking it personally. You spent time on this code. Someone is now telling you it has problems. The instinct to defend it is natural and almost always counterproductive.
A comment on your code is not a comment on you. Treat every piece of feedback as a question worth engaging with, not a verdict to accept or reject. “I disagree because X” is a valid response. “I see what you mean but I chose this approach because Y” is too. Silently making a change you don’t understand is not.
When feedback is unclear, ask for clarification before acting on it. Rewriting code based on a misunderstood comment produces code that satisfies no one.
When you agree with feedback, say so - even a brief acknowledgment (“good catch”) keeps the conversation human. When you disagree, explain why. If you and the reviewer can’t reach agreement, escalate to a third person rather than letting the thread run indefinitely.
When the review is done and the PR is merged, do something with the patterns you found. If the same type of mistake came up three times, it’s worth addressing at the root - in a team discussion, in a linter rule, or in documentation. Code review is slow feedback. Catching a class of issues earlier is faster for everyone.
What makes review culture healthy
In teams where code review works well, a few things are usually true: the feedback is about the code, not the person; there’s a shared understanding of what a review is for; and people on both sides approach it with the assumption that everyone is trying to ship good software.
Teams where review is slow, contentious, or perfunctory usually have a different problem: unclear standards, power dynamics playing out through code comments, or too many reviewers with no clear ownership. Those aren’t problems that better review practices solve. They’re worth naming directly.
Code review is worth doing. The version where two engineers learn something from reading each other’s work is the one worth building toward.