Better Code Reviews

Most developers have heard that peer code reviews can do wonders for a codebase.

But not all types of code reviews are effective.

Some code reviews seem to go on and on forever, while others pick at syntax and formatting, but miss major bugs that somehow end up in production!

So—what makes a good code review?

I asked some developers, and here's what I learned.*

Overarching Themes


No one feels good about a code review process that's just a formality and doesn't carry any weight.

We have one dev who just blindly thumbs-up every PR and rarely leaves comments. That person is attempting to game the rule of “at least two approvals”. It is easy to tell, because inside of one minute they will suddenly have approved 5–6 PRs. I would fire this person for this alone were it in my power.

I find that the 2nd or 3rd reviewer is often more likely to rubber stamp after seeing one approval. Perhaps blind reviews (minus comments left?) could be helpful?
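Rubber-stamping like this tends to leave a trace in the data: a burst of approvals from one reviewer in a very short window. As a sanity check, you could export review events from your code host's API and look for exactly that pattern. Here's a minimal sketch; the record format, the 60-second window, and the threshold of four PRs are all illustrative assumptions, not a standard.

```python
from collections import defaultdict
from datetime import datetime

def flag_rapid_approvers(approvals, window_seconds=60, min_count=4):
    """Flag reviewers who approve many distinct PRs inside a short window.

    `approvals` is a list of (reviewer, pr_id, iso_timestamp) tuples,
    e.g. exported from a code host's review API. A reviewer who approves
    `min_count` or more PRs within `window_seconds` is unlikely to have
    actually read them all.
    """
    by_reviewer = defaultdict(list)
    for reviewer, pr_id, ts in approvals:
        by_reviewer[reviewer].append((datetime.fromisoformat(ts), pr_id))

    flagged = set()
    for reviewer, events in by_reviewer.items():
        events.sort()
        for i in range(len(events)):
            j = i
            # Extend the window as far as it reaches from event i.
            while (j + 1 < len(events)
                   and (events[j + 1][0] - events[i][0]).total_seconds() <= window_seconds):
                j += 1
            if j - i + 1 >= min_count:
                flagged.add(reviewer)
                break
    return flagged
```

A reviewer flagged by a script like this isn't necessarily gaming the system, but it's a concrete starting point for a conversation instead of a hunch.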


It's not fun to review a long PR with code that you're unfamiliar with, or have no context for.

The git diff is too big to quickly understand.

Commits are too big, so PRs take a long time to review. People don't check out the branch locally to test.

Especially long PRs take longer to be reviewed, which is an issue because they have the most effect on future branches, PRs, and merges.
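The "too big to review" problem is easy to make visible before a review is ever requested: `git diff --numstat` prints added and removed line counts per file, and summing them gives a rough size for the PR. Below is a small sketch that parses that output; the 400-line limit is an arbitrary example, not a recommendation.

```python
def diff_size(numstat_output, limit=400):
    """Sum changed lines from `git diff --numstat` text output.

    numstat prints tab-separated "added<TAB>removed<TAB>path" lines;
    binary files show "-" for both counts and are skipped here.
    Returns (total_changed_lines, over_limit).
    """
    total = 0
    for line in numstat_output.strip().splitlines():
        added, removed, _path = line.split("\t", 2)
        if added == "-" or removed == "-":
            continue  # binary file, no line counts
        total += int(added) + int(removed)
    return total, total > limit
```

Wired into CI (feeding it something like the output of `git diff --numstat main...HEAD`), a check like this can nudge authors to split oversized branches before anyone has to slog through them.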

Equal Reviews

To err is human, and we're all human. We should all be reviewed, and review others fairly.

There have been times when the same code has been reviewed differently depending on who submits the PR.

Everyone on the team should receive equal review. I feel like it’s so common for senior people to get no feedback because people assume they can do no wrong, but they totally can, and might want feedback. And junior people get nitpicked to death… remember, people’s self-esteem is likely to be affected, and they’re being vulnerable.

Code reviews need to be acknowledged as a first class citizen in a development team and a legitimate activity that needs time and focus.


Comments nitpicking purely at syntax lead to a negative experience. Style and semantics are not the same thing.

Out of 507 total survey responses, 5% used the word “nitpick” to describe code review comments in a negative context.

I take any feedback on PRs at face value. If changing that string to a symbol will make you happy, let’s do that and move on. I’m not going to try to justify an arbitrary decision. It’s not unlike working in an IDE: it’s very easy for my brain to fall into a “see red squiggle, fix red squiggle” mindset. I don’t really care why it’s upset; just make it stop yelling at me.

This makes me sad. The best parts of any system I’ve worked on were the result of someone questioning an assumption and starting a conversation. Face-to-face this takes minutes from thought to alternative implementation. But with a PR, I don’t know if you agree with me in the first place, so I don’t know if I should build on that thought. And if we start a back-and-forth — I start to get antsy about getting to “done”.

People need to do better jobs distinguishing between their own stylistic preferences and feedback that makes a functional difference. It can be tough for a more junior person to figure out which is which. It’s also frustrating when multiple seniors give conflicting feedback (e.g. What to call a route and why).

Do not do big picture or architectural critiques in a review. Have offline conversations. Too easy to send folks down a rabbit hole and create frustration.

If I ever find myself going back and forth on something via comments, I’ll ping the other person and ask them to pair. Sometimes it’s hard to just talk to someone.

If a comment thread is getting long, it’s an indication a verbal conversation should be had (report the consensus back in the comment thread)

[Avoid] unnecessary comments on coding paradigms, like how the same code could be written in fewer lines if a functional style were used.

It can be difficult to gauge the appropriate level of brutality in a comment. I don’t want to blow minor nitpicks out of proportion, and I don’t want to put the submitter on the defensive even when something’s badly wrong since that makes it harder to actually get the job done.

I feel pretty strongly that it’s annoying when people demand changes, especially if they don’t take the time to explain why they’re doing so, or leave room for the possibility that they’re wrong. Especially when people just rewrite your code in a comment and tell you to change to their version.

One problem we have is that we want to be rigorous about things like coding standards, but it becomes frustrating for the author of a patch if they have to change too many things. Most of the time it’s easier to mention a few of the most important fixes for one review, and fix the other things later. Most of our disagreements during code reviews are about how to name things.

Ego vs. Empathy

The words we use to review one another's code really do matter. An unkind review can break someone's self-confidence.

They can feel sterile or (at worst) mean-spirited, even when they’re not. Raising concerns and questions about code over comments alone can be tough.

Always assume best intentions and intelligence of the person you’re reviewing. Offer suggestions and ask questions when you disagree with the approach. Emojis help keep the comments friendly.

They can sometimes inspire pickiness over irrelevant details as a way of showing off the reviewer’s skills.

There's no real standardization of what is required in code review. Often people will flip out and publicly berate another developer without reading the developer's notes, considering their case, or speaking with them.

Review the code, not the person. Let tools handle styling changes for you so you don’t spend time on this in code reviews.
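One concrete way to act on that advice is to run a formatter and linter automatically, as a pre-commit hook or in CI, so style is settled by a machine before a human ever opens the PR. Here's a sketch of a `.pre-commit-config.yaml` for the pre-commit tool; the specific hooks and versions are illustrative choices, not a recommendation, and your team's stack will dictate the equivalents (e.g. Prettier for JavaScript, RuboCop for Ruby).

```yaml
# .pre-commit-config.yaml — hooks run automatically before each commit
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black   # auto-formats Python, so formatting never reaches review
```

With something like this in place, "you're missing a trailing newline" comments simply stop appearing, and reviewers can spend their attention on behavior instead.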

There's some trolling of code, [and] we [have] avoided implementing a code of conduct.

Leave your ego at the door. All feedback is good feedback. Consider your future maintenance selves.

It’s good to be humble and receive feedback well, but it’s bad to let interpersonal toxicity seep into code review.

Kindness with the developer, seriousness with the code.

Code reviews work best when everyone is already pulling in the same direction, wants one another to succeed, and everyone is subject to the same rules. They go bad when submitters push bad code expecting it to be fixed in review. They go bad when reviewers demand changes to non-broken code just because it wasn’t written how they personally would have done it. They go bad when there are multiple reviewers coming in at different times, requiring many re-submissions. They go bad when one reviewer leaves some comments, and then leaves some completely unrelated comments only after the first lot were already addressed. They go bad when people aren’t nice to each other.

The Data

The developers who responded to the survey primarily work with Java, Ruby, and JavaScript. There were a handful of languages that represented less than 1% of the entire dataset, including C#, C++, .NET, and R. These languages currently do not appear in the chart below; however, they will be added when there is enough data to display them.

Swift developers find code reviews the most beneficial to their teams. Ruby developers come in a close second, averaging about 9.2 on a scale of 1–10, where 1 is strongly disagree, and 10 is strongly agree.

Approximately 10% of developers (50 respondents) stated that pull requests are peer reviewed in their teams only when a review is requested. In comparison, 70% (350 respondents) stated that all pull requests are reviewed.

The data seems to suggest that this distribution, for the most part, carries across languages and frameworks. That is to say, no one language seems to have an overwhelmingly different result.
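For the curious, the per-language figures above are nothing more exotic than grouped means over the 1–10 agreement responses. A minimal sketch of that aggregation, using made-up rows rather than the actual survey data:

```python
from collections import defaultdict

def average_by_language(responses):
    """Group 1-10 agreement scores by language and return each group's mean.

    `responses` is a list of (language, score) pairs; the rows used in
    any example here are hypothetical, not the survey dataset.
    """
    totals = defaultdict(lambda: [0, 0])  # language -> [sum, count]
    for language, score in responses:
        totals[language][0] += score
        totals[language][1] += 1
    return {language: s / n for language, (s, n) in totals.items()}
```

Because group sizes differ a lot between languages, a small group's average (like Swift's here) is much noisier than a large one's, which is worth keeping in mind when reading the chart.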


If you only have a few minutes to contribute, you can help by adding to the dataset! Fill out the survey by clicking on the add to survey button below.

If you have some more time on hand, head over to the GitHub repo and file an issue for what you'd like to work on—or pick up an issue that's already there!

*Please be aware that this dataset is very limited. I'm working under the constraint that survey respondents were self-selecting. The total number of survey respondents (507) is not an accurate representation of the developer community as a whole, but rather a very small subset. tl;dr—I'm not a data scientist!