What are the best code review tools?

Code review tools for GitLab

I have used SmartBear Collaborator, the built-in GitLab merge requests, and Review Board (both with and without GitLab integration), and helped with their deployment. I have also personally evaluated Atlassian Crucible.
The solution you choose is largely based on your workflow. Each of the options has its own advantages and disadvantages.
There are a few considerations that come up frequently no matter which tool you choose.

If the tool is outside of GitLab, you need to think about how it connects to the code base. Collaborator and Review Board both run the diff commands client-side and upload the before and after versions of the files for each changeset you push for review. In Git terms, this means that for the range of hashes you want to review (sometimes this range is determined automatically, as it is for the GitLab integration), the tool takes all the files touched on the before and after sides of the resulting diff and uploads them. It also means that if something goes wrong with the integration between the servers hosting GitLab and your review tool, the review tool may never see a diff and never know that it should have.
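
To make that concrete, here is a minimal sketch in Python of what that client-side step amounts to. It is not any tool's actual code: the hash range is illustrative, and the upload call at the end is a hypothetical placeholder.

```python
import subprocess

def changed_files(base: str, head: str) -> list[str]:
    """List the paths touched between two commits."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def file_at(rev: str, path: str) -> str:
    """Read one file's contents as of a given commit ("" if absent there)."""
    out = subprocess.run(["git", "show", f"{rev}:{path}"],
                         capture_output=True, text=True)
    return out.stdout if out.returncode == 0 else ""

base, head = "origin/main", "HEAD"  # the hash range under review
for path in changed_files(base, head):
    before = file_at(base, path)   # "before" side of the diff
    after = file_at(head, path)    # "after" side of the diff
    # upload_to_review_server(path, before, after)  # hypothetical upload call
```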

Whether you host GitLab yourself or use cloud-hosted GitLab makes a difference, especially if you plan on using a code review tool other than the one built into GitLab. You need to be aware of network access requirements; for example, cloud GitLab requires direct bi-directional network access to the server hosting the review tool for the integration to work. There have also been changes to GitLab's remote integration API over time: the last time I checked, Review Board had not been updated to support the "new" (3-year-old) authentication API for GitLab. When things like that change, the integration can break until your review tool is updated. With self-hosted GitLab you can manage this by controlling if/when you update GitLab; that is not possible with the cloud version of GitLab. Even so, the integration API has been pretty stable for a while, so if it works now, it will likely keep working for the foreseeable future.
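
For reference, this is the kind of token-authenticated v4 REST call an external review tool has to keep in step with; the instance URL, project ID, and token below are placeholders.

```python
import requests

GITLAB = "https://gitlab.example.com"  # placeholder self-hosted instance
TOKEN = "glpat-..."                    # personal access token (elided)

# List open merge requests for a project using the v4 token-header auth.
resp = requests.get(
    f"{GITLAB}/api/v4/projects/42/merge_requests",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={"state": "opened"},
    timeout=10,
)
resp.raise_for_status()
for mr in resp.json():
    print(mr["iid"], mr["title"])
```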

Rebasing and merging during the review process are a consistent issue across all review tools. Many people love the post-commit, pre-push review style that merge request logic is built around, and one of those two actions is usually required if you run continuous integration tools on merge requests before merging. Rebasing is particularly difficult for review tools because it recreates all the commits on your feature branch with updated dates and no link back to the original commits. Merging causes problems for review tools that are not directly tied to the code base, because they all take before and after snapshots of the changed files and usually rely on upload order to indicate the order in which changes were made.
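
To see why rebasing is so disruptive, here is a small sketch you can run in a scratch clone: every commit on the feature branch comes back with a new hash after the rebase, so a tool keyed on the old hashes loses the thread (the branch names are just examples).

```python
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

before = set(git("rev-list", "main..feature").split())  # hashes pre-rebase
git("switch", "feature")
git("rebase", "main")                                    # recreates every commit
after = set(git("rev-list", "main..feature").split())   # same changes, new hashes

# Typically an empty set whenever main has moved on: the original commits
# no longer exist on the branch, only rewritten copies of them.
print("hashes that survived the rebase:", before & after)
```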

The most important considerations in your workflow are:

  1. Pre-commit or post-commit reviews?
  2. Pre-push reviews (while the merge request is open) or post-push (after the merge request is closed)?
  3. Do reviewers review in parallel or in series?
  4. How are fixes for issues found during the review handled?
  5. Are documentation or non-code changes included in the review? If so, are they kept in the code repo or separately?
  6. Do you have needs/requirements for how many people must review the code?
  7. Is review required, optional, or just for future reference?
  8. Do you want/need feedback tied to, or kept separate from, the individual people participating in the review?
  9. What statuses do you need from reviewers (e.g., some reviewed and incomplete; some reviewed and complete; all reviewed and complete; some reviewed and all changes approved; all reviewed and all changes approved; some reviewed with approval waiting on some defects; all reviewed with approval waiting on defects)?
  10. What notifications do you want from reviews (e.g., added to a review, required for a review, optional for a review, existing review updated, review reminder, review reminder for required reviewers, review blocked by one or more unresponsive reviewers)?
  11. What roles do you have in a review (e.g., notification only, must participate in the review, optional review participant)?
  12. How much history/traceability do you want/need from the review (e.g., a history of all interactions that can be audited for certification by an aerospace agency)?
  13. If you do post-commit or pre-push reviews, do you allow/require rebasing during review, merging during review, or both (usually one of the two is required if you have continuous integration tools)?
  14. Does your repo contain checked-in symlinks?
  15. Do you need to view/change/review *nix file permissions?
  16. How often do you move files around in your repos?

Review Board

This is a free, open source review tool created and used by Amazon. It is designed for post-push review, but technically it can be used for reviews at any stage. There are some basic plugins available to add a bit of functionality, but largely it is what it is. It doesn't really provide user credential integration, as it relies largely on email addresses and doesn't track users very directly.
It's easy to get a server up and running and start using it. The user interface is clean and relatively simple. It allows multiple uploads of changes to the same review and provides an easy way to switch between the different uploaded versions of the files, so you can compare uploads 2 and 6 of one file, then uploads 1 and 7 of another. Since it is designed as a post-push review tool, the approval capabilities, as well as the GitLab integration, are quite light. The GitLab integration was not maintained, and when I tried to use it last year I had to rewrite a block of the GitLab integration code in the Review Board tool to work with the then 2-year-old GitLab authentication API (it works out of the box if your repo is fully public, but not so much otherwise).

By far the biggest complaints anyone has with the tool concern the approval status. Anyone, including people not invited to the review, can mark the review as approved, and once given, approval cannot be revoked; this holds even when new uploads are added to the review. Approval is also not tracked per reviewer; there is a single approval status for the entire review. Nor is approval conditioned on whether the review still contains open defects, unless you install an additional plugin.

The logging of changes to reviews is relatively good: all actions are added to an ongoing overall discussion list. However, you can delete reviews entirely (if that is a concern for you). The tool's post-push nature means it doesn't really try to integrate with GitLab to drive a merge request; it serves more as documentation of the changes in the merge request. If you are willing to write your own webhooks and scripts, you can probably get GitLab merge requests to block merging until the review is approved in Review Board, but it would be your own custom scripts doing it.
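
As a rough sketch of what such glue could look like (assuming you gate merging on external commit statuses), you could publish a status on the merge request's head commit via the GitLab v4 API and flip it when your script sees the Review Board approval; every name below is a placeholder.

```python
import requests

GITLAB = "https://gitlab.example.com"
TOKEN = "glpat-..."    # personal access token (elided)
PROJECT = 42           # placeholder project ID
SHA = "abc123"         # head commit of the merge request (placeholder)

def set_review_status(approved: bool) -> None:
    """Publish an external 'reviewboard' check on the commit."""
    requests.post(
        f"{GITLAB}/api/v4/projects/{PROJECT}/statuses/{SHA}",
        headers={"PRIVATE-TOKEN": TOKEN},
        json={
            "state": "success" if approved else "pending",
            "name": "reviewboard",
            "description": "Review Board approval",
        },
        timeout=10,
    ).raise_for_status()

# e.g. called from a cron job or your own webhook handler:
set_review_status(approved=False)  # keep the merge blocked until approval

```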
Overall, this is a great tool for post-push review, but I don't think it's very useful as a pre-push merge request code review tool.

SmartBear Collaborator

This is what my company actually uses, and we used it before we used GitLab.
This is a paid, closed-source commercial tool, although there is a free version with limited functionality and a maximum of 10 users. It is very easy to install an initial evaluation copy of the full version, but things get more complicated when you need, for example, HTTPS for the web UI or LDAP/Active Directory accounts. Buying licenses for the full version works reasonably well with floating licenses: in our case we share 19 licenses among almost 40 users without much problem, but the licenses are only good for one year and aren't very cheap. I have only used the paid version. Since I started, they have changed their model from a 30-day, 10-user trial to a separate free install with limited functionality. As far as I know, the free version is the same as the paid version apart from the user limit and some features that are not available in it.
Collaborator has many configuration options and can be a bit overwhelming for the administrator to set up the first time. You have to register users (whether through LDAP/Active Directory integration or manual entry), and it has a role-based access model. It was designed as a standalone code review tool that doesn't necessarily plug into anything else, and it works with a number of different version control systems. As a result, there are plenty of configuration options to enable features that are more useful when you don't have a GitLab merge request carrying all of the information supporting the changes (review forms, fields, and selection boxes configurable in an admin GUI). It also allows multiple concurrent review workflows, so in theory it can be configured for pre-commit, post-commit, pre-push, and post-push reviews all at once. This can get complicated and is not described very well in the documentation if you haven't used the tool extensively (read up on templates). It lets you set up to 4 roles in a review (including the author) and configure the number of required participants for each role, how many of each role need to complete/approve the review (with "all" and "none" options), etc. (read the public documentation on configuring the tool). As with Review Board, you can examine the differences between any two versions of an uploaded file, comparing version 1 with version 7 of one file, then version 3 with version 4 of another. Unlike Review Board, however, this is a bit of a hassle with git, because the file versions don't seem to stay in the order in which they were created and uploaded, and there is no way to link a file version back to the commit hash in which it appeared.
Document review in Collaborator is very good if you are using standard Microsoft Office/OpenOffice/LibreOffice documents, PDFs, or LaTeX, and it can also diff images. Most review tools don't really diff these kinds of files at all, but Collaborator does a good job and has a very good commenting mechanism for them (you drop a position pin instead of commenting on an entire line).
Some of the best features we've found in Collaborator: You can mark the most recently reviewed version of a file as checked, which acts as a personal checkpoint to diff against when further changes arrive. Files are listed in an overall tree view on the main page, with links to file-specific review pages, which keeps a much larger number of file changes manageable instead of one impossibly long scrolling page. Comments can be kept separate from defects/bugs, so defects must be resolved and corrected (even if they will only be fixed in a future merge request) before the review can be approved, while comments or discussion points don't necessarily hold approval up. And when new changes are uploaded, all approvals are cleared, because the updated code changes have not yet been reviewed.
The biggest problems with Collaborator: Some of the basic assumptions about the workflow cannot be changed (it must go Setup -> Inspection -> Completed, and if everyone has completed the review while there are still open defects, it moves from Inspection to Rework and then has to be manually moved back into Inspection). The code syntax highlighting is still fairly weak and limited to the major programming languages (e.g., no Make or CMake syntax is supported). Moving or renaming a file shows up as a deletion of the old file name and an addition of the new one. File permissions are ignored; they are neither displayed nor reviewed.
The GitLab integration is purpose-built and works quite well. You need to add a GitLab webhook to each repository that you want to use with Collaborator and register the repository in Collaborator. Once the repo is registered, each user must enter their GitLab username for that repo under their individual user settings so that Collaborator can match the usernames (like all smart projects, GitLab uses human-readable names as usernames instead of login names). With that done, opening a merge request causes Collaborator to set up a new review with the merge request creator as the author, all the file changes included in the review, and an entry in the remote content section of the review linking back to the GitLab merge request. When you start the Collaborator review, it opens a pending pipeline on the GitLab merge request (if you have an edition that supports pipelines), which normally blocks the merge request from being merged. Comments made in the GitLab discussion on the merge request, or on files in the GitLab diff view, by users whose GitLab usernames are entered in Collaborator are automatically mirrored into the overall discussion or onto the corresponding file in Collaborator. Additional pushes to the branch in the merge request automatically trigger a new file upload in Collaborator. If the Collaborator review is canceled, the pipeline is marked as failed on the merge request; if the Collaborator review completes successfully, the pipeline is marked as passed in GitLab.
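
If you have many repositories, the per-repo webhook setup can at least be scripted against the GitLab v4 hooks API instead of clicking through every project's settings page; a rough sketch, where the Collaborator endpoint URL is a hypothetical placeholder rather than its documented path:

```python
import requests

GITLAB = "https://gitlab.example.com"
TOKEN = "glpat-..."  # personal access token (elided)
HOOK_URL = "https://collab.example.com/gitlab/webhook"  # hypothetical endpoint

for project_id in (42, 43, 44):  # every repo you want reviewed
    requests.post(
        f"{GITLAB}/api/v4/projects/{project_id}/hooks",
        headers={"PRIVATE-TOKEN": TOKEN},
        json={
            "url": HOOK_URL,
            "push_events": True,            # new pushes update the review
            "merge_requests_events": True,  # opening an MR creates a review
        },
        timeout=10,
    ).raise_for_status()
```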
Most of the limitations we've run into involve properly configuring those usernames. This is painful when you use many different repositories, since each user in Collaborator has to enter the same Active Directory-synchronized, human-readable name in 40 different repository configurations in their Collaborator user settings. Once it works, though, it just works. You can even set up Collaborator to automatically merge the merge request after the review completes, and/or close the review when the merge request is closed. If for some reason you need to reopen a review, or it gets closed prematurely (sometimes a review needs test results in hand first, and you may want to hold the code review open for those results), there is no way to get Collaborator to reopen the pipeline. As far as GitLab is concerned, once the Collaborator review is closed, it is closed and cannot be reopened (even though an administrator, and under certain settings individual users, can reopen it in Collaborator).
For auditing, Collaborator by default does not allow permanent deletion of items from the GUI, which is very important for some people and irrelevant for others; you can, however, configure things to be deletable. All of the actions taken on any/all reviews can be data-mined and exported from the GUI relatively easily if you wish. Defect states are tracked with a running log in the specific defect item: who created it, when it was closed and/or reopened, when new file revisions were uploaded, along with comments/discussion about the defect. You can also control who is allowed to close a defect (perhaps just the original author or an administrator?). Each user also sees their own version of the status and history of reviews (somewhat), with new items highlighted for that user when they log in, to be manually marked as read (or "mark all as read").

Built-in GitLab

This was used by one of our peer departments when we started rolling out GitLab, so I have also used it extensively in practice.
The best thing that can be said about it is that, since it is built in, it is guaranteed to be in sync with the code. As mentioned earlier, any solution that isn't hard-wired to the repo can fall out of sync with the actual code in the branch. In my opinion, though, this option only works when you have very small teams that communicate extensively about the ins and outs of the changes outside of the GitLab tool, don't need quick reviews, and/or limit proposed changes to a single commit (squashing any further changes before pushing to GitLab).
GitLab keeps a long history of all operations, discussions, uploads, external integration events, etc. on the discussion page of each merge request. This gets very, very long, very quickly, and is very hard to go back through. It is useful for seeing which discussion items are still open and need to be resolved, since each discussion is automatically collapsed in the discussion view as soon as it is resolved. However, because they are collapsed on resolution, it is difficult to determine what changes were made to address a discussion item once it is closed (only the open discussion items remain visible in the diff view). You also need to find your own way to bookmark the latest version of the files you've reviewed, such as adding a special comment in the general discussion that you can search for later.

GitLab won't even let you comment on lines that are more than a few lines away from a line that changed. If, for example, you want to point out that adding an argument to a function requires new lines in the middle of that function, you have to attach the comment on or near one of the lines that changed, rather than on the line actually affected by the change. That wouldn't be such a big deal if GitLab allowed multiple independent discussion points on the same line of a file, but you can only have one, and all other comments on the same line are grouped together into that one discussion point. Often a few lines of changed code contain many distinct problems and need more discussion points than there are changed lines. You end up having to put discussion items on unrelated lines simply because those lines are visible and not yet occupied by a discussion item, since you can't put them on the line causing the separate discussion (the one-discussion-per-line restriction) or on the lines they actually affect (outside the standard visible area, where no discussion items can go). Fortunately, there are no parallel reviews either, so hopefully you may not run into this problem at all.

Unlike issues, merge requests can only have one assignee. If you use the feature that automatically adds approvers to all merge requests in a repo, you can count on those approvers ignoring most emails from GitLab until they are made the assignee (per-user notification options are basically all-or-nothing, and approvers are automatically subscribed to every merge request they are an approver on from the moment they are added). Because the only filterable notification on a merge request that indicates a user needs to take action is being named the assignee, and you can only have one assignee per merge request, you are basically limited to workflows where one person does the review and hands it off to the next, or where you communicate outside of GitLab and explicitly request reviews when you want them. That all assumes people remember to review their merge requests at all; once you are no longer the assignee, it is difficult to find the merge requests you were involved in again so you can nag the current assignees to actually take care of the ones assigned to them.
GitLab also has a lot of problems with large diffs. If you've made a small single-line change to many files (renaming a common header, say), the web page that GitLab generates to display all the changes will crash some browsers and nearly lock up others (depending on the number of files). GitLab does collapse individual files that contain a multitude of changes, requiring you to expand them manually, but there isn't a good way to limit the number of files whose diffs you see at one time, which produces a very long scrolling page that browsers cannot handle well.
The code review tool built into GitLab is fine if you have a few dedicated reviewers handling a few small reviews over an extended period of time and don't need to speed up how quickly reviews get done (all of which describes a public open-source code base where most users have read-only access). If that doesn't describe your situation, the built-in tool is not for you.

Atlassian Crucible

I haven't tried to integrate this with GitLab, but I have researched this particular tool extensively. As you read its documentation, pay close attention to what limits apply (only a limited number of files can be included in a review at one time) and to what is actually in the Crucible tool without the rest of the Atlassian tools (e.g., FishEye). Most of the tool is designed for use with Atlassian's Bitbucket, and it practically requires FishEye as well so you can see more than the limited context around the diffs. While it is possible not to use Jira to track bugs/discussion points/defects found during the review, that setup is not as well supported.