Predicting Usefulness of Code Review Comments using Textual Features and Developer Experience

Abstract: Although peer code review is widely adopted in both commercial and open source development, existing studies suggest that such code reviews often contain a significant amount of non-useful review comments. Unfortunately, to date, no tools or techniques exist that can provide automatic support in improving those non-useful comments. In this paper, we first report a comparative study between useful and non-useful review comments, contrasting them in terms of their textual characteristics and their reviewers' experience. Then, based on the findings from the study, we develop RevHelper, a prediction model that can help developers improve their code review comments through automatic prediction of their usefulness during review submission. A comparative study using 1,116 review comments suggests that useful comments share more vocabulary with the changed code, contain salient items such as relevant code elements, and are written by generally more experienced reviewers. Experiments using 1,482 review comments show that our model can predict comment usefulness with a promising 66% accuracy. A case study comparing against three variants of a baseline model validates our empirical findings and demonstrates the potential of our model.
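The abstract describes two families of signals: textual features (vocabulary shared between the comment and the changed code, presence of code elements) and reviewer experience. As a rough illustration only, the sketch below shows how such features might be extracted from a review comment; the function names and regex heuristics are our own assumptions for this page and are not RevHelper's actual implementation.

```python
import re

def vocabulary_overlap(comment, changed_code):
    """Fraction of comment tokens that also appear in the changed code.

    A hypothetical proxy for the 'shared vocabulary' signal from the study.
    """
    comment_tokens = set(re.findall(r"[A-Za-z_]\w*", comment.lower()))
    code_tokens = set(re.findall(r"[A-Za-z_]\w*", changed_code.lower()))
    if not comment_tokens:
        return 0.0
    return len(comment_tokens & code_tokens) / len(comment_tokens)

def mentions_code_element(comment):
    """Heuristic: does the comment reference a code-like identifier
    (a method call, camelCase, or snake_case)?"""
    return bool(re.search(r"\b\w+\(\)|[a-z]+[A-Z]\w*|\w+_\w+", comment))

def extract_features(comment, changed_code, reviewer_experience):
    """Bundle the textual and experience features for a prediction model."""
    return {
        "vocab_overlap": vocabulary_overlap(comment, changed_code),
        "has_code_element": mentions_code_element(comment),
        "reviewer_experience": reviewer_experience,
    }
```

For example, a comment that names an identifier from the diff, such as "rename getUser to fetchUser", scores a non-zero vocabulary overlap against a change touching `getUser`, while a generic "looks good to me" scores zero on both textual features. A classifier trained on labeled comments would consume these feature vectors.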


Comparative Study


Code Review Comments: The comments below were used in our comparative study between useful and non-useful comments.


Tools/Items for Replication:

Experimental Data


Review Comments for Evaluation:


Review Comments for Validation & Case study:

Prediction Model for Comment Usefulness



Related Publication(s)


@inproceedings{msr2017masud,
  author    = {Rahman, M. M. and Roy, C. K. and Kula, R. G.},
  title     = {{Predicting Usefulness of Code Review Comments using Textual Features and Developer Experience}},
  booktitle = {Proc. MSR},
  year      = {2017},
  pages     = {215--226}
}

Masud Rahman, Computer Science, University of Saskatchewan, Canada.