Peer review is often seen as a cornerstone of modern science. In this talk we discuss current peer review practices in software engineering research, together with their strengths and limitations. We then turn to tips and tricks for writing reviews, as well as the implications for writing papers. I will also share some insights into my own reviewing practices.
8. A program committee member (including the chair of the committee) is considered to have a conflict of interest on a submission that has an author in any of the following categories:
• the person themselves;
• a past or current student or academic advisor;
• a supervisor or employee in the same line of authority within the past 3 years;
• a member of the same organization within the past 3 years;
• a co-author of a paper appearing in publication within the past 3 years;
• someone with whom there has been a financial relationship (e.g., grants, contracts, …) within the past 3 years;
• someone with whom acceptance or rejection would further the personal goals of the reviewer (e.g., a competitor);
• a member of the same family or anyone considered a close personal friend; or
• someone about whom, for whatever reason, their work cannot be evaluated objectively.
https://www.sigsoft.org/policies/pgmcommittee.html
9. [Figure: review assignment matrix] Columns = PC members; rows = submissions. An asterisk marks an assignment; an encircled entry marks a submission somebody had to review…
15. “Submissions will be evaluated on the basis of soundness, importance of contribution, originality, quality of presentation, appropriate comparison to related work, and convincing evaluation of the proposed approach.” (ICSME 2018)
19. TEMPLATE
Overall Recommendation: reject / weak reject / weak accept / accept
Reviewer Confidence: low / medium / high
<…>
Summary (helps the authors: did the reviewer get the main points of the paper?)
Detailed Evaluation
* Soundness
* Significance
* Verifiability
* Presentation
Strengths and Weaknesses (bullet points; they facilitate discussion between the reviewers and help the authors to navigate the review)
* Strengths
* Weaknesses
<…>
Confidential Remarks: I use these rarely and only in extreme cases, to alert other reviewers/chairs, e.g., of suspected plagiarism.
21. [Diagram: Research question → Research method(s) → Research finding(s) → Discussion]
• Is the method appropriate? Can the question be answered using this method?
• Has the method been applied correctly? Can one trust the findings?
• Are the findings correctly interpreted? Are the limitations/threats correctly identified and addressed?
• Has the question been answered?
23. “The reviewer suspects that many of the predictors used are highly correlated and likely offer little additional explanatory power. However, the paper does not describe whether a correlation analysis was performed.”
“Similarly, I had some questions about the evaluation interviews. How many engineers used the tool? How many of them were interviewed? Are they continuing to use it?”
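The first quote asks for a correlation analysis, and as a reviewer it helps to know what one would look like. A minimal sketch: check pairwise Spearman correlations among predictors and flag likely redundant ones. The DataFrame, the metric names, and the 0.8 cutoff are all hypothetical; real studies often use redundancy analysis or variable clustering instead of a flat cutoff.

import numpy as np
import pandas as pd

def redundant_predictors(X: pd.DataFrame, cutoff: float = 0.8) -> list:
    """Flag predictors whose |Spearman correlation| with an earlier
    column exceeds `cutoff`; these likely add little explanatory power."""
    corr = X.corr(method="spearman").abs()
    # Keep only the upper triangle so each pair is checked once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return [col for col in upper.columns if (upper[col] > cutoff).any()]

# Synthetic example: `statements` is almost a duplicate of `loc`.
rng = np.random.default_rng(0)
loc = rng.integers(10, 1000, size=100)
X = pd.DataFrame({
    "loc": loc,
    "statements": loc * 0.9 + rng.normal(0, 5, size=100),
    "churn": rng.integers(0, 50, size=100),
})
print(redundant_predictors(X))  # -> ['statements']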
25. Example questions to ask yourself when reviewing surveys
• Have the questions been asked in a way that the respondents are likely to understand?
• Are there leading questions?
• In case of multiple-choice questions: how have the answer options been derived?
• How is confidentiality of the responses ensured? Is there a risk of conflict between different subgroups, e.g., employees and managers?
• Is there a risk of stereotyping?
• How many responses have been received? Are they representative? (see the sketch below)
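One concrete way to probe the "are they representative?" question is to compare the respondents' composition against the known population with a chi-square goodness-of-fit test. A minimal sketch, with entirely hypothetical role counts and population shares:

from scipy import stats

roles = ["developer", "tester", "manager"]
respondents = [120, 25, 15]            # who answered the survey
population_share = [0.70, 0.20, 0.10]  # known org-wide proportions

expected = [p * sum(respondents) for p in population_share]
chi2, p = stats.chisquare(f_obs=respondents, f_exp=expected)
print(f"chi2={chi2:.2f}, p={p:.3f}")
# A small p-value suggests some subgroup is over- or under-represented.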
26. Example questions to ask yourself when reviewing statistics
• Is the data “cleaned” somehow? How/why?
• Are the tests carried out appropriate for the data at hand?
• What are the assumptions of these tests? Did the authors check that the assumptions are satisfied?
• If multiple tests are performed, are the p-values adjusted?
• What is the effect size?
• Is the sample large enough? Is it representative?
(A small worked sketch of these checks follows.)
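A minimal sketch of what checking these points can look like, on hypothetical before/after measurements for three metrics: Shapiro-Wilk probes the normality assumption, a nonparametric test is used when it fails, Cliff's delta gives an effect size, and Holm's method adjusts the p-values for the three tests. All data and metric names are made up.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
metrics = {  # hypothetical before/after measurements
    "build_time": (rng.exponential(5, 40), rng.exponential(4, 40)),
    "defects":    (rng.poisson(3, 40),     rng.poisson(3, 40)),
    "coverage":   (rng.normal(70, 8, 40),  rng.normal(74, 8, 40)),
}

p_values = []
for name, (a, b) in metrics.items():
    normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05
    # Use a t-test only if the normality assumption holds for both groups.
    if normal:
        p = stats.ttest_ind(a, b).pvalue
    else:
        p = stats.mannwhitneyu(a, b).pvalue
    # Cliff's delta: P(a > b) - P(a < b) over all pairs.
    delta = np.mean(a[:, None] > b[None, :]) - np.mean(a[:, None] < b[None, :])
    p_values.append(p)
    print(f"{name}: p={p:.3f}, cliff_delta={delta:+.2f}")

# Adjust for the three tests performed on the same dataset.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("adjusted p-values:", np.round(p_adj, 3))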
29. “Overall, I like this paper, since it (rightfully) makes the case for commit-based bug prediction instead of more traditional file-based prediction. The heuristics based on clues are novel, incorporating a kind of voting system for finding the culprit commit. <…> However, the idea of commit-based prediction is less novel than discussed, since XXX (Ref1), YYY (Ref2) and ZZZ (Ref3) performed earlier studies on commit-based prediction in industry. These three papers all focus on prediction of risk (not necessarily bugs); however, YYY also builds bug (introduction) prediction models.”
34. “How the 79 Pharo systems were selected is not so obvious - does the Pharo standard distribution contain only 39 large software applications or more than 39? What exactly does large mean?”
35. Example questions to ask yourself when assessing verifiability
• Is every step clear? Are there vague words like popular, large, active?
• Are the tools used open source and publicly available?
• Is the data used publicly available?
  • This is of course not always possible but should be encouraged.
  • If the data/tools are not available, this should be explained.
37. “The write-up is not well-structured. For instance, the approach (i.e., the model and its features) is at least partially described in the section "Experimental Results". IMO there should be a section "Approach" which describes the model/approach.”
38. Example questions to ask yourself when assessing presentation
• Are the figures/tables legible?
  • Also for colour-blind individuals?
• Is the English reasonable?
  • It is not an English exam, but the text should be understandable.
• Are the abbreviations explained and is their use reasonable?
• And those broken sentences…
41. Be kind. The same comment, in increasingly gentle phrasings:
• You should also include a section with threats to validity to describe the limitations of your study.
• Please consider including a section with threats to validity to describe the limitations of your study.
• A section with threats to validity describing the limitations of the study can further improve the paper.