This document (V20151015) provides guidelines for reviewing artifacts. It gradually evolves to define common evaluation criteria based on our past Artifact Evaluations and your feedback (see this presentation summarizing the outcome of the past PPoPP/CGO'15 AE).
During the rebuttal, authors will be able to address the raised issues and respond to the reviewers. Finally, reviewers will check whether the raised issues have been fixed and will provide their final reports. Based on all reviews, the AE chairs will make the final assessment of the submitted artifact according to the criteria below:
Note that our goal is not to fail problematic artifacts but to promote reproducible research via artifact validation and sharing. Therefore, we allow light communication between reviewers and authors whenever there are installation or usage problems. In such cases, the AE chairs serve as a proxy to avoid revealing the reviewers' identities.
| Criteria | What to evaluate |
|---|---|
| Documentation | Is it sufficient to understand and evaluate the artifact? |
| Packaging | Is anything missing? |
| Installation procedure | Is it sufficient to install and use the artifact? |
| Use case | Is it sufficient to validate the artifact? |
| Expected behavior | Is there any unexpected artifact behavior (depending on the type of artifact: unexpected output, scalability issues, crashes, performance variation, etc.)? |
| Relevance to paper | How well does the submitted artifact support the work described in the paper? |
| Customization and reusability | Optional and should not be used for the overall assessment; mainly used to select the distinguished artifact. We encourage reviewers to check whether a given artifact can be easily reused and customized, for example in a different environment, with different parameters, under different conditions, or with a different and possibly larger data set (particularly useful to validate whether machine-learning-based techniques are meaningful). |
| Overall score | Provide an explanation of your score and of what should be improved during the rebuttal. |