This document (V20171205) provides guidelines for reviewing artifacts. It gradually evolves to define common evaluation criteria based on our past Artifact Evaluations, the ACM reviewing and badging policy that we co-authored in 2016, and your feedback (2017a, 2017b, 2014).
Reviewers will then have approximately two weeks to evaluate artifacts and provide a report via a dedicated submission website (usually EasyChair or HotCRP). Reviewers will also be able to communicate with authors about encountered issues immediately and anonymously via the submission website in order to resolve them quickly (our goal is not to fail problematic artifacts but to help authors improve publicly available ones and pass the evaluation).
During the rebuttal phase (a technical clarification phase), authors will be able to respond to the final evaluation. Finally, the AE chairs will decide which set of badges to award (see below) based on all reviews and the authors' responses. Reviewers score each criterion on the following scale:
+1) exceeded expectations
0) met expectations (or inapplicable)
-1) fell below expectations
|Criteria||Score||Badges for ACM conferences||Badges for non-ACM conferences|
|Artifacts available?||Are all artifacts related to this paper publicly available?
Note that it is not obligatory to make artifacts publicly available!
The author-created artifacts relevant to this paper will receive an ACM "Artifacts Available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo. A DOI will then be assigned to the artifacts and must be provided in the Artifact Appendix (a minimal Zenodo deposit sketch appears after the table below). The authors can also share their artifacts via the ACM DL; in that case they should contact the AE chairs to obtain a DOI (this is not yet automated, unlike with the above repositories).
Notes: ACM does not mandate the use of specific repositories. Publisher repositories (such as the ACM Digital Library), institutional repositories, or open commercial repositories (e.g., figshare or Dryad) are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility. Personal web pages, GitHub, GitLab and BitBucket are not acceptable for this purpose.
Artifacts do not need to have been formally evaluated in order for an article to receive this badge. In addition, they need not be complete in the sense described below (see "Package complete?"). They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.
|Artifacts functional?||Package complete?||Are all components relevant to the evaluation included in the package?
Note that proprietary artifacts need not be included. If they are required to exercise the package, this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.
The artifacts associated with the paper will receive an "Artifacts Evaluated - Functional" badge only if they are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
|Well documented?||Is there enough documentation to understand, install, and evaluate the artifact?|
|Exercisable?||Does the package include scripts and/or software to perform appropriate experiments and generate results (see the workflow sketch after this table)?|
|Consistent?||Are the artifacts relevant to the associated paper, and do they contribute in some inherent way to the generation of its main results?|
|Artifacts customizable and reusable?||
Can this artifact and experimental workflow be easily reused and customized? For example, can they be used on a different platform, with different benchmarks, data sets, compilers, and tools, or under different conditions and parameters?
We collaborate with ACM to unify the packaging and sharing of artifacts as reusable and customizable components using the Collective Knowledge (CK) framework (see the ACM announcement). Note that authors are not obliged to use CK and can use any other suitable workflow framework to receive this badge. Check out a reusable and customizable workflow from CGO'17 shared using the CK framework: GitHub, PDF with appendix.
The artifacts associated with the paper will receive an "Artifacts Evaluated - Reusable" badge only if they are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated - Functional level but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
|Results validated?||Can all main results from the paper be validated using the provided artifacts?
Report any unexpected artifact behavior, depending on the type of artifact (e.g., unexpected output, scalability issues, crashes, performance variation, etc.).
The artifacts associated with the paper will receive a "Results Replicated" badge only if the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
Note that variation of empirical and numerical results is tolerated. In fact, it is often unavoidable in computer systems research - see "how to report and compare empirical results?" in the AE FAQ (and the comparison sketch after this table)!
Did the artifacts and results match the authors' description?
|Artifacts that successfully pass evaluation receive a stamp of approval:
|Workflow framework used?||Was a workflow framework such as Collective Knowledge used to implement the experimental workflow?||The artifact can receive the dividiti prize (if arranged by the conference)|
|Distinguished artifact?||Is the artifact publicly available, functional, reproducible, and easily customizable and reusable?||The artifact can receive a distinguished artifact award (if arranged by the conference)|
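As referenced in the "Artifacts available?" row, below is a minimal, hypothetical sketch of depositing an artifact on Zenodo to obtain a DOI. The token variable, archive name, and metadata values are illustrative assumptions; the endpoints follow Zenodo's public REST API, but please check developers.zenodo.org before relying on them.

    # Hypothetical sketch: depositing an artifact on Zenodo to obtain a DOI.
    # ZENODO_TOKEN, the archive name, and all metadata values are assumptions;
    # endpoints follow Zenodo's public REST API (see developers.zenodo.org).
    import os
    import requests

    params = {"access_token": os.environ["ZENODO_TOKEN"]}
    base = "https://zenodo.org/api/deposit/depositions"

    # 1) Create an empty deposition.
    dep = requests.post(base, params=params, json={}).json()

    # 2) Upload the artifact archive into the deposition's file bucket.
    with open("artifact.tar.gz", "rb") as f:
        requests.put(dep["links"]["bucket"] + "/artifact.tar.gz",
                     data=f, params=params)

    # 3) Attach minimal metadata.
    metadata = {"metadata": {
        "title": "Artifact for <paper title>",
        "upload_type": "software",
        "description": "Artifact accompanying our paper.",
        "creators": [{"name": "Doe, Jane"}],
    }}
    requests.put(f"{base}/{dep['id']}", params=params, json=metadata)

    # 4) Publish: Zenodo assigns the DOI to report in the Artifact Appendix.
    pub = requests.post(f"{base}/{dep['id']}/actions/publish",
                        params=params).json()
    print("DOI:", pub["doi"])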
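As referenced in the "Exercisable?" row, the sketch below illustrates one possible shape of an artifact entry point: a single script that rebuilds the benchmarks, reruns the experiments, and saves the results. The benchmark names, paths, and measured metric are illustrative assumptions, not a prescribed interface.

    # Hypothetical sketch of an "exercisable" artifact entry point: one script
    # that rebuilds each benchmark, reruns the experiments, and saves results.
    # Benchmark names, paths, and the measured metric are illustrative only.
    import json
    import subprocess

    def run_experiment(benchmark: str) -> float:
        """Build and run one benchmark; return the measured metric."""
        subprocess.run(["make", "-C", f"benchmarks/{benchmark}"], check=True)
        out = subprocess.run([f"benchmarks/{benchmark}/run"],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip())  # e.g. execution time in seconds

    results = {b: run_experiment(b) for b in ["bench1", "bench2"]}
    with open("results.json", "w") as f:
        json.dump(results, f, indent=2)  # consumed by the plotting scripts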
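Finally, as referenced in the "Results validated?" row, one simple way to report and compare empirical results while tolerating run-to-run variation is to check the relative deviation of measured values against the numbers reported in the paper. The tolerance and the "reported" values below are purely illustrative assumptions.

    # Hypothetical sketch: validating measured results against the numbers
    # reported in the paper while tolerating run-to-run variation.
    # The tolerance and the "reported" values are illustrative assumptions.
    import json

    TOLERANCE = 0.10  # accept up to 10% relative deviation

    with open("results.json") as f:
        measured = json.load(f)
    reported = {"bench1": 1.92, "bench2": 3.41}  # values claimed in the paper

    for bench, expected in reported.items():
        deviation = abs(measured[bench] - expected) / expected
        status = "OK" if deviation <= TOLERANCE else "MISMATCH"
        print(f"{bench}: measured={measured[bench]:.2f} "
              f"expected={expected:.2f} deviation={deviation:.1%} [{status}]")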