This document (V20200102) provides guidelines for submitting your artifacts for evaluation across a range of systems and machine learning conferences and journals. We regularly update it based on our past Artifact Evaluation experience and open reproducibility discussions (2018, 2017a, 2017b, 2014, 2009), the feedback we receive from researchers (Artifact Evaluation google group, Shared Google doc), PL AE, and the ACM reviewing and badging policy, to which we contribute as a member of the ACM task force on reproducibility. Our goal is to converge on a common methodology, a unified Artifact Appendix with a Reproducibility Checklist, and an open reproducibility platform for artifact sharing, validation and reuse.

Motivation

It is becoming increasingly difficult to reproduce results from CS papers. Voluntary Artifact Evaluation (AE) was successfully introduced at programming languages, systems and machine learning conferences and tournaments to let an independent AE Committee validate experimental results, share unified Artifact Appendices, and assign reproducibility badges (see our AE motivation).

AE promotes the reproducibility of experimental results and encourages artifact sharing to help the community quickly validate and compare alternative approaches. Authors are invited to formally describe all supporting material (code, data, models, workflows, results) using the unified Artifact Appendix and Reproducibility Checklist template and submit it to the single-blind AE process. Reviewers then collaborate with the authors to evaluate their artifacts and assign the appropriate ACM reproducibility badges.

Preparing your Artifact Appendix and the Reproducibility Checklist

You need to prepare the Artifact Appendix describing software/hardware dependencies, how to prepare and run experiments, and which results to expect at the end of the evaluation. Though it is relatively intuitive and based on our past AE experience and your feedback, we strongly encourage you to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist and the AE FAQs before submitting artifacts for evaluation! You can find examples of Artifact Appendices in the reproduced papers listed at the end of this document.

Since the AE methodology differs slightly across conferences, we introduced the unified Artifact Appendix with the Reproducibility Checklist to help readers understand what was evaluated and how! Furthermore, artifact evaluation sometimes helps to discover minor mistakes in the accepted paper - in such cases you have a chance to add related notes and corrections in the Artifact Appendix of your camera-ready paper!

Preparing your experimental workflow

You can skip this step if you want to share your artifacts without validation of the experimental results - in that case your paper can still be eligible for the "artifact available" badge!

We strongly recommend that you provide at least some scripts to build your workflow, all inputs to run your workflow, and some expected outputs to validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter Notebooks or plain ReadMe files.
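As a rough illustration, a minimal validation script of the kind we have in mind might run one experiment and compare its outputs against the expected results shipped with the artifact. This is only a sketch: the file names (run_experiment.py, results.json, expected_results.json), the metric format and the tolerance are hypothetical placeholders, not part of our guidelines.

```python
#!/usr/bin/env python3
"""Hypothetical artifact driver: run one experiment and validate its output.

All paths, metrics and tolerances below are placeholders; adapt them to your artifact.
"""
import json
import subprocess
import sys
from pathlib import Path

EXPERIMENT = ["python3", "run_experiment.py", "--output", "results.json"]  # hypothetical command
EXPECTED = Path("expected_results.json")    # expected outputs shipped with the artifact
TOLERANCE = 0.05                            # allow 5% deviation for noisy metrics

def main() -> int:
    # 1. Run the experiment exactly as described in the ReadMe / Artifact Appendix.
    subprocess.run(EXPERIMENT, check=True)

    # 2. Load the produced and the expected results.
    produced = json.loads(Path("results.json").read_text())
    expected = json.loads(EXPECTED.read_text())

    # 3. Compare each metric within the tolerance and report mismatches.
    failures = []
    for metric, ref in expected.items():
        got = produced.get(metric)
        if got is None or abs(got - ref) > TOLERANCE * abs(ref):
            failures.append(f"{metric}: expected ~{ref}, got {got}")

    if failures:
        print("Validation FAILED:\n  " + "\n  ".join(failures))
        return 1
    print("Validation passed: results match the paper within tolerance.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Such a script (or an equivalent Jupyter Notebook or ReadMe section) makes it much easier for evaluators to see at a glance whether the artifact behaves as described in the paper.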

Based on your feedback, we are now working with the community on a common format for digital artifacts, open-source portable workflows and reusable R&D actions to automate and simplify the artifact evaluation process. Feel free to check these community projects and provide your feedback (this is optional).

Making artifacts available to evaluators

Most of the time, the authors make their artifacts available to the evaluators via GitHub, GitLab, BitBucket or a similar private or public service. Public artifact sharing also enables an optional "open evaluation", which we successfully validated at ADAPT'16 and ASPLOS-REQUEST'18. It allows the authors to quickly fix issues encountered during evaluation before submitting the final version to archival repositories.

Other acceptable methods include:

  • Using Docker, VirtualBox and other container or VM images. However, since these are usually not easily portable and customizable, we do not consider them for the "artifact reusable" badge.
  • Using zip or tar files with all related code and data, particularly when your artifact should be rebuilt on reviewers' machines (for example, to have non-virtualized access to specific hardware).
  • Arranging remote access to the authors' machine with the pre-installed software - this is reserved for exceptional cases where rare or proprietary software or hardware is used. You will need to send the access information privately to the AE chairs.

Note that your artifacts will receive the ACM "artifact available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, FigShare or Dryad. You must provide the DOI automatically assigned to your artifact by such a repository in your final Artifact Appendix!
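One benefit of an archival repository is that evaluators and readers can later retrieve the artifact programmatically. The sketch below assumes a hypothetical Zenodo record ID and the record/file layout exposed by the public Zenodo REST API; the exact JSON schema may differ between Zenodo versions, so consult the Zenodo documentation rather than treating this as authoritative.

```python
#!/usr/bin/env python3
"""Hypothetical example: download and checksum an archived artifact from Zenodo.

The record ID below is a placeholder; replace it with the record behind your DOI.
"""
import hashlib
from pathlib import Path

import requests

RECORD_ID = "1234567"  # hypothetical Zenodo record ID (your artifact DOI resolves to it)

# Fetch the public record metadata; the JSON layout may differ between Zenodo versions.
record = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
record.raise_for_status()

for entry in record.json().get("files", []):
    name = entry.get("key", "artifact-file")
    url = entry.get("links", {}).get("self")
    reported = entry.get("checksum", "")  # typically reported as "md5:<hexdigest>"
    if not url:
        continue

    # Download the file and store it locally.
    data = requests.get(url, timeout=300).content
    Path(name).write_bytes(data)

    # Verify the download against the checksum reported by the repository.
    digest = hashlib.md5(data).hexdigest()
    status = "OK" if reported.endswith(digest) else "MISMATCH"
    print(f"{name}: checksum {status}")
```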

Submitting artifacts

Write a brief abstract describing your artifact, the minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. Do not forget to specify whether you use any proprietary software or hardware! Evaluators will use this abstract during artifact bidding to make sure that they have access to the appropriate hardware and software and have the required skills.

Submit the artifact abstract and the PDF of your paper (with the Artifact Appendix attached) via the AE submission website provided by the event.

Asking questions

If you have questions or suggestions, do not hesitate to get in touch with the AE chairs or the community via the Artifact Evaluation google group, our LinkedIn group, or the Slack #reproducible-research channel.

Preparing your camera-ready paper

If you have successfully passed AE with at least one reproducibility badge, you will need to add up to two pages of your Artifact Appendix to your camera-ready paper, removing all unnecessary or confidential information. This will help readers better understand what was evaluated and how.

If your paper is published in the ACM Digital Library, you do not need to add reproducibility stamps yourself - ACM will add them to your camera-ready paper and will make this information available for search! In other cases, the AE chairs will tell you how to add stamps to the first page of your paper.

Examples of reproduced papers with shared artifacts and Artifact Appendices:

Methodology archive

A list of the different versions of our artifact submission and reviewing guides, to help you understand which one was used in papers with evaluated artifacts: