It's becoming increasingly difficult to reproduce results from CS papers. Voluntary Artifact Evaluation (AE) has been successfully introduced at programming languages, systems, and machine learning conferences and tournaments to validate experimental results by an independent AE Committee, share unified Artifact Appendices, and assign reproducibility badges.
AE promotes the reproducibility of experimental results and encourages artifact sharing to help the community quickly validate and compare alternative approaches. Authors are invited to formally describe all supporting material (code, data, models, workflows, results) using the unified Artifact Appendix and the Reproducibility Checklist template and submit it to the single-blind AE process. Reviewers will then collaborate with the authors to evaluate their artifacts and assign the following ACM reproducibility badges:
You need to prepare the Artifact Appendix describing all software, hardware, and data set dependencies, key results to be reproduced, and how to prepare, run, and validate experiments. Though the template is relatively intuitive and based on our past AE experience and your feedback, we strongly encourage you to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, and the AE FAQs before submitting artifacts for evaluation! You can find examples of Artifact Appendices in the following reproduced papers.
Since the AE methodology differs slightly across conferences, we introduced the unified Artifact Appendix with the Reproducibility Checklist to help readers understand what was evaluated and how! Furthermore, artifact evaluation sometimes helps to discover minor mistakes in the accepted paper - in that case you have a chance to add related notes and corrections in the Artifact Appendix of your camera-ready paper!
You can skip this step if you want to share your artifacts without the validation of experimental results - in that case your paper can still be eligible for the "artifact available" badge!
We strongly recommend that you provide at least some scripts to build your workflow, all inputs needed to run it, and some expected outputs to validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter Notebooks or plain README files; a minimal run-and-validate script is sketched below.
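For illustration only, here is one possible shape such a script could take. The build command, file names, result keys, and the 5% tolerance are all hypothetical placeholders, not part of the AE requirements; adapt them to your own workflow.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a run-and-validate helper for an artifact.
# Build command, paths, and tolerance below are placeholders.
import json
import subprocess
import sys

def run_experiment():
    # Build and run the workflow; replace with your own build/run commands.
    subprocess.run(["make", "all"], check=True)
    subprocess.run(["./run_experiment", "--input", "inputs/sample.json",
                    "--output", "results/measured.json"], check=True)

def validate(expected_path="expected/results.json",
             measured_path="results/measured.json",
             tolerance=0.05):
    # Compare measured numbers against the expected outputs shipped with
    # the artifact, allowing a small relative deviation.
    with open(expected_path) as f:
        expected = json.load(f)
    with open(measured_path) as f:
        measured = json.load(f)
    for key, ref in expected.items():
        got = measured[key]
        if abs(got - ref) > tolerance * abs(ref):
            sys.exit(f"FAIL: {key} = {got}, expected {ref} (+/-{tolerance:.0%})")
    print("All key results reproduced within tolerance.")

if __name__ == "__main__":
    run_experiment()
    validate()
```

A reviewer can then reproduce your key results with a single command and immediately see whether they match the expected outputs.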
Other acceptable methods include:
Note that your artifacts will receive the ACM "artifact available" badge only if they have been placed on a publicly accessible archival repository such as Zenodo, FigShare, or Dryad. You must provide the DOI automatically assigned to your artifact by these repositories in your final Artifact Appendix!
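Before adding the DOI to your Artifact Appendix, it is worth checking that it actually resolves. The sketch below uses the public doi.org resolver; the DOI value shown is a placeholder, not a real artifact.

```python
#!/usr/bin/env python3
# Minimal sketch: check that the DOI you plan to cite in the Artifact Appendix
# resolves via the public doi.org resolver. The DOI below is a placeholder;
# substitute the one assigned by Zenodo, FigShare, or Dryad.
import urllib.error
import urllib.request

def doi_resolves(doi: str) -> bool:
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status < 400  # redirects are followed automatically
    except urllib.error.URLError:
        return False

if __name__ == "__main__":
    doi = "10.5281/zenodo.1234567"  # placeholder DOI
    print("resolves" if doi_resolves(doi) else "does not resolve")
```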
Submit the artifact abstract and the PDF of your paper with the Artifact Appendix attached using the AE submission website provided by the event.
If your paper is published in the ACM Digital Library, you do not need to add reproducibility stamps yourself - ACM will add them to your camera-ready paper and make this information available for search! In other cases, AE chairs will tell you how to add the stamps to the first page of your paper.