Artifact Evaluation for ASPLOS 2022


Important dates

Paper decision: November 15, 2021
Artifact submission: December 1, 2021
Artifact decision: January 21, 2022
Conference: Feb 28-Mar 4, 2022 (Lausanne, Switzerland)

Artifact evaluation chairs

Artifact Evaluation Committee

  • Alex Grosul (Google)
  • Alexander Fuerst (Indiana University)
  • Alexander Midlash (Google)
  • Aporva Amarnath (IBM Research)
  • Asmita Pal (University of Wisconsin-Madison)
  • Bogdan Alexandru Stoica (University of Chicago)
  • Byung Hoon Ahn (University of California, San Diego)
  • Charles Eckman (Google)
  • Clement Poncelet (Uppsala University)
  • David Munday (Google)
  • Davide Conficconi (Politecnico di Milano)
  • Emanuele Vannacci (Vrije Universiteit Amsterdam)
  • Faruk Guvenilir (Microsoft)
  • Farzin Houshmand (UC Riverside)
  • Felippe Vieira Zacarias (UPC/BSC)
  • Francesco Minna (Vrije Universiteit Amsterdam)
  • Gaurav Verma (Stony Brook University, New York)
  • Hongyuan Liu (William & Mary)
  • Hugo Lefeuvre (The University of Manchester)
  • Kartik Lakshminarasimhan (Ghent University)
  • Korakit Seemakhupt (University of Virginia)
  • Lev Mukhanov (Queen's University Belfast)
  • Mahmood Naderan-Tahan (Ghent University)
  • Marcos Horro (CITIC, Universidade da Coruña, Spain)
  • Mário Pereira (NOVA LINCS & DI -- NOVA School of Science and Technology)
  • Mengchi Zhang (Facebook/Purdue University)
  • Miheer Vaidya (University of Utah)
  • Mohammad Loni (MDH)
  • Mohit Tekriwal (University of Michigan, Ann Arbor)
  • Narges Shadab (UC Riverside)
  • Peter Gavin (Google)
  • Rahol Rajan (Google)
  • Shehbaz Jaffer (University of Toronto)
  • Shijia Wei (UT Austin)
  • Stephen Longfield (Google)
  • Subhankar Pal (IBM Research)
  • Sumanth Gudaparthi (University of Utah)
  • Tirath Ramdas (HP Inc.)
  • Toluwanimi O. Odemuyiwa (University of California, Davis)
  • Utpal Bora (IIT Hyderabad)
  • Vlad-Andrei Bădoiu (University Politehnica of Bucharest)
  • Vlastimil Dort (Charles University)
  • Wei Tang (Princeton University)
  • Weiwei Jia (New Jersey Institute of Technology)
  • Wenshao Zhong (University of Illinois at Chicago)
  • Xiaolin Jiang (UC Riverside)
  • Xiaowei Shang (New Jersey Institute of Technology)
  • Xizhe Yin (University of California, Riverside)
  • Yifan Yang (Massachusetts Institute of Technology)
  • Yueying Li (Cornell University)
  • Yufan Xu (University of Utah)
  • Yuke Wang (University of California, Santa Barbara)
  • Zhipeng Jia (The University of Texas at Austin)
  • Zishen Wan (Georgia Tech)
  • Zixian Cai (Australian National University)
  • Ziyang Xu (Princeton University)

The process

Artifact evaluation promotes reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches. Authors of accepted papers are invited to formally describe supporting materials (code, data, models, workflows, results) using the standard Artifact Appendix template and submit it together with the materials for evaluation.

Note that this submission is voluntary and will not influence the final decision on the papers. The goal is to have an independent AE Committee validate the experimental results of accepted papers in a collaborative way with the authors, while helping readers find articles with available, functional, and validated artifacts!

Papers that successfully go through AE will receive a set of ACM badges of approval, printed on the papers themselves and available as metadata in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will also have the option to include up to two pages of their Artifact Appendix in the camera-ready paper. The badges are:
  • Artifact available
  • Artifact evaluated - functional
  • Results reproduced

Artifact preparation

You need to prepare an Artifact Appendix describing all software, hardware, and data set dependencies, the key results to be reproduced, and how to prepare, run, and validate experiments. Though the appendix is relatively intuitive and based on our past AE experience and your feedback, we strongly encourage you to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, and the AE FAQs before submitting artifacts for evaluation! You can find examples of Artifact Appendices in previously reproduced papers.

Since the AE methodology differs slightly across conferences, we introduced a unified Artifact Appendix with a Reproducibility Checklist to help readers understand what was evaluated and how! Furthermore, artifact evaluation sometimes helps uncover minor mistakes in the accepted paper - in that case you have a chance to add related notes and corrections to the Artifact Appendix of your camera-ready paper!

We strongly recommend that you provide at least some scripts to build your workflow, all inputs needed to run it, and some expected outputs to validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter Notebooks or plain README files. You can skip this step if you want to share your artifacts without validation of the experimental results - in that case your paper can still be eligible for the "artifact available" badge!
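
For illustration only, here is a minimal sketch of what such a validation script could look like, assuming a hypothetical run script and a two-column CSV of expected metrics; the file names, command, and tolerance are placeholders rather than part of any required template:

    # validate.py - hypothetical example: re-run the experiments and compare
    # the produced metrics against the expected results shipped with the artifact.
    import csv
    import subprocess
    import sys

    TOLERANCE = 0.05  # accept up to 5% deviation from the published numbers

    def load_metrics(path):
        """Read {benchmark: value} pairs from a two-column CSV file."""
        with open(path) as f:
            return {row[0]: float(row[1]) for row in csv.reader(f)}

    # Re-run the workflow (placeholder command) to regenerate results.csv.
    subprocess.run(["./run_experiments.sh"], check=True)

    expected = load_metrics("expected/results.csv")
    actual = load_metrics("results.csv")

    failures = []
    for name, ref in expected.items():
        value = actual.get(name)
        if value is None or abs(value - ref) > TOLERANCE * abs(ref):
            failures.append(name)

    if failures:
        sys.exit("Validation failed for: " + ", ".join(failures))
    print("All key results reproduced within tolerance.")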

Artifact submission

Submit the artifact abstract and the PDF of your paper (with the Artifact Appendix attached) via the AE submission website.

The (brief) abstract should describe your artifact, the minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. Do not forget to specify whether you use any proprietary software or hardware! This abstract will be used by evaluators during artifact bidding to make sure that they have access to the appropriate hardware and software and have the required skills.

Most of the time, the authors make their artifacts available to the evaluators via GitHub, GitLab, Bitbucket, or a similar private or public service. Public artifact sharing also allows optional "open evaluation", which we successfully validated at ADAPT'16 and ASPLOS-REQUEST'18; it lets the authors quickly fix issues encountered during evaluation before submitting the final version to archival repositories. Other acceptable methods include:

  • Using zip or tar files with all related code and data, particularly when your artifact must be rebuilt on reviewers' machines (for example, to have non-virtualized access to specific hardware); see the packaging sketch after this list.
  • Using Docker, VirtualBox, or other container and VM images.
  • Arranging remote access to the authors' machine with the software pre-installed - this is an exceptional case for when rare or proprietary software or hardware is used. You will need to send the access information privately to the AE chairs.
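
If you go the zip/tar route, a small packaging script can make the archive self-describing. Below is a minimal, hypothetical sketch using only the Python standard library; the directory layout, archive name, and manifest file are assumptions, not requirements:

    # package_artifact.py - hypothetical example: bundle the artifact directory
    # into a tarball and record SHA-256 checksums so reviewers can verify files.
    import hashlib
    import tarfile
    from pathlib import Path

    ARTIFACT_DIR = Path("artifact")          # assumed layout: code/, data/, README
    ARCHIVE = Path("asplos22_artifact.tar.gz")

    # Write a checksum manifest next to the files it describes.
    with open(ARTIFACT_DIR / "MANIFEST.sha256", "w") as manifest:
        for path in sorted(ARTIFACT_DIR.rglob("*")):
            if path.is_file() and path.name != "MANIFEST.sha256":
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest.write(f"{digest}  {path.relative_to(ARTIFACT_DIR)}\n")

    # Create the compressed archive that reviewers will download.
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(ARTIFACT_DIR, arcname=ARTIFACT_DIR.name)

    print(f"Wrote {ARCHIVE} ({ARCHIVE.stat().st_size} bytes)")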

Note that your artifacts will receive the ACM "artifact available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, FigShare, or Dryad. You must provide, in your final Artifact Appendix, the DOI automatically assigned to your artifact by such a repository!
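
As a rough illustration, the Zenodo upload can also be scripted against its REST deposition API. The sketch below assumes you have a Zenodo personal access token and a packaged archive; endpoint paths and metadata fields should be double-checked against the Zenodo developer documentation before use:

    # zenodo_upload.py - rough sketch of archiving an artifact on Zenodo to obtain
    # a DOI; see https://developers.zenodo.org for the authoritative API reference.
    import requests

    TOKEN = "YOUR_ZENODO_TOKEN"               # placeholder personal access token
    API = "https://zenodo.org/api/deposit/depositions"
    params = {"access_token": TOKEN}

    # 1. Create an empty deposition.
    dep = requests.post(API, params=params, json={}).json()

    # 2. Upload the artifact archive into the deposition's file bucket.
    with open("asplos22_artifact.tar.gz", "rb") as f:
        requests.put(dep["links"]["bucket"] + "/asplos22_artifact.tar.gz",
                     data=f, params=params)

    # 3. Attach the minimal metadata required for publishing.
    metadata = {"metadata": {
        "title": "Artifact for <paper title>",
        "upload_type": "software",
        "description": "Code and data to reproduce the paper's key results.",
        "creators": [{"name": "Lastname, Firstname"}],
    }}
    requests.put(f"{API}/{dep['id']}", params=params, json=metadata)

    # 4. Publish; the response contains the DOI to cite in the Artifact Appendix.
    published = requests.post(f"{API}/{dep['id']}/actions/publish",
                              params=params).json()
    print("DOI:", published.get("doi"))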

Artifact review

Reviewers will need to read the paper and then thoroughly go through the Artifact Appendix step by step to evaluate the artifact based on a set of reviewing guidelines.

Reviewers are strongly encouraged to communicate encountered issues to the authors immediately (and anonymously) via the HotCRP submission website, to give the authors time to resolve all problems! Note that our philosophy of artifact evaluation is not to fail problematic artifacts but to help the authors improve them (at least the publicly available ones) and pass the evaluation!

In the end, the AE chairs will decide which of the standard ACM reproducibility badges to award to a given artifact, based on all reviews as well as the authors' responses.

Questions and feedback

Please check the AE FAQs and feel free to ask questions or provide your feedback and suggestions via the public AE discussion group.