Artifact Evaluation for ASPLOS 2021

[ Back to the ASPLOS 2021 conference website ]

We thank Google for their generous sponsorship of the distinguished artifact awards!

Important dates

Paper decision: November 19, 2020
Intent to submit: December 4, 2020
Artifact submission: December 16, 2020
Artifact decision: January 21, 2021
Camera-ready paper: February 1, 2021
Artifacts: the list
Conference: April 1-5, 2021

Reproducibility chairs

Artifact Evaluation Committee

  • Ismail Akturk (University of Missouri, Columbia)
  • Utpal Bora (IIT Hyderabad)
  • Christin David Bose (Purdue University)
  • Rangeen Basu Roy Chowdhury (Intel Corporation)
  • Alexei Colin (USC Information Sciences Institute)
  • Weilong Cui (Google)
  • Tiziano De Matteis (ETH Zurich)
  • Daniele De Sensi (ETH Zurich)
  • Davide Del Vento (UCAR)
  • Haowei Deng (University of Maryland)
  • Murali Emani (Argonne National Laboratory)
  • Doug Evans (Google)
  • Umar Farooq (University of California, Riverside)
  • Elba Garza (Texas A&M University)
  • Kaan Genc (Ohio State University)
  • Zahra Ghodsi (New York University)
  • Vidushi Goyal (University of Michigan)
  • Jing Guo (Institute of Computing Technology, Chinese Academy of Sciences)
  • Faruk Guvenilir (Microsoft, The University of Texas at Austin)
  • Qijing Huang (UC Berkeley)
  • Jianming Huang (Huazhong University of Science and Technology)
  • Sitao Huang (University of Illinois at Urbana–Champaign)
  • Jeff Huynh (Amazon)
  • Sergio Iserte (Universitat Jaume I)
  • Animesh Jain (Amazon Web Services)
  • Anand Jayarajan (University of Toronto)
  • Weiwei Jia (New Jersey Institute of Technology)
  • Aditi Kabra (Carnegie Mellon University)
  • Iacovos G. Kolokasis (University of Crete and FORTH-ICS)
  • Pengfei Li (Huazhong University of Science and Technology)
  • Guangpu Li (University of Chicago)
  • Hao Li (Xi'an Jiaotong University)
  • Sihang Liu (University of Virginia)
  • Jianqiao Liu (Google)
  • Jiawen Liu (University of California, Merced)
  • Boran Liu (Institute of Computing Technology, Chinese Academy of Sciences)
  • Hongyuan Liu (College of William and Mary)
  • Stephen Longfield (Google)
  • Jie Lu (The Institute of Computing Technology of the Chinese Academy of Sciences)
  • Amrita Mazumdar (University of Washington)
  • Atefeh Mehrabi (Duke University)
  • David Munday (Google)
  • Eric Munson (University of Toronto)
  • Amir Hossein Nodehi Sabet (University of California, Riverside)
  • Toluwanimi O. Odemuyiwa (University of California, Davis)
  • Mayank Parasar (Georgia Institute of Technology / Samsung Austin Research & Development Center (SARC))
  • Jacques Pienaar (Google)
  • Thamir Qadah (Purdue University, West Lafayette)
  • Tirath Ramdas (HP)
  • Xiaowei Ren (University of British Columbia)
  • Alex Renda (MIT CSAIL)
  • Solmaz Salimi (Sharif University of Technology)
  • Abhishek Shah (Columbia University)
  • Junru Shao (OctoML)
  • Haichen Shen (Amazon)
  • Linghao Song (University of California, Los Angeles)
  • Tom St. John (Tesla)
  • Cesar Stuardo (University of Chicago)
  • Minh-Thuyen Thi (Institute List, CEA, Paris-Saclay University)
  • John Thorpe (UCLA)
  • Miheer Vaidya (University of Utah)
  • Yufan Xu (University of Utah)
  • Qiumin Xu (Google)
  • Victor Ying (MIT)
  • Felippe Zacarias (UPC/BSC)
  • Ming Zhang (Huazhong University of Science and Technology)
  • Jia Zhang (Tsinghua University)
  • Chao Zhang (Lehigh University)
  • Fang Zhou (The Ohio State University)
  • Yazhou Zu (Google)
  • Di Wu (Department of ECE, University of Wisconsin-Madison)

The process

Following the successful introduction of the artifact evaluation (AE) process at ASPLOS'20, we continue the reproducibility initiative at ASPLOS 2021!

Artifact evaluation promotes the reproducibility of experimental results and encourages code and data sharing, helping the community quickly validate and compare alternative approaches. Authors of accepted papers are invited to formally describe their supporting materials (code, data, models, workflows, results) using the standard Artifact Appendix template and submit them to the Artifact Evaluation (AE) process.

Note that this submission is voluntary and does not influence the final decision on the papers. Our goal is to help authors have the experimental results from their accepted papers validated by an independent AE Committee in a collaborative way, while helping readers find articles with available, functional, and validated artifacts!

Artifact submission

Prepare your submission and the unified Artifact Appendix and Checklist using the following guidelines (papers with Artifact Appendices from ASPLOS'20 may serve as examples). Register it at the ASPLOS'21 AE submission website. Your submission will then be reviewed according to the following guidelines.

Papers that successfully go through AE will receive a set of ACM badges of approval, printed on the papers themselves and available as metadata in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will have the option to include up to two pages of their Artifact Appendix in the camera-ready paper.

ACM reproducibility badges

Artifacts Available
Artifacts Evaluated - Functional
Results Reproduced

Questions and feedback

Please check the AE FAQs and the related Reddit discussion, and feel free to ask questions or share feedback and suggestions via the ASPLOS'21 AE Slack and the public AE discussion group.