Artifact Evaluation for MICRO 2023
Artifact evaluation promotes reproducibility of experimental results and encourages code and data sharing.
The authors will need to fill in this Artifact Appendix to describe the minimal software, hardware and data set requirements and to explain how to prepare, run and reproduce the key experiments. The Artifact Appendix should be appended to the accepted paper and submitted for evaluation via the MICRO'23 AE HotCRP website.
We introduced this Artifact Appendix to unify the description of experimental setups and results across different conferences. Though it is relatively intuitive, based on feedback from the community we encourage the authors to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, the AE FAQs and Artifact Appendices from past papers before submitting artifacts for evaluation.
This submission is voluntary and will not influence the final decision regarding the papers: our goal is to help the authors have their experiments validated by an independent AE Committee in a collaborative and constructive way. Furthermore, the authors can add notes and corrections to the Artifact Appendix if any mistakes are found during artifact evaluation.
The authors will communicate with evaluators via HotCRP after the submission.
We suggest that the authors make their artifacts available for evaluation via GitHub or a similar public or private service. Public artifact sharing allows the authors to quickly fix issues encountered during evaluation before submitting the final version to archival repositories. Other acceptable methods include:
Papers that successfully go through AE will receive a set of ACM badges of approval, printed on the papers themselves and available as meta information in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will have the option to include up to two pages of their Artifact Appendix in the camera-ready paper.
Badge | Criteria
Artifacts Available | General ACM guidelines: artifacts receive this badge only if they have been placed on a publicly accessible archival repository such as Zenodo, FigShare or Dryad with a DOI. The authors can provide the DOI of the final artifact at the very end of the AE process.
Artifacts Evaluated – Functional | General ACM guidelines.
Artifacts Evaluated – Reusable (pilot project) | To help digest the criteria for the ACM "Artifacts Evaluated – Reusable" badge, we have partnered with MLCommons to add their unified automation interface (MLCommons CM) to the shared artifacts to prepare, run and plot results. We believe that MLCommons CM captures the core tenets of this badge, so we have added it as one possible criterion for obtaining it. The authors can add the MLCommons CM interface to their artifacts themselves using this tutorial (see the sketch after this table).
Results Reproduced | General ACM guidelines and our extended guidelines.
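For illustration only, here is a minimal sketch of how an experiment wrapped with the MLCommons CM automation interface could be invoked from Python. It assumes the artifact has already been packaged as a CM script with the hypothetical tags "reproduce,micro23,experiment" and that the corresponding CM repository has been pulled; the actual tags and entry points for a given artifact are defined by its authors following the tutorial above.

    # A minimal sketch, not the interface of any specific artifact.
    # Assumptions: "pip install cmind" provides the cmind package, the authors'
    # CM repository has already been pulled, and their experiment is exposed as
    # a CM script with the hypothetical tags "reproduce,micro23,experiment".
    import cmind

    # Ask the CM "script" automation to run the artifact's experiment workflow
    # and stream its output to the console.
    r = cmind.access({'action': 'run',
                      'automation': 'script',
                      'tags': 'reproduce,micro23,experiment',  # hypothetical tags
                      'out': 'con'})

    # CM calls return a dictionary; a non-zero 'return' code signals an error.
    if r['return'] > 0:
        cmind.error(r)

Evaluators could then inspect the generated outputs and plots, and the same workflow can be re-run on another machine through the same interface, which is the kind of reusability this badge is meant to capture.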
Evaluators will read the paper and then go through the Artifact Appendix to evaluate the artifacts and reproduce the experiments based on the general ACM guidelines and our MICRO'23 guidelines.
Reviewers will communicate with the authors about any issues they encounter immediately (and anonymously) via the HotCRP submission website to give the authors time to resolve all problems! Note that our philosophy is not to fail problematic artifacts but to help the authors improve their public artifacts and pass the evaluation!
In the end, AE chairs will communicate with the authors and decide on a set of the standard ACM reproducibility badges to award to a given paper/artifact based on all reviews and the authors' responses.