We aim to make artifact submission as simple as possible. You just need to pack your artifact (code and data) using any publicly available tool you prefer. In exceptional cases, for example when rare hardware or proprietary software is required, you can arrange remote access to a machine with the software pre-installed.
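For example, a plain tarball is perfectly acceptable; the snippet below is just one possible way to create it, assuming (hypothetically) that your code and data are collected under an artifact/ directory:

```python
# One possible way to pack an artifact; any publicly available archiver
# (tar, zip, etc.) works just as well. The "artifact/" directory name is
# only an illustration.
import shutil

archive = shutil.make_archive(
    base_name="my-artifact",  # produces my-artifact.tar.gz
    format="gztar",
    root_dir=".",
    base_dir="artifact",      # directory containing your code and data
)
print(f"Created {archive}")
```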
Then, prepare a small, informal Artifact Evaluation appendix using our AE LaTeX template (shared by the PPoPP, CGO, PACT and SC conferences - see below) to explain to evaluators what your artifacts are and how to use them (you will be allowed to add up to 2 pages of this appendix to your final camera-ready paper). You can find examples of such AE appendices in the following papers: CGO'17, PPoPP'16, SC'16.
At least three reviewers will follow your guide to evaluate your artifacts and will then send you a report with the following overall assessment of your artifact based on our reviewing guidelines:
Badges for ACM conferences
Badges for non-ACM conferences
Check our AE criteria:
Artifacts Evaluated - Functional
Artifacts Evaluated - Reusable
The highest-ranked artifact, i.e. one that is not only reproducible but also easily customizable and reusable, typically receives a "distinguished artifact" award.
Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts, rather than an exercise in naming and shaming problematic ones. Therefore, we allow communication between authors and reviewers to fix raised issues until a given artifact can pass evaluation (unless a major problem is encountered). During this communication, AE chairs serve as a proxy to keep reviewers anonymous: the process is single-blind, i.e. reviewers know your identity since your paper is already accepted, but you do not know theirs.
Note that if you just want to make your artifacts publicly available (which is also encouraged) without having the experimental results validated, please go to the next step.
We strongly encourage you to at least provide scripts to prepare and run experiments, as well as to report and validate results. You may want to use Jupyter Notebooks for this purpose.
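Even a single top-level script that re-runs each experiment and checks the measured numbers against those reported in the paper saves evaluators considerable time. The sketch below is only illustrative: it assumes a hypothetical ./run.sh entry point that prints its result as JSON, along with made-up expected values and a made-up tolerance.

```python
# Minimal sketch of a "run and validate" script (all names and numbers are
# illustrative, not taken from any particular artifact).
import json
import subprocess

# Expected results reported in the paper, with an acceptable tolerance.
EXPECTED = {"benchmark_a": 12.3, "benchmark_b": 45.6}
TOLERANCE = 0.05  # 5% relative deviation

def run_experiment(name):
    """Run one experiment and return the measured value it prints as JSON."""
    out = subprocess.run(
        ["./run.sh", name], capture_output=True, text=True, check=True
    )
    return json.loads(out.stdout)["result"]

def main():
    for name, expected in EXPECTED.items():
        measured = run_experiment(name)
        deviation = abs(measured - expected) / expected
        status = "OK" if deviation <= TOLERANCE else "MISMATCH"
        print(f"{name}: measured={measured:.2f} expected={expected:.2f} [{status}]")

if __name__ == "__main__":
    main()
```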
Furthermore, our past Artifact Evaluation experience shows that the biggest burden for evaluators is dealing with numerous ad-hoc scripts to prepare, customize and run experiments, try other benchmarks, data sets, compilers and simulators, analyze results and compare them with the ones from the paper. Therefore, we also suggest using workflow frameworks that can customize and automate experiments in a unified way, such as:
Though you are not obliged to publicly release your artifacts (in fact, this is sometimes impossible due to various limitations), we strongly encourage you to share them with the community even if they are not open source. You can release them as auxiliary material in digital libraries, or use your institutional repository or various public services for code and data sharing.
Even accepted artifacts may have unforeseen behavior or limitations discovered during evaluation. You now have a chance to note these in the Artifact Appendix as future work.