This document (V20190108) provides guidelines for submitting your artifact for evaluation across a range of CS conferences and journals. It gradually evolves to define a common submission methodology based on our past Artifact Evaluations and open discussions, the ACM reviewing and badging policy (which we contributed to as part of the ACM taskforce on reproducibility), artifact-eval.org and your feedback (2018a, 2018b, 2017a, 2017b, 2014).

What to expect

We aim to formalize and unify artifact submission while keeping the process relatively simple. You will need to pack your artifacts (code and data) using any publicly available tool. In exceptional cases where rare hardware or proprietary software is used, you can arrange remote access to a machine with the pre-installed software. Then you need to prepare a small and informal Artifact Appendix using our AE LaTeX template (now used by CGO, PPoPP, Supercomputing, PACT, IA3, RTSS, ReQuEST and other ACM/IEEE conferences and workshops) to explain to evaluators what your artifact is and how to validate it. You will normally be allowed to add up to 2 pages of this Appendix to your final camera-ready paper. You will need to add this appendix to your paper and submit it to the AE submission website for a given event. You can find examples of such AE appendices in the following papers: ReQuEST-ASPLOS'18 (associated experimental workflow), CGO'17, PPoPP'16, SC'16. Note that since your paper is already accepted, artifact submission is single-blind, i.e. you can keep author names in your PDF!

Please do not forget to check the following artifact reviewing guidelines to understand how your artifact will be evaluated. In the end, you will receive a report with an overall assessment of your artifact and a set of ACM reproducibility badges.

Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts, rather than an exercise in naming and shaming problematic artifacts. We therefore allow continuous and anonymous communication between authors and reviewers via HotCRP to fix raised issues, until a given artifact passes evaluation or a major issue is detected.

Preparing artifacts for submission

You need to perform the following steps to submit your artifact for evaluation:
  1. Prepare the experimental workflow.

    You can skip this step if you just want to make your artifacts publicly available without validation of experimental results.

    You need to provide at least some scripts or Jupyter Notebooks to prepare and run experiments, as well as to report and validate results (see the minimal sketch at the end of this step).

    If you would like to use the Collective Knowledge framework to automate your workflow and think you might need some assistance, please contact us in advance! We are developing CK to help authors reduce the time and effort needed to prepare AI/ML/SW/HW workflows for artifact evaluation by reusing many data sets, models and frameworks already shared by the community in a common format. This, in turn, should enable evaluators to quickly validate results in an automated and portable way. Please see CK community use-cases and check out the following papers with CK workflows: ReQuEST-ASPLOS'18 (associated CK workflow), CGO'17, IA3'17, SC'15.
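
    For illustration only, here is a minimal Python sketch of such a run-and-validate script. The benchmark binary ./bench, its flags, the expected.json reference file and the speedup metric are all hypothetical placeholders - adapt them to your own artifact:

      #!/usr/bin/env python3
      """Minimal sketch of an AE experiment workflow. The benchmark binary,
      data set path, reference file and metric below are hypothetical."""
      import json
      import subprocess
      import sys

      def run_experiment():
          # Run the (hypothetical) benchmark and capture its JSON output.
          result = subprocess.run(["./bench", "--dataset", "inputs/small"],
                                  capture_output=True, text=True, check=True)
          return json.loads(result.stdout)

      def validate(measured, reference, tolerance=0.05):
          # Compare the measured metric against the value reported in the
          # paper, allowing a small relative deviation across machines.
          deviation = abs(measured - reference) / reference
          print(f"measured={measured:.3f} reference={reference:.3f} "
                f"deviation={deviation:.1%} (tolerance={tolerance:.0%})")
          return deviation <= tolerance

      if __name__ == "__main__":
          with open("expected.json") as f:
              reference = json.load(f)["speedup"]
          measured = run_experiment()["speedup"]
          sys.exit(0 if validate(measured, reference) else 1)

    Returning a non-zero exit code on validation failure makes it easy for reviewers (or an automated job) to check the outcome.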

  2. Pack your artifact (code and data) or provide easy access to it using any publicly available and free tool that you prefer or strictly require.

    For example, you can use the following:
    • Docker to pack only the code and data touched during the experiment.
    • VirtualBox to pack all code and data including the OS (typical images are around 2-3 GB; we strongly recommend avoiding images larger than 10 GB).
    • A standard zip or tar archive with all related code and data, particularly when an artifact should be rebuilt on a reviewer's machine (for example, to have non-virtualized access to specific hardware); see the packing sketch after this list.
    • A private or public Git or SVN repository.
    • Remote access to a machine with pre-installed software (for exceptional cases when rare hardware or proprietary software is used, or when your VM image is too large) - you will need to privately send the access information to the AE chairs. Please also avoid making any changes to the remote machine during evaluation unless explicitly agreed with the AE chairs - you can do so during the rebuttal phase if needed!
    • Check other tools that can be useful for artifact and workflow sharing.
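
    If you go the zip/tar route, packing can be as simple as the following Python sketch (the file and directory names are hypothetical placeholders for your own artifact layout):

      #!/usr/bin/env python3
      """Minimal sketch of packing an artifact into one archive using only
      the Python standard library; all paths below are hypothetical."""
      import tarfile

      ARTIFACT_PATHS = ["README.md", "LICENSE", "src", "scripts", "data"]

      with tarfile.open("artifact.tar.gz", "w:gz") as tar:
          for path in ARTIFACT_PATHS:
              # Directories are added recursively with relative paths, so
              # reviewers can unpack and rebuild everything in one place.
              tar.add(path)
      print("Created artifact.tar.gz")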

  3. Write a brief artifact abstract with a SW/HW check-list to informally describe your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected result is. In particular, stress if you use any proprietary software or hardware. Note that this is critical to help AE chairs select appropriate reviewers! If you use proprietary benchmarks or tools (SPEC, Intel compilers, etc.), we suggest providing a simplified test case with open-source software so that reviewers can quickly validate the functionality of your experimental workflow. A sketch of such an abstract follows this step.
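
    As an illustration, an abridged abstract with check-list might look as follows (all values below are hypothetical; see the AE template for the full list of fields):

      Abstract: This artifact contains the sources, data sets and scripts
      needed to reproduce the main experimental results of our paper.
      • Hardware: any x86-64 machine with at least 8 GB of RAM
      • Software: Ubuntu 18.04, GCC 7+, Python 3.6+
      • Data set: included in the archive (about 100 MB)
      • Metrics: execution time; speedup over the baseline
      • Output: JSON results and comparison plots
      • Disk space required: approximately 2 GB
      • Expected experiment time: about 3 hours
      • Publicly available: yes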

  4. Fill in the AE template (download here) and append it to the PDF of your (accepted) paper. Though the template should be relatively intuitive, we still strongly suggest checking out the extra notes about how to fill it in, based on our past AE experience.

  5. Submit the artifact abstract and the new PDF via the AE submission website provided by the event.

If you encounter problems, find ambiguities or have any questions, do not hesitate to get in touch with the AE community via the dedicated AE Google group.

If accepted

You will need to add up to 2 pages of your AE appendix to your camera-ready paper while removing all unnecessary or confidential information. This will help readers better understand what was evaluated. If your paper is published in the ACM Digital Library, you do not need to add reproducibility stamps yourself - ACM will add them to your camera-ready paper! In other cases, the AE chairs will tell you how to add a stamp to your paper.

Sometimes artifact evaluation helps discover minor mistakes in the accepted paper - in such cases, you now have a chance to add related notes and corrections to the Artifact Appendix of your camera-ready paper.

A few artifact examples from past conferences, workshops and journals

Methodology archive

We keep track of past submission and reviewing methodologies to let readers understand which version was used for papers with evaluated artifacts. Also see the original AE procedures for programming language conferences.

Thank you for participating in Artifact Evaluation!