This guide (V20180713) was prepared by Grigori Fursin and Bruce Childers with contributions from Michael Heroux, Michela Taufer and other colleagues to help you describe and submit your artifacts for evaluation across a range of CS conferences and journals.

What to expect

We aim to formalize and unify artifact submission while still keeping it as simple as possible. You will need to pack your artifacts (code and data) using any publicly available tool; in exceptional cases where rare hardware or proprietary software is used, you can arrange remote access to a machine with the pre-installed software. You then need to prepare a small and informal Artifact Appendix using our AE LaTeX template (now used by CGO, PPoPP, Supercomputing, PACT, IA3, RTSS, ReQuEST and other ACM/IEEE conferences and workshops) to explain to evaluators what your artifacts are and how to use them. You will normally be allowed to add up to 2 pages of this Appendix to your final camera-ready paper. Append this Appendix to your paper and submit it to the AE submission website for the given event. You can find examples of such AE appendices in the following papers: ReQuEST-ASPLOS'18 (associated CK workflow), CGO'17, PPoPP'16, SC'16.

At least three reviewers will follow your guide to evaluate your artifacts and will then send you a report with an overall assessment of your artifact based on our reviewing guidelines. A "met expectations" score or above means that your artifact successfully passed evaluation; depending on the event, it will then receive stamps of approval from the following list (ACM badges for ACM conferences, workshops and journals such as CGO, PPoPP, SC, PACT'18 and ReQuEST-ASPLOS'18; similar badges for non-ACM events such as PACT'16, PACT'17 and MLSys'19; check the AE criteria for details):

  • Artifacts Available
  • Artifacts Evaluated - Functional
  • Artifacts Evaluated - Reusable
  • Results reproduced
  • Results replicated

The highest-ranked artifact, i.e. one that is not only reproducible but also easily customizable and reusable, typically receives a "distinguished artifact" award.

Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts, rather than a way to name and shame problematic ones. We therefore allow communication between authors and reviewers to fix raised issues until a given artifact passes evaluation, unless a major problem is encountered. AE chairs will either serve as a proxy, to avoid revealing reviewers' identities, or allow direct anonymous communication between authors and reviewers via the submission website or via issues at public repositories such as GitHub, GitLab and Bitbucket (reviews are single-blind: your identity is known to reviewers, since your paper is already accepted, but not vice versa).

Preparing artifacts for submission

You need to perform the following steps to submit your artifact for evaluation:
  1. Prepare experimental workflow.

    Note that if you just want to make your artifacts publicly available (which is also encouraged) without validation of the experimental results, you can skip to the next step.

    You need to provide at least some scripts or Jupyter Notebooks to prepare and run experiments, as well as to report and validate the results (see the sketch at the end of this step).

    However, we have noticed during past Artifact Evaluations that the biggest burden for evaluators is dealing with numerous ad-hoc scripts to prepare, customize and run experiments, try other benchmarks, data sets, compilers and simulators, and analyze results and compare them with the ones from the paper.

    That is why we collaborate with ACM to encourage the use of workflow frameworks with portable package managers and unified APIs, such as Collective Knowledge (CK), instead of ad-hoc scripts, to help evaluators automate the installation, execution, validation and customization of your experiments.

    See how researchers use CK during ACM tournaments to co-design efficient SW/HW stacks: ACM Digital Library, GitHub with reusable CK workflows, ReQuEST report.

    Also note that the cTuning foundation and dividiti offer free community help to convert your artifacts to the CK format while reusing a growing number of unified CK components (programs, workflows, packages, software detection plugins, etc.). Check out the following papers with artifacts and workflows in the CK format: ReQuEST-ASPLOS'18 (associated CK workflow), CGO'17, IA3'17, SC'15.
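
    As an illustration only, a minimal experiment driver could look like the sketch below; the benchmark binary ./my_benchmark, its JSON output format and the reference value are hypothetical placeholders for your own workflow:

      #!/usr/bin/env python3
      # Hypothetical driver: run one experiment and validate the result
      # against a reference value from the paper (all names are placeholders).
      import json
      import subprocess
      import sys

      REFERENCE_TIME_S = 12.3   # hypothetical result reported in the paper
      TOLERANCE = 0.10          # accept up to 10% deviation

      def run_experiment():
          # Run the (hypothetical) benchmark; it is assumed to print JSON
          # such as {"time_s": 12.1} on stdout.
          out = subprocess.run(["./my_benchmark", "--dataset", "small"],
                               check=True, capture_output=True, text=True)
          return json.loads(out.stdout)["time_s"]

      if __name__ == "__main__":
          measured = run_experiment()
          deviation = abs(measured - REFERENCE_TIME_S) / REFERENCE_TIME_S
          print(f"measured={measured:.2f}s reference={REFERENCE_TIME_S:.2f}s "
                f"deviation={deviation:.1%}")
          sys.exit(0 if deviation <= TOLERANCE else 1)

    Frameworks such as CK essentially wrap steps like these behind a unified command-line interface and API, so that evaluators do not have to learn your ad-hoc conventions.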

  2. Pack your artifact (code and data) or provide easy access to it using any publicly available and free tool that you prefer or strictly require.

    For example, you can use the following:
    • VirtualBox to pack all code and data, including the OS (typical images are around 2-3 GB; we strongly recommend avoiding images larger than 10 GB).
    • Docker to pack only the code and data touched during the experiment.
    • A standard zip or tar archive with all related code and data, particularly when an artifact should be rebuilt on a reviewer's machine (for example, to have non-virtualized access to specific hardware); see the packing sketch after this list. You may also check the ReproZip tool to automatically pack your artifacts with all their dependencies.
    • A private or public Git or SVN repository.
    • Remote access to a machine with pre-installed software (in exceptional cases where rare hardware or proprietary software is used, or your VM image is too large) - you will need to privately send the access information to the AE chairs. Also, please avoid making any changes to the remote machine during evaluation unless explicitly agreed with the AE chairs - you can do so during the rebuttal phase if needed!
    • Check other tools that can be useful for artifact and workflow sharing.
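
    If you go the zip/tar route, a small packing script along the following lines can bundle code, data and documentation into one archive (the paths and the archive name are hypothetical; adapt them to your artifact):

      #!/usr/bin/env python3
      # Hypothetical packing script: bundle code, data and a README
      # into a single compressed archive for the evaluators.
      import tarfile

      FILES = ["README.md", "src", "data", "scripts"]   # placeholder paths

      with tarfile.open("my_artifact.tar.gz", "w:gz") as tar:
          for path in FILES:
              tar.add(path)   # directories are added recursively
      print("wrote my_artifact.tar.gz")

    Whichever tool you choose, your Artifact Appendix should explain how to unpack the result and use it.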

  3. Write a brief artifact abstract with a SW/HW checklist to informally describe your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated and what the expected results are. In particular, stress if you use any proprietary software or hardware - this is critical to help AE chairs select appropriate reviewers! If you use proprietary benchmarks or tools (SPEC, Intel compilers, etc.), we suggest providing a simplified test case with open-source software so that the functionality of your experimental workflow can be validated quickly.

  4. Fill in the AE template (download here) and append it to the PDF of your (accepted) paper. Though it should be relatively intuitive, we still strongly suggest that you check out the extra notes about how to fill in this template, based on our past AE experience.

  5. Submit the artifact abstract and the new PDF at the AE submission website of the given conference, tournament, workshop or journal.
If you encounter problems, find ambiguities or have any questions, do not hesitate to get in touch with the AE community via the dedicated AE Google group.

If accepted

We strongly encourage you to add up to 2 pages of your AE Appendix to your camera-ready paper, while removing all unnecessary or confidential information. This will help readers better understand what was evaluated. If your paper is published in the ACM Digital Library, you do not need to add the reproducibility stamps yourself - ACM will add them to your camera-ready paper! In other cases, the AE chairs will tell you how to add a stamp to your paper.

Though you are not obliged to release your artifacts publicly (in fact, it is sometimes impossible due to various limitations), we strongly encourage you to share them with the community. You can release them as standalone artifacts with a DOI in the ACM Digital Library (see the artifact example from the 1st ACM ReQuEST tournament), via other permanent repositories, or using your institutional repository (if it has a plan for permanent archiving).

Sometimes artifact evaluation helps discover minor mistakes in the accepted paper - in such cases you now have a chance to add related notes and corrections to the Artifact Appendix of your camera-ready paper.

A few artifact examples from past conferences, workshops and journals

Methodology archive

We keep an archive of past submission and reviewing methodologies to let readers understand which one was used for the papers with evaluated artifacts.

Thank you for participating in Artifact Evaluation!