News:
  • For SC'17 authors: you can find extra notes about how to fill in the Artifact Appendix here.
  • Slides from the CGO/PPoPP'17 AE discussion session on how to improve artifact evaluation are available here.
  • We co-authored the new ACM Result and Artifact Review and Badging policy in 2016 and plan to use it for the next CGO/PPoPP/PACT AE.

This guide (V20170414) was prepared by Grigori Fursin and Bruce Childers, with contributions from Michael Heroux, Michela Taufer and other colleagues, to help you describe and submit your artifacts for evaluation across a range of CS conferences and journals. It gradually evolves based on our long-term vision and on your feedback from our public Artifact Evaluation discussion sessions.

What to expect

We aim to make artifact submission as simple as possible. You just need to pack your artifact (code and data) using any publicly available tool you prefer. In exceptional cases, when rare hardware or proprietary software is used, you can arrange remote access to a machine with the pre-installed software.

Then, you need to prepare a small and informal Artifact Evaluation appendix using our AE LaTeX template (common for the PPoPP, CGO, PACT and SC conferences - see below) to explain to evaluators what your artifacts are and how to use them (you will be allowed to add up to 2 pages of this appendix to your final camera-ready paper). You can find examples of such AE appendices in the following papers: CGO'17, PPoPP'16, SC'16.

At least three reviewers will follow your guide to evaluate your artifacts and will then send you a report with an overall assessment of your artifact based on our reviewing guidelines. A "met expectations" score or above means that your artifact successfully passed evaluation and will receive the following stamps of approval, depending on the conference:
  • Badges for ACM conferences (CGO, PPoPP, SC)
  • Badges for non-ACM conferences (PACT)

Check our AE criteria:
  • Artifacts Evaluated - Functional
  • Artifacts Evaluated - Reusable
  • Artifacts Available
  • Results validated

The highest-ranked artifact, i.e. one which is not only reproducible but also easily customizable and reusable, typically receives a "distinguished artifact" award.

Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts, rather than an exercise in naming and shaming problematic ones. We therefore allow communication between authors and reviewers to fix raised issues until a given artifact passes evaluation (unless a major problem is encountered). In such cases, the AE chairs serve as a proxy to avoid revealing the reviewers' identity (the review is blind: your identity is known to the reviewers, since your paper is already accepted, but not vice versa).

Preparing artifacts for submission

You need to perform the following steps to submit your artifact for evaluation:
  1. Prepare experimental workflow.

    Note that if you just want to make your artifacts publicly available (which is also encouraged) without validating experimental results, please go straight to the next step.

    We strongly encourage you to at least provide some scripts to prepare and run experiments, as well as to report and validate results (see the sketch at the end of this step). You may be interested in using Jupyter Notebooks for this purpose.

    Furthermore, from our past Artifact Evaluation experience, we have noticed that the biggest burden for evaluators is dealing with numerous ad-hoc scripts to prepare, customize and run experiments, try other benchmarks, data sets, compilers and simulators, analyze results and compare them with the ones from the paper. Therefore, we also suggest using workflow frameworks that can customize and automate experiments in a unified way, such as:

    • Collective Knowledge framework (CK) to convert your artifacts into reusable components with a JSON API; assemble experimental workflows from them like LEGO bricks (such as multi-objective DNN optimization); automate package installation; crowdsource and reproduce experiments; and enable interactive reports. See the CK-based cross-platform customizable workflow from the University of Cambridge, which won the distinguished artifact award at CGO'17, as well as its AE appendix and the interactive CK dashboard comparing empirical results with the ones from the paper.
    • OCCAM (Open Curation for Computer Architecture Modeling) is a project that serves as the catalyst for the tools, education, and community-building needed to bring openness, accountability, comparability, and repeatability to computer architecture experimentation.
    • Code Ocean is a cloud-based executable research platform.
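
    For illustration only, here is a minimal sketch of such a "run and validate" script; the command, file names and reference numbers are placeholders rather than part of any specific framework:

      #!/usr/bin/env python3
      """Hypothetical run-and-validate script for an artifact: the command,
      file names and reference numbers below are placeholders."""

      import json
      import subprocess

      # Command that reproduces one experiment from the paper (placeholder).
      EXPERIMENT_CMD = ["./build/benchmark", "--input", "data/sample.in",
                        "--output", "results/metrics.json"]

      # Key results reported in the paper and an acceptable tolerance
      # to absorb run-to-run variation on the evaluator's machine.
      REFERENCE = {"speedup": 2.4}
      TOLERANCE = 0.10  # 10% relative difference

      def main():
          # 1. Run the experiment and fail loudly if it does not complete.
          subprocess.run(EXPERIMENT_CMD, check=True)

          # 2. Load the measured results produced by the experiment.
          with open("results/metrics.json") as f:
              measured = json.load(f)

          # 3. Compare each metric against the value reported in the paper.
          for key, ref in REFERENCE.items():
              rel_diff = abs(measured[key] - ref) / ref
              status = "OK" if rel_diff <= TOLERANCE else "MISMATCH"
              print(f"{key}: measured={measured[key]:.3f} reference={ref:.3f} ({status})")

      if __name__ == "__main__":
          main()

    A Jupyter Notebook or a CK workflow can play the same role in a more structured and interactive way.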

  2. Pack your artifact (code and data) or provide easy access to it using any publicly available and free tool that you prefer or strictly require.

    For example, you can use the following:
    • Virtual Box to pack all code and data including the OS (typical images are around 2-3 GB; we strongly recommend avoiding images larger than 10 GB).
    • Docker to pack only the code and data touched during the experiment.
    • Standard zip or tar archive with all related code and data, particularly when the artifact should be rebuilt on a reviewer's machine (for example, to have non-virtualized access to specific hardware); see the packing sketch after this list. You may check the ReproZip tool to automatically pack your artifacts with all their dependencies.
    • Private or public Git or SVN repository.
    • Arrange remote access to a machine with pre-installed software (for exceptional cases when rare hardware or proprietary software is used, or when the VM image is too large) - you will need to send the access information privately to the AE chairs. Also, please avoid making any changes to the remote machine during evaluation (unless explicitly agreed with the AE chairs) - you can do so during the rebuttal phase, if needed!
    • Check other tools which can be useful for artifact and workflow sharing.
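
    If you choose the plain zip/tar route, the following minimal Python sketch (directory and archive names are placeholders) shows one way to pack an artifact and record a checksum that evaluators can use to verify their download:

      #!/usr/bin/env python3
      """Hypothetical packing script: archives an artifact directory and
      records a checksum (directory and archive names are placeholders)."""

      import hashlib
      import tarfile

      ARTIFACT_DIR = "my-artifact"          # code + data + README
      ARCHIVE_NAME = "my-artifact.tar.gz"   # archive to upload or share

      # 1. Pack the whole artifact directory into a compressed tarball.
      with tarfile.open(ARCHIVE_NAME, "w:gz") as tar:
          tar.add(ARTIFACT_DIR)

      # 2. Compute a SHA-256 checksum that can be listed in the AE appendix,
      #    so reviewers can confirm they evaluate exactly what you submitted.
      sha256 = hashlib.sha256()
      with open(ARCHIVE_NAME, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              sha256.update(chunk)

      print(f"{ARCHIVE_NAME}  sha256={sha256.hexdigest()}")

    Tools such as ReproZip, Docker or Virtual Box remain preferable when system-level dependencies need to be captured automatically.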

    Note that we now provide a free service to help authors prepare their artifacts for submission and make them more portable, reusable and customizable using the above open-source frameworks. If you are interested, feel free to contact the AE chairs for more details.

  3. Write a brief artifact abstract with an informal check-list describing your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected result is. It will be used to select appropriate reviewers.

  4. Fill in the AE template (download here) and append it to the PDF of your accepted paper. Though it should be relatively intuitive, we still strongly suggest checking out the extra notes on how to fill in this template, based on our past AE experience.

  5. Submit the artifact abstract and the new PDF at the AE submission website (such as EasyChair).

If you encounter problems, find ambiguities or have any questions, do not hesitate to contact the AE steering committee!

If accepted

The AE chairs will tell you how to add the appropriate stamps to the final camera-ready version of your paper. We also strongly encourage you to add up to 2 pages of your AE appendix to your camera-ready paper, while removing all unnecessary or confidential information. This will help readers better understand what was evaluated.

Though you are not obliged to publicly release your artifacts (in fact, it is sometimes impossible due to various limitations), we strongly encourage you to share them with the community even if they are not open-source. You can release them as auxiliary material in digital libraries, or use your institutional repository and various public services for code and data sharing.

Even accepted artifacts may have unforeseen behavior and limitations discovered during evaluation. You now have a chance to add related notes to the Artifact Appendix as future work.

A few examples of accepted artifacts from past conferences, workshops and journals

Methodology archive

We keep track of all past versions of the submission and reviewing methodology so that readers can tell which version was used for papers with evaluated artifacts.

Thank you for participating in Artifact Evaluation!