This document (V20190109) provides guidelines for submitting your artifact for evaluation across a range of CS conferences and journals. It gradually evolves to define a common submission methodology based on our past Artifact Evaluations and open discussions, the ACM reviewing and badging policy (which we contributed to as part of the ACM taskforce on reproducibility), artifact-eval.org, and your feedback (2018a, 2018b, 2017a, 2017b, 2014).
We aim to formalize and unify artifact submission while keeping it relatively simple.
You will need to pack your artifacts (code and data) using any publicly available tool. In exceptional cases, when rare hardware or proprietary software is used, you can arrange remote access to a machine with the pre-installed software. Then you need to prepare a small and informal Artifact Appendix using our AE LaTeX template (now used by CGO, PPoPP, Supercomputing, PACT, IA3, RTSS, ReQuEST and other ACM/IEEE conferences and workshops) to explain to evaluators what your artifact is and how to validate it. You will normally be allowed to add up to 2 pages of this Appendix to your final camera-ready paper. Add this appendix to your paper and submit it to the AE submission website for the given event.
You can find examples of such AE appendices in the following papers: ReQuEST-ASPLOS'18 (associated experimental workflow), CGO'17, PPoPP'16, SC'16.
Note that since your paper is already accepted, artifact submission is single-blind, i.e. you can add author names to your PDF!
Please do not forget to check the artifact reviewing guidelines to understand how your artifact will be evaluated. In the end, you will receive a report with an overall assessment of your artifact and a set of ACM reproducibility badges.
Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts, rather than a means of naming and shaming problematic ones. We therefore allow continuous and anonymous communication between authors and reviewers via HotCRP to fix any raised issues until the artifact passes evaluation or a major issue is detected.
You need to perform the following steps to submit your artifact for evaluation:
- Prepare your experimental workflow.
You can skip this step if you just want to make your artifacts publicly available without validation of the experimental results.
Otherwise, you need to provide at least some scripts or Jupyter Notebooks to prepare and run experiments, as well as to report and validate the results (a minimal sketch of such a driver script is shown after this step).
If you would like to use the Collective Knowledge framework to automate your workflow and think you might need some assistance, please contact us in advance! We are developing CK to help authors reduce their time and effort when preparing AI/ML/SW/HW workflows for artifact evaluation by reusing many data sets, models and frameworks already shared by the community in a common format. This, in turn, should enable evaluators to quickly validate results in an automated and portable way.
Please see the CK community use-cases and check out the following papers with CK workflows: ReQuEST-ASPLOS'18 (associated CK workflow), CGO'17, IA3'17, SC'15.
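For illustration, here is a minimal, hedged sketch of such a driver script in Python. The binary name `./run_benchmark`, its JSON output format, the `reference.json` file and the 5% tolerance are all hypothetical placeholders, not part of any specific artifact or framework:

```python
#!/usr/bin/env python3
"""Hypothetical driver script: runs an experiment and validates the
results against reference numbers shipped with the artifact."""

import json
import subprocess
import sys

TOLERANCE = 0.05  # accept up to 5% deviation for performance numbers

def run_experiment():
    # Assumption: the artifact builds ./run_benchmark, which prints
    # a JSON dictionary of measurements to stdout.
    out = subprocess.run(["./run_benchmark"], capture_output=True,
                         text=True, check=True)
    return json.loads(out.stdout)

def validate(results, reference):
    ok = True
    for key, expected in reference.items():
        got = results.get(key)
        if got is None or abs(got - expected) > TOLERANCE * abs(expected):
            print(f"MISMATCH {key}: expected ~{expected}, got {got}")
            ok = False
        else:
            print(f"OK {key}: {got}")
    return ok

if __name__ == "__main__":
    # Assumption: reference.json holds the numbers reported in the paper.
    with open("reference.json") as f:
        reference = json.load(f)
    sys.exit(0 if validate(run_experiment(), reference) else 1)
```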
- Pack your artifact (code and data) or provide easy access to it using any publicly available and free tool you prefer or strictly require.
For example, you can use the following:
- Docker to pack only the code and data touched during experiments.
- VirtualBox to pack all code and data including the OS (typical images are around 2-3 GB; we strongly recommend avoiding images larger than 10 GB).
- A standard zip or tar archive with all related code and data, particularly when an artifact should be rebuilt on a reviewer's machine (for example, to have non-virtualized access to specific hardware); see the packing sketch after this step.
- A private or public Git or SVN repository.
- Arranged remote access to a machine with the pre-installed software (for exceptional cases when rare hardware or proprietary software is used, or when your VM image is too large); you will need to privately send the access information to the AE chairs. Also, please avoid making any changes to the remote machine during evaluation unless explicitly agreed with the AE chairs; you can make such changes during the rebuttal phase if needed!
- Check other tools that can be useful for artifact and workflow sharing.
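As a simple illustration of the zip/tar option above, the following Python sketch packs a hypothetical artifact into a single compressed archive; all paths are placeholders that you would adapt to your own layout:

```python
#!/usr/bin/env python3
"""Hypothetical packing script: bundles an artifact's code, data and
scripts into artifact.tar.gz for submission (all paths are placeholders)."""

import tarfile

# Directories and files to ship with the artifact; adjust to your layout.
ITEMS = ["src", "data", "scripts", "README.md", "LICENSE"]

with tarfile.open("artifact.tar.gz", "w:gz") as tar:
    for item in ITEMS:
        # arcname puts everything under a clean top-level directory
        tar.add(item, arcname=f"artifact/{item}")

print(f"Packed {len(ITEMS)} items into artifact.tar.gz")
```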
- Write a brief artifact abstract with a SW/HW check-list to informally describe your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. In particular, stress if you use any proprietary software or hardware. Note that this is critical to help AE chairs select appropriate reviewers!
If you use proprietary benchmarks or tools (SPEC, Intel compilers, etc.), we suggest you provide a simplified test case based on open-source software so that reviewers can quickly validate the functionality of your experimental workflow (see the sketch after this step).
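For example, a smoke test along the following lines (a hypothetical Python sketch; `run_experiment.py` and its flags are placeholders for your own workflow entry point) lets reviewers quickly confirm that the workflow runs end to end without the proprietary components:

```python
#!/usr/bin/env python3
"""Hypothetical smoke test: runs the experimental workflow end to end on a
tiny open-source workload instead of a proprietary benchmark such as SPEC."""

import subprocess
import sys

# Placeholders: replace with your workflow entry point and a small open input.
cmd = [sys.executable, "run_experiment.py",
       "--benchmark", "open-test-case",  # tiny open-source workload
       "--iterations", "1"]              # single run: functionality check only

result = subprocess.run(cmd)
print("Workflow", "OK" if result.returncode == 0 else "FAILED")
sys.exit(result.returncode)
```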
- Fill in the AE template (download here) and append it to the PDF of your (accepted) paper.
Though it should be relatively intuitive, we still strongly suggest you check out the extra notes on how to fill in this template, based on our past AE experience.
- Submit the artifact abstract and the new PDF via the AE submission website provided by the event.
If you encounter problems, find ambiguities or have any questions, do not hesitate to get in touch with the AE community via the dedicated AE Google group.
You will need to add up to 2 pages of your AE appendix to your camera-ready paper while removing all unnecessary or confidential information. This will help readers better understand what was evaluated. If your paper will be published in the ACM Digital Library, you do not need to add reproducibility stamps yourself - ACM will add them to your camera-ready paper! In other cases, the AE chairs will tell you how to add a stamp to your paper. Sometimes artifact evaluation helps discover minor mistakes in the accepted paper - in such cases, you now have a chance to add related notes and corrections to the Artifact Appendix of your camera-ready paper.
Examples of evaluated artifacts from past events:
- "Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe" (Paper DOI, Artifact DOI, Original artifact, CK workflow, CK results)
- "Software Prefetching for Indirect Memory Accesses", CGO 2017, dividiti award for the portable and customizable CK-based workflow (Sources at GitHub, PDF with AE appendix, CK dashboard snapshot)
- "Optimizing Word2Vec Performance on Multicore Systems", IA3 at Supercomputing 2017, dividiti award for the portable and customizable CK-based workflow (Sources at GitHub, PDF with AE appendix)
- "Self-Checkpoint: An In-Memory Checkpoint Method Using Less Space and its Practice on Fault-Tolerant HPL", PPoPP 2017 (example of a public evaluation via HPC and supercomputer mailing lists: GitHub discussions)
- "Lift: A Functional Data-Parallel IR for High-Performance GPU Code Generation", CGO 2017 (example of a public evaluation with a bug fix: GitLab discussions; example of a paper with an AE Appendix and a stamp: PDF; CK workflow for this artifact: GitHub; CK concepts: blog)
- "Gunrock: A High-Performance Graph Processing Library on the GPU", PPoPP 2016 (PDF with AE appendix and GitHub)
- "GEMMbench: a framework for reproducible and collaborative benchmarking of matrix multiplication", ADAPT 2016 (example of a CK-powered artifact reviewed and validated by the community via Reddit)
- "Integrating algorithmic parameters into benchmarking and design space exploration in dense 3D scene understanding", PACT 2016 (example of interactive graphs and artifacts in the Collective Knowledge format)
- "Polymer: A NUMA-aware Graph-structured Analytics Framework", PPoPP 2015 (GitHub and personal web page)
- "A graph-based higher-order intermediate representation", CGO 2015 (GitHub)
- "MemorySanitizer: fast detector of uninitialized memory use in C++", CGO 2015 (added to LLVM)
- "Predicate RCU: an RCU for scalable concurrent updates", PPoPP 2015 (BitBucket)
- "Low-Overhead Software Transactional Memory with Progress Guarantees and Strong Semantics", PPoPP 2015 (SourceForge and Jikes RVM)
- "More than You Ever Wanted to Know about Synchronization", PPoPP 2015 (GitHub)
- "Roofline-aware DVFS for GPUs", ADAPT 2014 (ACM DL, Collective Knowledge repository)
- "Many-Core Compiler Fuzzing", PLDI 2015 (example of an artifact with a CK-based experimental workflow and live results)