News:
-
Do not forget to provide a list of hardware, software, benchmark and data set dependencies in your artifact abstract - this is essential for finding appropriate evaluators!
-
Participate in the 1st reproducible SW/HW co-design competition
at ASPLOS'18 (a submission template with the Artifact Appendix is available here).
-
For SC'17 authors: you can find extra notes about how to fill in the Artifact Appendix here.
-
Slides from the CGO/PPoPP'17 AE discussion session on how to improve artifact evaluation are available here.
-
We co-authored a new ACM Result and Artifact Review and Badging policy in 2016 and now use it for the CGO and PPoPP AE.
This guide (V20171101) was prepared by
Grigori Fursin
and
Bruce Childers with contributions from
Michael Heroux,
Michela Taufer
and other
colleagues
to help you describe and submit your artifacts for evaluation across a range of CS conferences
and journals.
It gradually evolves based on
our long-term vision
and your feedback during our
public Artifact Evaluation discussion sessions.
We aim to formalize and unify artifact submission while keeping it as simple as possible.
You will need to pack your artifacts (code and data) using any publicly available tool
you prefer. In some exceptional cases, when rare hardware or proprietary software is used,
you can arrange remote access to a machine with the pre-installed software.
We strongly encourage you to use workflow frameworks such as CK
to unify the preparation and validation of experiments (ACM is currently evaluating the possibility of
integrating CK with the ACM Digital Library; the cTuning foundation
also provides free community help to convert your artifacts to the CK format).
Then you need to prepare a small and informal Artifact Evaluation appendix
using our AE LaTeX template (now used by CGO, PPoPP, SuperComputing, PACT, IA3, RTSS and other ACM/IEEE conferences and workshops)
to explain to evaluators what your artifacts are and how to use them
(you will be allowed to add up to 2 pages of this appendix to your final camera-ready paper).
You can find examples of such AE appendices in the following papers:
CGO'17,
PPoPP'16,
SC'16.
At least three reviewers will follow your guide to evaluate your artifacts and will then send you a report
with one of the following overall assessments of your artifact, based
on our reviewing guidelines:
-
exceeded expectations
-
met expectations
-
fell below expectations
where a score of "met expectations" or above means that your artifact
successfully passed evaluation and will receive the appropriate stamps of approval (ACM badges),
depending on the conference.
The highest-ranked artifact, which is not only reproducible
but also easily customizable and reusable, typically
receives a "distinguished artifact" award.
Since our eventual goal is to promote collaborative and reproducible research,
we see AE as a cooperative process between authors
and reviewers to validate shared artifacts (rather than a way of naming and shaming
problematic ones). Therefore, we allow communication between authors
and reviewers to fix raised issues until a given artifact passes evaluation
(unless a major problem is encountered).
In such cases, the AE chairs will serve as a proxy to avoid
revealing the reviewers' identities (reviews are single-blind:
your identity is known to the reviewers since your paper
has already been accepted, but not vice versa).
You need to perform the following steps to submit your artifact for evaluation:
-
Prepare the experimental workflow.
Note that if you just want to make your artifacts publicly available
(which is also encouraged) without validation of experimental results, please go to the next step.
We strongly encourage you to at least provide some scripts to prepare and run experiments,
as well as to report and validate results. You may want to use
Jupyter Notebooks for this purpose; a minimal Python sketch of such a driver script is shown right after this list of steps.
Furthermore, from our past Artifact Evaluation experience,
we have noticed that the biggest burden for evaluators is dealing with numerous ad-hoc scripts to prepare, customize and run experiments,
to try other benchmarks, data sets, compilers and simulators, and to analyze results and compare them with those from the paper.
That is why we collaborate with ACM to unify the packing and sharing of artifacts as reusable and customizable components
using the Collective Knowledge framework and OCCAM
(see the ACM announcement).
You are not obliged to use CK or OCCAM, since they may only influence the "Artifacts Evaluated - Reusable" badge
and not the overall evaluation. However, if you are interested in trying CK,
the cTuning foundation
offers free help to convert your workflows
to the CK format while reusing
a growing number of open CK artifacts.
Feel free to contact Grigori Fursin
as soon as possible for more details. The highest-ranked artifacts shared in the CK format
will also receive a $300 Amazon gift card from dividiti.
Check out the following artifacts shared
using the CK framework: CGO'17 paper,
IA3 @ SuperComputing'17 paper,
SuperComputing'15 paper, etc.
-
Pack your artifact (code and data) or provide easy access to it
using any publicly available and free tool you prefer or strictly require.
For example, you can use the following:
-
VirtualBox to pack all code and data including the OS
(typical images are around 2-3 GB; we strongly recommend avoiding images larger than 10 GB).
-
Docker to pack only the code and data touched during experiments.
-
A standard zip or tar archive with all related code and data, particularly when an artifact
should be rebuilt on a reviewer's machine (for example, to have non-virtualized access to specific hardware);
a minimal Python packing sketch is also shown after this list of steps.
You may check the ReproZip tool to automatically pack your artifacts with all dependencies.
-
A private or public Git or SVN repository.
-
Arrange remote access to a machine with the pre-installed software
(an exceptional case when rare hardware or proprietary software is used, or when your VM image is too large)
- you will need to privately send the access information to the AE chairs. Also, please avoid making any changes
to the remote machine during evaluation unless explicitly agreed with the AE chairs - you can do this during
the rebuttal phase if needed!
-
Check other tools
which can be useful for artifact and workflow sharing.
-
Write a brief artifact abstract with a SW/HW check-list to informally describe your artifact,
including minimal hardware and software requirements, how it supports your paper, how it can be validated, and
what the expected results are. It will be used to select appropriate reviewers.
-
Fill in and append the AE template (download here) to the PDF of your accepted paper.
Though it should be relatively intuitive, we still strongly suggest that you
check out the extra notes
about how to fill in this template, based on our past AE experience.
-
Submit the artifact abstract and the new PDF at the AE submission website of your conference.
If you encounter problems, find ambiguities or have any questions,
do not hesitate to contact the AE steering committee!
The AE chairs will tell you how to add the appropriate stamps to the final camera-ready version of your paper.
We also strongly encourage you to add up to 2 pages of your AE appendix
to your camera-ready paper, removing all unnecessary or confidential information.
This will help readers better understand what was evaluated.
Though you are not obliged to publicly release your artifacts
(in fact, it is sometimes impossible due to various limitations),
we strongly encourage you to share them with the community
even if they are not open-source.
You can release them as auxiliary material in Digital Libraries
or use your institutional repository
and various public services for code and data sharing.
Even accepted artifacts may have some unforeseen behavior and limitations
discovered during evaluation. You now have a chance to add related notes
to the Artifact Appendix as future work.
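As an illustration of step 1, the following Python sketch shows one possible shape for a driver script that runs an experiment and validates the result against the numbers reported in the paper. It is only a sketch under stated assumptions: the benchmark binary ./bench, the input data/sample.txt, the file reference_results.json and its median_time_s field are all hypothetical placeholders, not anything required by AE.

    #!/usr/bin/env python3
    """Illustrative AE driver: run an experiment and check it against the paper's numbers."""
    import json
    import statistics
    import subprocess

    REPETITIONS = 5    # repeat to reduce measurement noise
    TOLERANCE = 0.10   # accept results within 10% of the value reported in the paper

    def run_once():
        # Hypothetical benchmark binary that prints its execution time in seconds.
        out = subprocess.run(["./bench", "--input", "data/sample.txt"],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    def main():
        measured = statistics.median(run_once() for _ in range(REPETITIONS))

        # Reference number copied from the paper and stored alongside the artifact.
        with open("reference_results.json") as f:
            reference = json.load(f)["median_time_s"]

        deviation = abs(measured - reference) / reference
        print(f"measured={measured:.3f}s reference={reference:.3f}s deviation={deviation:.1%}")
        print("PASSED" if deviation <= TOLERANCE else "FAILED: outside tolerance")

    if __name__ == "__main__":
        main()

Even a small script like this lets evaluators reproduce a key result with a single command instead of piecing together ad-hoc steps.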
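Similarly, for the plain zip/tar option in step 2, the packing itself can be scripted so that reviewers always receive the same set of files. Below is a minimal sketch assuming an illustrative layout with README.md, src/, data/ and scripts/; none of these names, nor the archive name, are mandated by AE.

    #!/usr/bin/env python3
    """Illustrative packing script: bundle code, data and scripts into one tar.gz archive."""
    import tarfile
    from pathlib import Path

    ARTIFACT_NAME = "my-artifact"                       # illustrative archive name
    CONTENTS = ["README.md", "src", "data", "scripts"]  # adjust to your own layout

    def main():
        archive = Path(f"{ARTIFACT_NAME}.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            for item in CONTENTS:
                if not Path(item).exists():
                    raise SystemExit(f"missing artifact component: {item}")
                # Place everything under one top-level directory for the reviewers.
                tar.add(item, arcname=f"{ARTIFACT_NAME}/{item}")
        print(f"created {archive} ({archive.stat().st_size / 1e6:.1f} MB)")

    if __name__ == "__main__":
        main()

Whatever tool you choose, the goal is the same: a reviewer should be able to unpack or clone the artifact and find everything mentioned in your Artifact Appendix.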
-
"Software Prefetching for Indirect Memory Accesses", CGO 2017, dividiti award for the portable and customizable CK-based workflow (Sources at GitHub, PDF with AE appendix, CK dashboard snapshot)
-
"Optimizing Word2Vec Performance on Multicore Systems", IA3 at Computing 2017, dividiti award for the portable and customizable CK-based workflow (Sources at GitHub, PDF with AE appendix)
-
"Self-Checkpoint: An In-Memory Checkpoint Method Using Less Space and its Practice on Fault-Tolerant HPL", PPoPP 2017 (example of a public evaluation via HPC and supercomputer mailing lists: GitHub discussions)
-
"Lift: A Functional Data-Parallel IR for High-Performance GPU Code Generation", CGO 2017 (example of a public evaluation with a bug fix: GitLab discussions,
example of a paper with AE Appendix and a stamp: PDF,
CK workflow for this artifact: GitHub,
CK concepts: blog)
-
"Gunrock: A High-Performance Graph Processing Library on the GPU", PPoPP 2016 (PDF with AE appendix and GitHub)
-
"GEMMbench: a framework for reproducible and collaborative benchmarking of matrix multiplication", ADAPT 2016 (example of a CK-powered artifact reviewed and validated by the community via Reddit)
-
"Integrating algorithmic parameters into benchmarking and design space exploration in dense 3D scene understanding", PACT 2016 (example of interactive graphs and artifacts in the Collective Knowledge format)
-
"Polymer: A NUMA-aware Graph-structured Analytics Framework", PPoPP 2015 (GitHub and personal web page)
-
"A graph-based higher-order intermediate representation", CGO 2015 (GitHub)
-
"MemorySanitizer: fast detector of uninitialized memory use in C++", CGO 2015 (added to LLVM)
-
"Predicate RCU: an RCU for scalable concurrent updates", PPoPP 2015 (BitBucket)
-
"Low-Overhead Software Transactional Memory with Progress Guarantees and Strong Semantics", PPoPP 2015 (SourceForge and Jikes RVM)
-
"More than You Ever Wanted to Know about Synchronization", PPoPP 2015 (GitHub)
-
"Roofline-aware DVFS for GPUs", ADAPT 2014 (ACM DL, Collective Knowledge repository)
-
"Many-Core Compiler Fuzzing", PLDI 2015 (example of an artifact with a CK-based experimental workflow and live results)