Here we provide a few informal suggestions to help you fill in your AE template for submission.
They are based on our past Artifact Evaluations and our personal experience in crowdsourcing and reproducing experiments,
and they should help you avoid common pitfalls and reduce the reviewers' burden.
Briefly and informally describe your artifact, including its minimal hardware and software requirements,
how it supports your paper, how it can be validated, and what the expected results are. This abstract will be used
to select appropriate reviewers.
Artifact check-list (meta-information)
Together with the artifact abstract, this informal check-list will help us make sure that reviewers
have the appropriate competency as well as the technology needed to evaluate your artifact.
It can also serve as meta information
to find your artifact in Digital Libraries (under discussion/development).
It was created based on past AE experience and your feedback to cover most
artifacts in computer systems research, including SW/HW co-design, benchmarking, design space exploration,
autotuning, architecture simulation, run-time adaptation, and more.
Fill in whatever is applicable with informal keywords and remove unrelated items
(please treat the questions below simply as informal hints
about what reviewers are usually concerned about):
Compilation: Do you present or require a specific compiler? Public/private? Is it included? Which version?
Transformations: Do you present or require a program transformation tool (source-to-source, binary-to-binary, compiler pass, etc)?
Public/private? Is it included? Which version?
Binary: Are binaries included? OS-specific? Which version?
Data set: Do you use specific data sets (for example, cTuning data sets)?
Are they included? If not, how can they be downloaded and installed?
What is their approximate size?
Run-time environment: Is your artifact OS-specific (Linux, Windows, MacOS, Android, etc)?
Which version? What are the main software dependencies (JIT, libs, run-time adaptation frameworks, etc)?
Do you need root access?
Hardware: Do you need specific hardware (supercomputer, architecture simulator, CPU, GPU, neural network accelerator, FPGA)
or specific features (such as hardware counters
to measure power consumption, or access to CPU/GPU frequency)?
Are they publicly available?
Run-time state: Is your artifact sensitive to run-time state (cold/hot cache, network/cache contention, etc)?
Execution: Any specific conditions during execution (sole user, process pinning, profiling, adaptation, etc)?
Output: What is your output (console, file, table, graph) and what is your result
(exact output, measured characteristic, etc)?
Download the package from a private website (you will need to send the AE chairs information on how to access your artifact).
Access the artifact via a private machine with pre-installed software (only when access to rare hardware is required or proprietary
software is used; you will need to send the AE chairs the information and credentials needed to access your machine).
Please describe the approximate disk space required after unpacking your artifact
(to avoid surprises when an artifact requires 20GB of free space). We do not have
a strict limit, but we strongly suggest limiting the total size to several GB and avoiding
the inclusion of unnecessary software in your VM images.
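If you are unsure about the number, you can measure the unpacked size directly; the short Python sketch below walks a directory tree and reports the total in GB (the './my-artifact' path is only a placeholder for your unpacked artifact).

import os

def unpacked_size_gb(root):
    # Sum the size of all regular files under root, skipping symlinks to avoid double counting.
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total / (1024 ** 3)

print(f"Unpacked artifact size: {unpacked_size_gb('./my-artifact'):.2f} GB")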
Describe any specific hardware and its features
strictly required to evaluate your artifact
(vendor, CPU/GPU/FPGA, number of processors/cores, interconnect, memory,
hardware counters, etc).
Describe any specific OS and software packages required to evaluate your
artifact. This is particularly important if you share source code
that has to be rebuilt or if you rely on proprietary software that you
cannot include in your package. In such cases, we strongly suggest describing
where to obtain and how to install all third-party tools.
Note that we are trying to obtain AE licenses for some commonly used proprietary tools and benchmarks
(you will be informed in case of a positive outcome).
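A small script that checks for the required tools before the build can also save reviewers time; the Python sketch below is only an illustration, and the listed tools are hypothetical placeholders for your actual dependencies.

import shutil, subprocess, sys

# Hypothetical dependencies; replace with the tools your artifact actually needs.
REQUIRED_TOOLS = ["cmake", "gcc", "python3"]

missing = []
for tool in REQUIRED_TOOLS:
    if shutil.which(tool) is None:
        missing.append(tool)
    else:
        # Print the first line of each tool's version banner for the reviewer's log.
        version = subprocess.run([tool, "--version"], capture_output=True, text=True).stdout.splitlines()[0]
        print(f"found {tool}: {version}")

if missing:
    sys.exit("missing dependencies: " + ", ".join(missing))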
If third-party data sets are not included in your package (for example,
because they are very large or proprietary), please provide details on how to download
and install them.
In the case of proprietary data sets, we suggest providing reviewers with
a publicly available alternative subset for evaluation.
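When pointing reviewers at an external download, a short fetch-and-verify script removes guesswork; the sketch below uses a hypothetical URL and checksum that you would replace with your own.

import hashlib, tarfile, urllib.request

# Hypothetical data-set location and checksum; substitute your real values.
URL = "https://example.org/datasets/sample-inputs.tar.gz"
SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

archive, _ = urllib.request.urlretrieve(URL, "sample-inputs.tar.gz")

# Verify the download before unpacking.
digest = hashlib.sha256(open(archive, "rb").read()).hexdigest()
assert digest == SHA256, "checksum mismatch: corrupted or wrong download"

with tarfile.open(archive) as tar:  # unpack into ./datasets
    tar.extractall("datasets")
print("data set installed under ./datasets")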
Describe the experiment workflow and how it is implemented,
invoked, and customized (if needed), e.g., via OS scripts,
a portable CK workflow, etc.
See, as an example, the experimental workflow
for multi-objective and machine-learning based autotuning.
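Independently of that example, the driver can be quite small; the following Python sketch is only an informal illustration of such a workflow (the benchmark source, compiler, and candidate flags are hypothetical). It compiles one benchmark with several optimization flags, times each run, and records the raw results for reviewers.

import json, subprocess, time

# Hypothetical benchmark and candidate optimization flags; replace with your own.
BENCH_SRC, BENCH_BIN = "bench.c", "./bench"
CANDIDATE_FLAGS = ["-O1", "-O2", "-O3", "-O3 -funroll-loops"]

results = []
for flags in CANDIDATE_FLAGS:
    # Rebuild with the candidate flags, then time one run of the benchmark.
    subprocess.run(["gcc", *flags.split(), BENCH_SRC, "-o", BENCH_BIN], check=True)
    start = time.perf_counter()
    subprocess.run([BENCH_BIN], check=True)
    results.append({"flags": flags, "seconds": time.perf_counter() - start})

json.dump(results, open("results.json", "w"), indent=2)  # raw data for reviewers
print("best configuration:", min(results, key=lambda r: r["seconds"]))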
Describe all steps necessary to evaluate your artifact
using the workflow above. Also describe whether reviewers
will need to replicate the output (exact result match) or reproduce it
(with possibly varying results or different experimental conditions).
Finally, describe the expected results as well as the allowable variation
(particularly important for performance numbers and speed-ups).
This part is currently optional, since it is not always trivial.
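One way to make the allowable variation concrete is to ship a small checker alongside the results; the Python sketch below compares a measured speed-up against a claimed value with a tolerance, reusing the hypothetical results.json file from the workflow sketch above (the expected speed-up and tolerance are invented for illustration).

import json

EXPECTED_SPEEDUP = 2.5   # hypothetical value claimed in the paper
TOLERANCE = 0.15         # accept up to 15% variation across machines

results = json.load(open("results.json"))
baseline = next(r["seconds"] for r in results if r["flags"] == "-O1")
best = min(r["seconds"] for r in results)
speedup = baseline / best

print(f"measured speed-up: {speedup:.2f}x (expected about {EXPECTED_SPEEDUP}x)")
print("PASS" if speedup >= EXPECTED_SPEEDUP * (1 - TOLERANCE) else "FAIL: outside the allowable variation")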
If possible, describe how to customize your workflow, for example, whether
it is possible to use different data sets, benchmarks, real applications,
predictive models, software environments (compilers, libraries,
run-time systems), hardware, etc. Also describe whether it is possible
to parameterize your workflow (whatever is applicable, such as
changing the number of threads, optimizations, CPU/GPU frequency,
accuracy, autotuning scenario, etc); a small parameterization sketch follows below. See this
CGO'17 distinguished artifact
as an example of a portable, customizable and reusable workflow.
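Exposing such knobs can be as lightweight as a few command-line options; the Python sketch below shows one possible interface (the option names and the './bench' entry point are hypothetical).

import argparse, subprocess

# Hypothetical command-line interface for a parameterized workflow driver.
parser = argparse.ArgumentParser(description="run one experiment with user-chosen parameters")
parser.add_argument("--threads", type=int, default=4, help="number of worker threads")
parser.add_argument("--dataset", default="datasets/sample-inputs", help="input data set directory")
parser.add_argument("--repetitions", type=int, default=3, help="how many times to repeat the run")
args = parser.parse_args()

for _ in range(args.repetitions):
    # './bench' and its options stand in for the artifact's real entry point.
    subprocess.run(["./bench", "--input", args.dataset, "--threads", str(args.threads)], check=True)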
You can add informal notes for reviewers to draw
their attention to known or possible issues (particularly
if you plan to continue working on them after submission).