Artifact Evaluation for Computer Systems Research
We work with the community and ACM to improve the methodology and tools for reproducible experimentation, artifact submission and reviewing, and open challenges!
Sponsors and supporters
If you would like to sponsor this community service, including prizes for the highest-ranked artifacts and support for open-source technology for collaborative, customizable and reproducible experimentation, please get in touch with the AE steering committee!

Upcoming AE-related events

Recently completed Artifact Evaluation

CGO 2017 - see accepted artifacts here.

Distinguished artifact, implemented using the Collective Knowledge framework (a reusable and customizable workflow with automatic cross-platform software installation and a web-based experimental dashboard):
"Software Prefetching for Indirect Memory Accesses", Sam Ainsworth and Timothy M. Jones [ GitHub, Paper with AE appendix and CK workflow, PDF snapshot of the interactive CK dashboard, CK concepts ]
($500 from dividiti)

PPoPP 2017 - see accepted artifacts here.

Distinguished artifact: "Understanding the GPU Microarchitecture to Achieve Bare-Metal Performance Tuning", Xiuxia Zhang, Guangming Tan, Shuangbai Xue, Jiajia Li, Mingyu Chen
(NVIDIA Pascal Titan X GPGPU card presented by Steve Keckler from NVIDIA)

PACT 2016 - see accepted artifacts here.

Highest-ranked artifact: "Fusion of Parallel Array Operations", Mads R. B. Kristensen, James Avery, Simon Andreas Frimann Lund and Troels Blum
(NVIDIA GPGPU)

ADAPT 2016 - see open reviewing and discussions via Reddit, all accepted artifacts validated by the community here, and the motivation for our open reviewing and publication model.

Highest-ranked artifact with a customizable workflow implemented using CK: "Integrating a large-scale testing campaign in the CK framework", Andrei Lascu, Alastair F. Donaldson
(award from dividiti)

[ See all prior AE here ]

Recent events

Motivation

Reproducing experimental results from computer systems papers and building upon them is becoming extremely challenging and time-consuming. Major issues include ever-changing and possibly proprietary software and hardware, lack of common tools and interfaces, stochastic behavior of computer systems, lack of a common experimental methodology, and lack of universally accepted mechanisms for knowledge exchange [1, 2].

We are organizing Artifact Evaluation to help authors get their techniques and tools validated by independent reviewers - please check the "submission" and "reviewing" guidelines for further details. Papers that successfully pass the Artifact Evaluation process receive a seal of approval printed on the papers themselves (we are discussing with ACM how to unify this stamp and include it directly in the Digital Library). Authors are also invited (but not obliged) to share their artifacts along with their publications, for example as supplementary material in digital libraries. We hope that this initiative will help make artifacts as important as papers while gradually solving numerous reproducibility issues in our research.

We see Artifact Evaluation as a continuous learning process - our eventual goal is to collaboratively develop a common methodology for artifact sharing and reproducible experimentation in computer systems research. Your feedback is essential to make it happen! If you have any questions, comments or suggestions, do not hesitate to get in touch, participate in public discussions (LinkedIn, wiki, mailing list), submit patches for the artifact templates on GitHub, join us at related events, and check out our supporting technology (OCCAM, Collective Knowledge, CK-WA).

We would like to thank Prof. Shriram Krishnamurthi and all our colleagues for very fruitful discussions and feedback!

Maintained by the cTuning Foundation (a non-profit R&D organization) and volunteers!
Powered by Collective Knowledge