Researchers, engineers, and students struggle to reproduce experimental results and reuse research code from scientific papers due to continuously changing software and hardware, the lack of common APIs, the stochastic behavior of computer systems, and the absence of a common experimental methodology. That is why we set up the Artifact Evaluation process at conferences: it helps the community validate results from accepted papers with the help of independent evaluators, while we collaborate with ACM and IEEE on a common methodology, a reproducibility checklist, and tools to automate this tedious process. Papers that successfully pass such an evaluation receive a set of ACM reproducibility badges printed on the papers themselves:
Please check our "submission" and "reviewing" guidelines for more details. If you have questions or suggestions, do not hesitate to join our public discussions via this Artifact Evaluation Google group and/or the LinkedIn group.