We now lead the Artifact Evaluation initiative and help improve submission and reviewing procedures and Artifact Appendices for several leading conferences, including CGO, PPoPP, PACT, RTSS and SC. We hope this will enable open, collaborative and reproducible computer engineering.

In 2016, we co-authored ACM's policy on Result and Artifact Review and Badging, now used at several conferences including SuperComputing'17.

In 2015, we released the 4th version of our cTuning infrastructure to enable collaborative and reproducible computer systems research: a brand new, open-source, customizable Collective Knowledge repository. It aggregates all our past developments, ideas and techniques, and lets users share, cross-link and reference any object or piece of knowledge (workloads, data sets, tools, optimization results, predictive models, etc.) as a reusable component with a unified JSON API via GitHub or BitBucket!
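For illustration, here is a minimal sketch (not official documentation) of how such a repository can be queried through the CK framework's unified Python/JSON API; the 'program' module name and the tag used below are assumptions made for the example:

    # Minimal sketch: query shared components in a Collective Knowledge repository
    # via its unified JSON API. The module name 'program' and the tag 'benchmark'
    # are illustrative assumptions.
    import ck.kernel as ck

    # Every CK call takes a dict and returns a dict; 'return' > 0 signals an error.
    r = ck.access({'action': 'search',
                   'module_uoa': 'program',
                   'tags': 'benchmark'})
    if r['return'] > 0:
        raise Exception(r.get('error', 'CK access failed'))

    # Print the unique names of the components that were found.
    for entry in r['lst']:
        print(entry['data_uoa'])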

You can easily check out, reuse and improve the workloads, artifacts and modules shared by the community! You can also find and reproduce various results from experiment crowdsourcing in the CK live repository, and even participate in crowd-tuning yourself.

You can find more info at our reproducibility wiki and manifesto.


Since 2006, we have been actively working on a collaborative and reproducible research, experimentation and publication methodology for computer engineering, in which experimental results and all related material are continuously shared, validated and improved by the community, as described in our manifesto [ACM DL, arXiv]. This work originally started as a side effect of the MHAOTEU project (1999-2001) on performance tuning of real HPC applications from supercomputer centers [J15], and continued during our MILEPOST project (2006-2009), which enabled a machine-learning-based self-tuning compiler using the public cTuning repository of optimization knowledge, a common plugin-based autotuning infrastructure, performance statistics continuously collected from multiple users (big data), and predictive analytics (statistical analysis, data mining, machine learning and feature detection) to adaptively explore and prune large optimization spaces and to predict better optimizations and hardware designs. We hope our initiative will help restore the attractiveness of computer engineering as a collaborative, reproducible, fair and scientific discipline rather than a hacking exercise, a publication machine or a monopolized business.
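As a loose illustration of this machine-learning-based approach (a sketch with invented features and flag sets, not the actual MILEPOST pipeline), program features collected from past autotuning runs can be used to train a standard classifier that predicts a promising optimization for a new program:

    # Illustrative sketch only: predict a good compiler optimization from program
    # features with an off-the-shelf classifier. Feature values and flag sets are
    # invented for the example and do not come from the MILEPOST project.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: simple program features (e.g. instruction count, loop count, memory intensity).
    X_train = [[120, 14, 0.35],
               [300, 42, 0.10],
               [ 80,  5, 0.60]]
    # Labels: best optimization found earlier by autotuning (hypothetical flag sets).
    y_train = ['-O3 -funroll-loops', '-O2', '-O3 -ftree-vectorize']

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # Predict a promising optimization for an unseen program described by its features.
    print(model.predict([[150, 20, 0.25]]))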

The main challenge we face is the variability of experimental results in a constantly changing software and hardware stack. Hence, it is not enough to simply pack and share code and data in an archive or to use a virtual machine to replay experiments, particularly during performance, energy or resiliency analysis and optimization. Based on our practical knowledge and experience, the cTuning foundation helps partners with:

If you would like to join our effort or need our help, please contact Grigori Fursin.