Enabling collaborative, systematic and reproducible research and experimentation with an open publication model in computer engineering
This wiki is maintained by the cTuning foundation. If you would like to help or make corrections, please get in touch with Grigori Fursin.
Contents
Motivation / manifesto
Rather than writing yet another manifesto complaining about reproducibility problems in computer engineering, we have been trying to solve them with the community since 2006 as a side effect of our MILEPOST, cTuning.org and Collective Mind projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, predictive analytics and crowdsourcing), as briefly described here.
Our expertise and initiatives
We and our supporters are focusing on:
- Evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [arXiv, ACM DL])
- Setting up and improving procedures for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [arXiv, ACM DL])
- Developing public and open source repositories of knowledge including Collective Mind
- Developing collaborative research and experimentation infrastructure that can share whole experimental setups
- Improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material (see the sketch after this list)
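To illustrate what we mean by statistical reproducibility, here is a minimal, purely hypothetical sketch (not part of any cTuning tool) that reports a measured characteristic such as execution time as a mean with a simple 95% confidence interval over repeated runs, instead of a single number:

```python
# Minimal sketch (hypothetical, not tied to any specific cTuning tool):
# report a measured characteristic together with its variation across
# repeated runs instead of reporting a single number.
import statistics

def summarize_runs(measurements):
    """Return mean, standard deviation and a simple 95% confidence interval."""
    n = len(measurements)
    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements) if n > 1 else 0.0
    half_width = 1.96 * stdev / (n ** 0.5) if n > 1 else 0.0
    return {"runs": n,
            "mean": mean,
            "stdev": stdev,
            "ci95": (mean - half_width, mean + half_width)}

if __name__ == "__main__":
    # Execution times (in seconds) of the same experiment repeated 8 times.
    times = [1.22, 1.25, 1.19, 1.31, 1.24, 1.27, 1.21, 1.26]
    print(summarize_runs(times))
```

Reporting variation in this way makes it possible to distinguish a genuine speedup from run-to-run noise.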
Together with the community and the cTuning foundation, we are gradually trying to address the following challenges that we have faced during our 20 years of R&D:
- develop tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
- describe and catalog whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact
- develop a specification to preserve experiments including all software and hardware dependencies (see the sketch after this list)
- deal with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques
- develop new predictive analytics techniques to explore large design and optimization spaces
- validate and verify experimental results by the community
- develop common research interfaces for existing or new tools
- develop common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)
- share rare hardware and computational resources for experimental validation
- implement previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
- implement open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
- speed up analysis of "big" experimental data
- develop new (interactive) visualization techniques for "big" experimental data
- enable interactive articles
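To make the dependency challenge above more concrete, here is a minimal, purely illustrative sketch of how an experimental setup could be described with explicit software and hardware dependencies; all field names are our assumptions and do not correspond to the actual Collective Mind format:

```python
# Hypothetical meta-description of one experimental setup; the field names
# are illustrative and do not correspond to the actual Collective Mind format.
import json

experiment = {
    "name": "matmul-autotuning",
    "workload": {"benchmark": "matmul", "dataset": "1024x1024-float"},
    "software_dependencies": [
        {"name": "gcc", "version": "4.8.2", "flags": "-O3"},
        {"name": "linux-kernel", "version": "3.13"},
    ],
    "hardware": {"cpu": "Intel Core i7-3770", "frequency_mhz": 3400, "memory_gb": 16},
    "measurements": {"repetitions": 10, "characteristic": "execution_time_sec"},
}

def missing_fields(meta, required=("workload", "software_dependencies", "hardware")):
    """Report which top-level fields needed for re-execution are absent."""
    return [field for field in required if field not in meta]

if __name__ == "__main__":
    print(json.dumps(experiment, indent=2))
    print("missing:", missing_fields(experiment))
```

A machine-readable description of this kind is what would allow an experiment to be automatically re-executed and validated on another machine.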
We are currently developing the open-source Collective Mind repository of knowledge and infrastructure for collaborative and reproducible research, development and experimentation in computer engineering.
Our events
Featuring new open publication model and validation of experimental results
- ADAPT'15 - workshop on adaptive self-tuning computer systems. It is currently under submission and will likely be co-located with HiPEAC'15.
- ADAPT'14 - workshop on adaptive self-tuning computer systems [ program and publications ]
Discussing technical aspects to enable reproducibility and open publication model
- Special journal issue on Reproducible Research Methodologies at IEEE TETC
- ACM SIGPLAN TRUST'14 @ PLDI'14
- REPRODUCE'14 @ HPCA'14
- ADAPT'14 panel @ HiPEAC'14
- HiPEAC'13 CSW thematic session @ ACM ECRC "Making computer engineering a science"
- HiPEAC'12 CSW thematic session
- ASPLOS/EXADAPT'12 panel @ ASPLOS'12
- cTuning lectures (2008-2010)
- GCC Summit'09 discussion
Reproducible Research Committee
Steering committee
- Grigori Fursin, cTuning foundation and INRIA, France (focusing on technical aspects of collaborative and reproducible research in computer engineering)
- Christophe Dubach, University of Edinburgh, UK
- Our colleagues and collaborators from AEC
- Our colleagues and collaborators from OCCAM project
Artifact evaluation committee
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a pool of our supporters based on submitted publications and their keywords, as discussed in our vision paper on a new publication model [arXiv, ACM DL].
Packing and sharing research and experimental material
Rather than enforcing a specific procedure for packing, sharing and validation of experimental results, we allow authors of accepted papers to include an archive with all related research material (using any publicly available tool) and a readme.txt file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning or when new hardware is being co-designed, as we discuss in our proposal [arXiv, ACM DL]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the ADAPT workshop and from discussions during the ACM SIGPLAN TRUST'14 workshop.
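As a purely illustrative example of such a submission (the layout, file names and validation steps below are our assumptions, not a required format), the following sketch packs research material together with a readme.txt describing how to validate the experiments:

```python
# Hypothetical sketch of packing research material for artifact evaluation:
# an archive containing the artifacts plus a readme.txt with validation steps.
# The layout, file names and steps are assumptions, not a required format.
import tarfile
from pathlib import Path

def pack_artifacts(source_dir, readme_text, archive_name="artifacts.tar.gz"):
    """Write a readme.txt into source_dir and pack the directory into one archive."""
    source = Path(source_dir)
    source.mkdir(parents=True, exist_ok=True)
    (source / "readme.txt").write_text(readme_text)
    with tarfile.open(archive_name, "w:gz") as archive:
        archive.add(source, arcname=source.name)
    return archive_name

if __name__ == "__main__":
    readme = (
        "1. Install the dependencies listed in deps.txt\n"
        "2. Run build.sh to compile the benchmarks\n"
        "3. Run experiment.sh and compare the output with expected_results.txt\n"
    )
    print(pack_artifacts("my-paper-artifacts", readme))
```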
History and motivation
In the MILEPOST project we attempted to build a practical machine-learning-based self-tuning compiler combining a plugin-based auto-tuning framework with the public cTuning repository of knowledge and crowdsourced predictive analytics, but faced numerous problems including:
- Lack of common, large and diverse benchmarks and data sets needed to build statistically meaningful predictive models;
- Lack of common experimental methodology and unified ways to preserve, systematize and share our growing optimization knowledge and research material including benchmarks, data sets, tools, tuning plugins, predictive models and optimization results;
- Problems with a continuously changing, complex and "black box" software and hardware stack with many hardwired and hidden optimization choices and heuristics not well suited for auto-tuning and machine learning;
- Difficulty reproducing performance results submitted by users to the cTuning.org database due to missing descriptions of the full software and hardware dependencies;
- Difficulty validating related auto-tuning and machine learning techniques from existing publications due to the lack of a culture of sharing research artifacts and full experiment specifications along with publications in computer engineering.
Our new proposal to crowdsource reviewing of publications and artifacts
Validation
After many years of evangelizing collaborative and reproducible research in computer engineering based on the practical experience presented above, we are finally starting to see a change in mentality in academia, industry and funding agencies. At our ADAPT'14 workshop, the authors of two papers (out of nine accepted) agreed to have their papers validated by volunteers. Note that rather than enforcing specific validation rules, we decided to ask authors to pack all their research artifacts as they wish (for example, using a shared virtual machine or a standard archive) and describe their own validation procedure. Thanks to our volunteers, the experiments from these papers have been validated, the archives shared in our public repository, and the papers marked with a "validated by the community" stamp as seen at the top of this page.