Enabling collaborative, systematic and reproducible research and experimentation with an open publication model in computer engineering
This wiki is maintained by the cTuning foundation. If you would like to help or make corrections, please get in touch with Grigori Fursin.
Manifesto / motivation
Rather than writing yet another manifesto on reproducible research and experimentation in computer engineering, we have been working on the technical aspects of sharing and reproducing experimental results and all related material (artifacts) in computer engineering since 2007, as a side effect of our MILEPOST and cTuning.org projects. We attempted to build a practical machine-learning-based self-tuning compiler that combines a plugin-based auto-tuning framework with the public cTuning repository of knowledge and crowdsourced predictive analytics, but we faced numerous problems, including:
- Lack of common, large and diverse benchmarks and data sets needed to build statistically meaningful predictive models;
- Lack of common experimental methodology and unified ways to preserve, systematize and share our growing optimization knowledge and research material including benchmarks, data sets, tools, tuning plugins, predictive models and optimization results;
- Continuously changing, complex and "black box" software and hardware stacks with many hardwired and hidden optimization choices and heuristics, which are not well suited for auto-tuning and machine learning;
- Difficulty reproducing performance results submitted by users to the cTuning.org database due to missing descriptions of the full software and hardware dependencies;
- Difficulty validating auto-tuning and machine learning techniques from existing publications due to the lack of a culture of sharing research artifacts and full experiment specifications along with publications in computer engineering.
Validation
After many years of evangelizing collaborative and reproducible research in computer engineering based on this practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. At our ADAPT'14 workshop, the authors of two papers (out of nine accepted) agreed to have their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked the authors to pack all their research artifacts as they wished (for example, as a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to our volunteers, the experiments from these papers have been validated, the archives have been shared in our public repository, and the papers have been marked with a "validated by the community" stamp as seen at the top of this page.
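To make the idea of an author-defined validation procedure more concrete, below is a minimal sketch of what such a script might look like; the file names, build command and reference output are assumptions for illustration only and are not taken from any ADAPT'14 submission.

# validate.py - hypothetical validation driver shipped inside an artifact archive;
# all names and commands below are illustrative assumptions.
import subprocess
import sys

# Rebuild the benchmark inside the unpacked artifact (an assumed Makefile).
subprocess.run(["make", "-C", "benchmark"], check=True)

# Run the experiment and capture its output.
result = subprocess.run(["./benchmark/run.sh"], capture_output=True, text=True, check=True)

# Compare against the reference output shipped with the artifact.
with open("expected_output.txt") as f:
    reference = f.read()

if result.stdout.strip() == reference.strip():
    print("Validation PASSED: output matches the reference results.")
else:
    print("Validation FAILED: output differs from the reference results.")
    sys.exit(1)

An evaluator then only needs to unpack the archive and run a single script, which keeps validation manageable even when the underlying experiment is complex.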
Our events
Featuring the new open publication model
- ADAPT'15 - workshop on adaptive self-tuning computer systems. The workshop proposal is currently under submission, and the workshop will likely be co-located with HiPEAC'15.
Discussing technical aspects of enabling reproducibility and an open publication model
- ACM SIGPLAN TRUST'14 - 1st Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering [ program and publications ]
Featuring validation of experimental results in computer engineering
- ADAPT'14 - workshop on adaptive self-tuning computer systems [ program and publications ]
- PLDI'14 - conference on programming language design and implementation
- OOPSLA'13 - conference on object-oriented programming, systems, languages, and applications
Committee
We collaborate with our colleagues from AEC who recently managed to persuade the following conferences to join a similar initiative:
Packing and sharing research and experimental material
Working on technical aspects
Together with the community, we are gradually trying to address the following challenges that we faced during our R&D:
- capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
- describe and catalog whole experimental setups with all related material, including algorithms, benchmarks, codelets, datasets, tools, models and any other artifacts (see the metadata sketch after this list)
- enable community-driven validation and verification of experimental results
- develop common research interfaces for existing or new tools
- develop common experimental frameworks and repositories
- share rare hardware and computational resources for experimental validation
- deal with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques
- implement previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
- implement open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
- enable interactive articles
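To illustrate what describing and cataloging an experimental setup could look like in practice, below is a minimal, purely hypothetical meta-description written as a small machine-readable record; the field names, values and file name are assumptions and do not reflect the actual Collective Mind format.

# Hypothetical meta-description of one experiment; the schema is illustrative only
# and is not the real cTuning/Collective Mind format.
import json

experiment = {
    "benchmark": "matmul-codelet",                     # assumed benchmark name
    "dataset": "matrix-1024x1024",                     # assumed data set identifier
    "compiler": {"name": "gcc", "version": "4.8.2", "flags": "-O3 -funroll-loops"},
    "hardware": {"cpu": "generic-x86_64", "frequency_mhz": 2400},
    "tools": ["perf"],
    "results": {"execution_time_s": [1.92, 1.95, 1.91]},  # repeated measurements
}

# Store the record so it can be shared, searched and reused by others.
with open("experiment_meta.json", "w") as f:
    json.dump(experiment, f, indent=2)

Keeping such records alongside the artifacts makes it possible to search, compare and reproduce experiments long after the original software and hardware stack has changed.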
Our expertise and work
- Set up evaluation of experimental results and all related material for workshops, conferences and journals
- Improve sharing, description of dependencies, and statistical reproducibility of experimental results and related material (see the measurement sketch after this list)
- Improve the public Collective Mind repository of knowledge and the collaborative experimentation infrastructure in computer engineering
- Validate the new open publication model
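As a sketch of what checking the statistical reproducibility of a measurement might involve, the following minimal example repeats a timing measurement and flags high run-to-run variability; the measured command, the number of repetitions and the 5% threshold are illustrative assumptions.

# Hypothetical sketch: repeat a measurement and report its variability;
# the command, repetition count and threshold are illustrative assumptions.
import statistics
import subprocess
import time

def measure_once():
    # Time one run of an assumed benchmark script.
    start = time.perf_counter()
    subprocess.run(["./benchmark/run.sh"], check=True)
    return time.perf_counter() - start

times = [measure_once() for _ in range(10)]
mean = statistics.mean(times)
stdev = statistics.stdev(times)

print(f"mean = {mean:.4f} s, stdev = {stdev:.4f} s")
if stdev / mean > 0.05:
    print("Warning: high run-to-run variability; more repetitions or a more controlled environment may be needed.")

In practice, more robust approaches (confidence intervals, non-parametric tests, detection of multi-modal distributions) are often needed, but even this simple check helps expose unstable results.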
Community-driven reviewing of publications and artifacts
Pool
Packing artifacts for evaluation
- Links to tools for possible packing