From cTuning.org
Revision as of 10:44, 31 March 2013
New publication model for computer engineering that favors reproducible experimental results validated by the community
With our background in physics, we found it extremely disappointing that reproducibility, sharing and statistical rigor of results are rarely considered in computer engineering. In fact, they are often simply impossible due to the lack of common tools and data repositories. Therefore, we decided to develop cTuning technology for collaborative and reproducible research, which helps to systematize computer engineering and enables a new publication model that favors sharing of data, models, tools and interfaces for validation and reproducibility by the community.
Since 2008, we have released all our benchmarks (cBench), data sets (cDatasets/KDataSets), tools, and the experimental data used in our publications in the cTuning repository. Since 2005, we have also made our cTuning-related lectures available online. This model resulted in multiple collaborative projects to improve predictive models and tools for designing and optimizing computer systems together with IBM (Israel), Google (USA), ICT (China), University of Edinburgh (UK), UPC (Spain), CAPS Entreprise (France), ISP RAS (Russia), Intel (Illinois), Ghent University (Belgium), UVSQ (France), NCAR (USA), ARC/Synopsys (UK) and others. This topic has been accepted for the HiPEAC3 network of excellence (2012-2015), and we are currently building a community around this model.
Next event
- 2-3/May/2013 - [http://cTuning.org/making-computer-engineering-a-science-2013 Thematic session] at HiPEAC Computing Week/ACM ECRC 2013 in Paris, France
If you are interested in joining this effort, please get in touch with Grigori Fursin.
Current work in progress
We are starting to build an academic and industrial workgroup interested in:
- setting up a practical public repository and infrastructure to share and reproduce experimental results from the community
- sharing benchmarks, data sets, tools, interfaces, predictive models, etc. to prepare a common experimental methodology
- pushing forward a new publication model for HiPEAC where experimental results are validated by the community before being published (similar to conferences and journals in statistics, machine learning, biology, etc.); the main challenge is to enable very simple validation across multiple ever-changing architectures, tools, benchmarks and data sets
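To make the idea of community validation concrete, here is a minimal sketch of what a shared, reproducible experiment record and an acceptance check might look like. None of the field names, values or tolerances below come from cTuning or Collective Mind; they are purely hypothetical, chosen only to illustrate that each result must carry its full setup (benchmark, data set, architecture, compiler) plus basic statistics before anyone else can meaningfully validate it.

```python
import json
import statistics

# Hypothetical record format (all field names and values are illustrative,
# not taken from the cTuning repository).
experiment = {
    "benchmark": "susan_corners",                 # assumed benchmark name
    "dataset": "image-01",                        # assumed data set identifier
    "architecture": "arm-cortex-a9",              # assumed target platform
    "compiler": "gcc 4.7 -O3",                    # assumed toolchain + flags
    "repetitions_sec": [1.92, 1.95, 1.90, 1.93],  # measured run times
}

def summarize(record):
    """Attach mean and standard deviation so reproductions can be
    compared statistically rather than by a single run."""
    times = record["repetitions_sec"]
    record["mean_sec"] = statistics.mean(times)
    record["stdev_sec"] = statistics.stdev(times)
    return record

def validates(original, reproduced, tolerance=0.05):
    """Accept a reproduction if it used the same setup and its mean run
    time is within a relative `tolerance` of the original result."""
    same_setup = all(original[k] == reproduced[k]
                     for k in ("benchmark", "dataset",
                               "architecture", "compiler"))
    close = (abs(original["mean_sec"] - reproduced["mean_sec"])
             <= tolerance * original["mean_sec"])
    return same_setup and close

record = summarize(experiment)
print(json.dumps(record, indent=2))
```

A real framework would of course need far richer metadata (OS, libraries, data set checksums, hardware counters) and per-metric validation rules; the point here is only that validation becomes a mechanical check once results are shared in a structured form.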
Comments
- Grigori: after the keynote, I had interesting discussions with mathematicians from NTU during the conference on [http://goo.gl/iutx auto-tuning] in Taiwan. They mentioned a [http://www.jstatsoft.org journal in statistics] where results have to be validated by the community. However, we agreed that the biggest challenge in computer engineering is not in reproducing a similar scheme, but in enabling validation of results across many architectures, tools and data sets, which is impossible right now without a special framework and repository. That is why there was considerable interest in the new plugin-based Collective Mind framework, which is intended to enable collaborative and reproducible research while supporting most architectures, operating systems (including off-the-shelf Android mobiles), compilers, run-time systems, benchmarks, data sets, etc. I plan to pre-release this framework to a broader community during the next HiPEAC Computing Week to continue this collaborative effort.
People
- We would like to thank [http://ctuning.org/lab/people all our colleagues] who provided valuable feedback on cTuning technology, and evaluated, validated or extended various collaborative tools, benchmarks, data sets and predictive models for program and architecture optimization!