This wiki is maintained by cTuning foundation.
Motivation
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering as a side effect of our MILEPOST, cTuning.org and Collective Mind projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and more reliable software and hardware:
- developing public and open-source repositories of knowledge, including Collective Mind;
- developing a collaborative research and experimentation infrastructure that can share whole experimental setups with all software and hardware dependencies;
- evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [arXiv, ACM DL]);
- setting up and improving procedures for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [arXiv, ACM DL]);
- improving the sharing, description of dependencies, and statistical reproducibility of experimental results and related material.
See our manifesto and history here.
Community-driven research and development
Together with the community and the cTuning foundation, we are working on the following topics:
- developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
- describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact
- developing a specification to preserve experiments including all software and hardware dependencies
- dealing with variability and rising amounts of experimental data using statistical analysis, data mining, predictive modeling and other techniques (see the sketch after this list)
- developing new predictive analytics techniques to explore large design and optimization spaces
- validating and verifying experimental results by the community
- developing common research interfaces for existing or new tools
- developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)
- sharing rare hardware and computational resources for experimental validation
- implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
- implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
- speeding up analysis of "big" experimental data
- developing new (interactive) visualization techniques for "big" experimental data
- enabling interactive articles
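As a minimal illustration of the statistical treatment of variability mentioned above, the following hypothetical Python sketch summarizes repeated measurements of a single experiment with a mean, standard deviation and an approximate 95% confidence interval. The function name and sample values are our own illustrative assumptions, not part of any cTuning tool.

# Minimal sketch: summarizing repeated benchmark measurements to deal with
# run-to-run variability. The sample values below are hypothetical.
import statistics

def summarize(times):
    """Return mean, standard deviation and a rough 95% confidence interval
    (normal approximation) for a list of repeated measurements."""
    mean = statistics.mean(times)
    stdev = statistics.stdev(times)
    half_width = 1.96 * stdev / len(times) ** 0.5
    return mean, stdev, (mean - half_width, mean + half_width)

# Hypothetical execution times (in seconds) of one benchmark/optimization pair
runs = [1.02, 0.98, 1.05, 1.01, 0.97, 1.10, 0.99, 1.03]
mean, stdev, ci = summarize(runs)
print("mean=%.3fs stdev=%.3fs 95%% CI=(%.3f, %.3f)" % (mean, stdev, ci[0], ci[1]))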
Our interdisciplinary events
Featuring our new open publication model and validation of experimental results
- PPoPP'15 artifact evaluation
- CGO'15 artifact evaluation
- ADAPT'15 @ HiPEAC'15 - workshop on adaptive self-tuning computer systems with our new publication model!
- ADAPT'14 @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [program and publications]
Discussing technical aspects to enable reproducibility and the open publication model
- Special journal issue on Reproducible Research Methodologies at IEEE TETC
- ACM SIGPLAN TRUST'14 @ PLDI'14
- REPRODUCE'14 @ HPCA'14
- ADAPT'14 panel @ HiPEAC'14
- HiPEAC'13 CSW thematic session @ ACM ECRC "Making computer engineering a science"
- HiPEAC'12 CSW thematic session
- ASPLOS/EXADAPT'12 panel @ ASPLOS'12
- cTuning lectures (2008-2010)
- GCC Summit'09 discussion
Steering committee
- Grigori Fursin, cTuning foundation and INRIA, France
- Christophe Dubach, University of Edinburgh, UK
We would like to thank our colleagues from artifact-eval.org, the OCCAM project and the cTuning foundation for their help, frequent participation and support.
Paper and artifact evaluation committee
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a pool of our supporters based on submitted and publicly available publications, their keywords and public discussions, as described in our proposal [arXiv], [ACM DL]. Validated papers receive a "Validated by the community" stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, Collective Mind Repository or any other public archive.
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described here.
Packing and sharing research and experimental material
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (packed using any publicly available tool) and a readme.txt file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [arXiv, ACM DL]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the ADAPT workshop and from discussions at the ACM SIGPLAN TRUST'14 workshop. See also helpful guidelines for packing code and data along with publications here.
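As one possible (purely illustrative) way to follow this procedure, the hypothetical Python sketch below bundles an experiment directory together with its readme.txt into a single compressed archive. The directory layout and file names are assumptions for this example, not a required format.

# Hypothetical sketch: packing an experimental setup for sharing as an archive
# that contains all artifacts plus a readme.txt with validation instructions.
import tarfile
from pathlib import Path

def pack_experiment(experiment_dir, archive_name):
    exp = Path(experiment_dir)
    # Expect a readme.txt describing how to validate the experiments
    if not (exp / "readme.txt").exists():
        raise FileNotFoundError("readme.txt with validation instructions is missing")
    # Bundle all artifacts (benchmarks, datasets, scripts, raw results) into one archive
    with tarfile.open(archive_name, "w:gz") as tar:
        tar.add(exp, arcname=exp.name)

# Hypothetical usage:
# pack_experiment("adapt14-paper1-artifacts", "adapt14-paper1-artifacts.tar.gz")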
Validation
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. Authors of two of the nine accepted papers at our ADAPT'14 workshop agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we decided to ask authors to pack all their research artifacts as they wish (for example, using a shared virtual machine or a standard archive) and describe their own validation procedure. Thanks to our volunteers, experiments from these papers have been validated, archives shared in our public repository, and papers marked with a "validated by the community" stamp:
Resources
- Collection of related initiatives
- Collection of related tools
- Collection of related benchmarks and data sets
- Collection of public repositories
- Collection of related lectures
- Collection of related articles
- Collection of related blogs
- Collection of related events
Archive
- Outdated cTuning wiki page related to reproducible research and open publication model
- Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [database, web-service for online prediction of optimizations]