From cTuning.org

This wiki is dedicated to:
* ''New publication model for computer engineering that favors reproducible experimental results validated by the community''  
* ''New plugin-based open source collaborative R&D infrastructure and repository to systematize design and optimization of computer systems''
With our background in physics, we found it extremely disappointing that reproducibility, sharing, and statistical rigor of results are rarely considered in computer engineering. In fact, they are often simply impossible to achieve due to the lack of common tools and data repositories. We therefore decided to develop cTuning technology for collaborative and reproducible research that helps to systematize computer engineering and enables a new publication model favoring the sharing of data, models, tools and interfaces for validation and reproducibility by the community.
Designing novel many-core computer systems has become intolerably complex, ad-hoc, costly and error-prone due to the limitations of available technology, the enormous number of design and optimization choices, and complex interactions between all software and hardware components. Empirical auto-tuning combined with run-time adaptation and machine learning has demonstrated good potential to address these challenges for more than a decade, but it is still far from widespread production use due to unbearably long exploration and training times, ever-changing tools and their interfaces, the lack of a common experimental methodology, and the lack of unified mechanisms for knowledge building and exchange apart from publications, where reproducibility of results is often not even considered.
Since 2008, we have released all our benchmarks (cBench), datasets (cDatasets/KDataSets), tools, and the experimental data used in most of our publications in the cTuning repository. Since 2005, we have also made our cTuning-related lectures available online. This model has resulted in multiple collaborative projects to improve predictive models and tools to design and optimize computer systems together with IBM (Israel), Google (USA), ICT (China), University of Edinburgh (UK), UPC (Spain), CAPS Entreprise (France), ISP RAS (Russia), Intel (Illinois), Ghent University (Belgium), UVSQ (France), NCAR (USA), ARC/Synopsys (UK) and others. This topic has been accepted for the HiPEAC3 network of excellence (2012-2015), and we are currently building a community around this model.
We strongly believe that the time has come to start collaborative systematization and unification of the design and optimization of computer systems, combined with a new publication model where experimental results are validated by the community. One promising solution is to combine a public repository of knowledge with online auto-tuning, machine learning and crowdsourcing techniques, with which the HiPEAC and cTuning communities already have good practical experience. Such a collaborative approach should allow the community to continuously validate, systematize and improve collective knowledge about computer systems, and to extrapolate it to build faster, more power-efficient and reliable computer systems. It can also help to restore the attractiveness of computer engineering by making it a more systematic and rigorous discipline rather than "hacking".
We are building an academic and industrial workgroup interested in:
# building a common extensible HiPEAC repository and infrastructure to collect statistics, benchmarks, codelets, tools, data sets and predictive models from the community
# preparing a new publication model (workshops, conferences, journals) with validation of experimental results by the community
# systematizing and unifying optimization, design space exploration and run-time adaptation techniques (co-design and auto-tuning)
# evaluating various data mining, classification and predictive modeling techniques for off-line and on-line auto-tuning
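As a toy illustration of the off-line auto-tuning and predictive modeling techniques listed above, the sketch below exhaustively measures a small space of hypothetical unroll/vectorization configurations and then reuses the best known configuration of the most similar training program via a 1-nearest-neighbour model. All names, feature vectors and the simulated cost function are illustrative assumptions, not part of the actual cTuning infrastructure:

```python
# Toy empirical auto-tuner: measure every candidate configuration and
# keep the fastest one. In practice the "measure" step would compile
# and run a real benchmark on the target platform.
def autotune(measure, candidates):
    return min(candidates, key=measure)

# Simulated cost model (assumption for this sketch): pretend that
# unroll factor 4 with vectorization enabled gives the best run time.
def simulated_run_time(cfg):
    unroll, vectorize = cfg
    return abs(unroll - 4) + (0.0 if vectorize else 2.0) + 1.0

candidates = [(u, v) for u in (1, 2, 4, 8) for v in (False, True)]
best = autotune(simulated_run_time, candidates)  # -> (4, True)

# Predictive modeling across programs: 1-nearest-neighbour over simple
# program features, predicting the best known configuration of the most
# similar training program (a stand-in for the ML techniques above).
def predict(features, training):
    nearest = min(training,
                  key=lambda t: sum((a - b) ** 2
                                    for a, b in zip(t[0], features)))
    return nearest[1]

# (features, best known config) pairs for already-tuned programs.
training = [((10.0, 0.2), (4, True)),
            ((1.0, 0.9), (1, False))]
predicted = predict((9.0, 0.3), training)  # -> (4, True)
```

In a real setting, `simulated_run_time` would be replaced by compiling and timing an actual benchmark, and the feature vectors by measured program characteristics such as instruction mix or data-set size.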
=== Events ===
** [[Discussions:New_Publication_Model:Notes_from_hipeac_thematic_session_paris_2013 - Notes from HiPEAC thematic session in Paris 2013]]
* '''24-25/04/2012''' - [http://www.hipeac.net/content/goeteborg-hipeac-computing-systems-week-april-2012 Thematic session] at HiPEAC Computing Week 2012 in Göteborg, Sweden

=== Current work in progress ===
We are building an academic and industrial workgroup interested in:
# setting up a practical public repository and infrastructure to share and reproduce experimental results from the community
# sharing benchmarks, data sets, tools, interfaces, predictive models, etc. to prepare a common experimental methodology
# pushing forward a new publication model for HiPEAC where experimental results are validated by the community before being published (similar to conferences and journals in statistics, machine learning, biology, etc.); the main challenge is to enable very simple validation across multiple ever-changing architectures, tools, benchmarks and data sets

* [http://groups.google.com/group/collective-mind Mailing list] for the plugin-based open source Collective Mind Repository and Framework
* [https://groups.google.com/forum/?fromgroups#!forum/ctuning-discussions cTuning discussion mailing list]
 
=== Comments ===
* Grigori: after the keynote, I had interesting discussions with mathematicians from NTU during a conference on [http://goo.gl/iutx auto-tuning] in Taiwan. They mentioned a [http://www.jstatsoft.org journal in statistics] where results have to be validated by the community. However, we agreed that the biggest challenge in computer engineering is not reproducing a similar scheme, but enabling validation of results across many architectures, tools and data sets, which is currently impossible without a special framework and repository. That is why there was considerable interest in the new plugin-based Collective Mind framework, which is intended to enable collaborative and reproducible research while supporting most architectures, operating systems (including off-the-shelf Android mobiles), compilers, run-time systems, benchmarks, data sets, etc. I plan to pre-release this framework to a broader community during the next HiPEAC computing week to continue this collaborative effort.
=== Thanks ===
* We would like to thank HiPEAC for funding our meetings, and [http://ctuning.org/lab/people all colleagues] who provided valuable feedback on cTuning/Collective Mind technology and who evaluated, validated or extended various collaborative tools, benchmarks, data sets and predictive models for program and architecture optimization!

Revision as of 16:20, 15 May 2013
