From cTuning.org

Revision as of 16:20, 15 May 2013 by Gfursin (Talk | contribs)

This wiki is dedicated to:

  • New publication model for computer engineering that favors reproducible experimental results validated by the community
  • New plugin-based open source collaborative R&D infrastructure and repository to systematize design and optimization of computer systems

Designing novel many-core computer systems has become intolerably complex, ad hoc, costly and error-prone due to the limitations of available technology, the enormous number of available design and optimization choices, and the complex interactions between all software and hardware components. Empirical auto-tuning combined with run-time adaptation and machine learning has demonstrated good potential to address these challenges for more than a decade, but it is still far from widespread production use due to unbearably long exploration and training times, ever-changing tools and their interfaces, the lack of a common experimental methodology, and the lack of unified mechanisms for knowledge building and exchange apart from publications, where reproducibility of results is often not even considered.
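The paragraph above does not prescribe a concrete auto-tuning algorithm, so as a rough illustration only, a minimal empirical auto-tuner can be sketched as a random search over combinations of optimization flags, keeping the configuration with the best measured cost. The flag names and the cost function below are invented for illustration; in a real setting `measure` would compile and time a benchmark.

```python
import random

# Hypothetical optimization flags to explore (illustrative only).
FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer"]

def autotune(measure, flags, trials=20, seed=0):
    """Random search: sample flag subsets and keep the fastest one found."""
    rng = random.Random(seed)
    best_cfg, best_time = (), float("inf")
    for _ in range(trials):
        # Each flag is included in this candidate with probability 0.5.
        cfg = tuple(f for f in flags if rng.random() < 0.5)
        t = measure(cfg)  # in practice: compile and run the benchmark
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

# Synthetic cost model standing in for a real compile-and-run step:
# purely illustrative, "more flags means faster".
def fake_measure(cfg):
    return 10.0 - len(cfg)

best_cfg, best_time = autotune(fake_measure, FLAGS)
```

The fixed seed makes a run reproducible, which matters for exactly the kind of community validation this wiki argues for; in practice each measurement would also be repeated to account for run-time variability.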

We strongly believe that the time has come to start collaborative systematization and unification of the design and optimization of computer systems, combined with a new publication model where experimental results are validated by the community. One promising solution is to combine a public repository of knowledge with online auto-tuning, machine learning and crowdsourcing techniques, where the HiPEAC and cTuning communities already have solid practical experience. Such a collaborative approach should allow the community to continuously validate, systematize and improve collective knowledge about computer systems, and to extrapolate it to build faster, more power-efficient and reliable computer systems. It can also help restore the attractiveness of computer engineering by making it a more systematic and rigorous discipline rather than "hacking".

We are building an academic and industrial workgroup interested in:

  1. building a common, extensible HiPEAC repository and infrastructure to collect statistics, benchmarks, codelets, tools, data sets and predictive models from the community
  2. preparing a new publication model (workshops, conferences, journals) with validation of experimental results by the community
  3. systematizing and unifying optimization, design space exploration and run-time adaptation techniques (co-design and auto-tuning)
  4. evaluating various data mining, classification and predictive modeling techniques for off-line and on-line auto-tuning
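Point 4 above can be made concrete with a small sketch. One common formulation of predictive modeling for auto-tuning is to reuse the best-known optimization of the most similar previously seen program; the feature names, feature values and flag labels below are invented for illustration, and a real system would use many more features and a proper learner.

```python
# Hypothetical training data: (program features, best-known optimization).
# Features here are (loop_count, memory_intensity); labels are invented.
training = [
    ((2, 0.1), "-O2"),
    ((8, 0.2), "-O3 -funroll-loops"),
    ((3, 0.9), "-O2 -ftree-vectorize"),
    ((9, 0.8), "-O3 -ftree-vectorize"),
]

def predict_best_flags(features):
    """1-nearest-neighbor: reuse the optimization of the most similar program."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: sq_dist(item[0], features))[1]

print(predict_best_flags((7, 0.3)))  # nearest training point is (8, 0.2)
```

A shared repository of such (features, best optimization) records is precisely what would let the community train and cross-validate stronger models than any single group could build alone.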

Events

Comments

  • Grigori: after the keynote, I had interesting discussions with mathematicians from NTU during a conference on auto-tuning in Taiwan. They mentioned a journal in statistics where results have to be validated by the community. However, we agreed that the biggest challenge in computer engineering is not in reproducing a similar scheme, but in enabling validation of results across many architectures, tools and data sets, which is impossible right now without a special framework and repository. That is why there was considerable interest in the new plugin-based Collective Mind framework, which is intended to enable collaborative and reproducible research while supporting most architectures, operating systems (including off-the-shelf Android mobiles), compilers, run-time systems, benchmarks, data sets, etc. I plan to pre-release this framework to a broader community during the next HiPEAC computing week to continue this collaborative effort.

Thanks

  • We would like to thank HiPEAC for funding our meetings and all colleagues who provided interesting feedback on cTuning/Collective Mind technology, and evaluated, validated or extended various collaborative tools, benchmarks, data sets and predictive models for program and architecture optimization!