
This is an archive. We moved to ...

This wiki is dedicated to:

  • A new publication model for computer engineering that favors reproducible experimental results validated by the community
  • A new plugin-based, open-source collaborative R&D infrastructure and repository to systematize the design and optimization of computer systems

If you have questions, comments or suggestions, are interested in participating in future events, or would like to join this effort, please get in touch with Grigori Fursin.

Designing novel many-core computer systems has become intolerably complex, ad-hoc, costly and error-prone due to limitations of available technology, the enormous number of available design and optimization choices, and complex interactions between all software and hardware components. At the same time, the research and development methodology for computer systems has hardly changed in past decades. Users and developers often have to resort to non-scientific, non-systematic, non-rigorous, intuitive and error-prone methods, combined with multiple ad-hoc tools; limited, non-representative and often simply outdated benchmarks and datasets; and complex interfaces and data formats, just to select a solution that satisfies all their needs. Such outdated methodology results in an enormous waste of expensive computing resources and energy, and considerably increases development costs and time-to-market for new systems.

Empirical auto-tuning combined with run-time adaptation and machine learning has demonstrated good potential to address the above challenges for more than a decade, but it is still far from widespread production use due to unbearably long exploration and training times, ever-changing tools and their interfaces, the lack of a common experimental methodology, and the lack of unified mechanisms for building and exchanging knowledge beyond publications, where reproducibility of results is often not even considered.

We strongly believe that the time has come to start collaborative systematization and unification of the design and optimization of computer systems, combined with a new publication model where experimental results are validated by the community. One promising solution is to combine a public repository of knowledge with online auto-tuning, machine learning and crowdsourcing techniques, with which the HiPEAC and cTuning communities already have good practical experience. Such a collaborative approach should allow the community to continuously validate, systematize and improve collective knowledge about computer systems, and to extrapolate it to build faster, more power-efficient and more reliable computer systems. Furthermore, it should be able to suggest to researchers and engineers where to focus their effort and creativity when designing or optimizing computer systems, thus boosting innovation and dramatically reducing development and optimization costs, and time-to-market for new systems. It can also help to restore the attractiveness of computer engineering by making it a more systematic and rigorous discipline rather than "hacking".
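To make the idea of empirical auto-tuning concrete, the following is a minimal sketch, not part of the actual cTuning/Collective Mind code: it exhaustively explores combinations of compiler flags and keeps the fastest one. The flag list and the cost model standing in for real timings are hypothetical placeholders; a real setup would compile and run a benchmark repeatedly and could use a predictive model to prune the search space.

```python
import itertools

# Hypothetical optimization space: compiler flag combinations (placeholders).
FLAGS = ["-O3", "-funroll-loops", "-ffast-math", "-flto"]

def measure(flag_subset):
    """Simulated measurement of execution time for a flag combination.
    In a real setup this would compile and run a benchmark several times."""
    # Deterministic toy cost model standing in for real timings.
    base = 10.0
    speedup = {"-O3": 3.0, "-funroll-loops": 0.5,
               "-ffast-math": 0.8, "-flto": 0.4}
    return base - sum(speedup[f] for f in flag_subset)

def autotune():
    """Exhaustively explore all flag subsets and return the fastest one."""
    best_flags, best_time = (), float("inf")
    for r in range(len(FLAGS) + 1):
        for subset in itertools.combinations(FLAGS, r):
            t = measure(subset)
            if t < best_time:
                best_flags, best_time = subset, t
    return best_flags, best_time

if __name__ == "__main__":
    flags, t = autotune()
    print(flags, t)
```

In practice the space of choices is far too large for exhaustive search, which is exactly why the text above argues for machine learning and crowdsourcing: distributing the exploration across many users and learning which regions of the space are worth visiting.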

We are building an academic and industrial workgroup interested in:

  • building a common extensible HiPEAC repository and infrastructure (possibly based on the existing modular cTuning/Collective Mind technology) to collect statistics, benchmarks, codelets, data sets, tools, auto-tuning and run-time adaptation plugins, and predictive models from the community
  • systematizing and unifying optimization, design space exploration and run-time adaptation techniques (co-design and auto-tuning)
  • collaboratively evaluating and improving various data mining, classification and predictive modeling techniques for off-line and on-line auto-tuning
  • substituting ad-hoc and possibly outdated benchmarks and data sets with realistic and representative ones from the community
  • preparing a new publication model (workshops, conferences, journals) with validation of experimental results by the community
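A plugin-based repository of shared artifacts could treat each contribution as a component with a meta-description and a uniform API. The sketch below is purely illustrative: the field names and the `Component` class are hypothetical and do NOT reflect the actual cTuning/Collective Mind schema.

```python
import json

# Hypothetical meta-description for a shared artifact; this schema is
# illustrative only, not the actual cTuning/Collective Mind format.
meta = {
    "name": "matmul-codelet",
    "type": "benchmark",
    "version": "1.0",
    "dependencies": ["gcc >= 4.6"],
    "actions": ["compile", "run"],  # the uniform API entry points
}

class Component:
    """A minimal reusable component: meta-description plus a uniform API."""
    def __init__(self, meta):
        self.meta = meta

    def run_action(self, action, **params):
        if action not in self.meta["actions"]:
            raise ValueError(f"unsupported action: {action}")
        # A real component would dispatch to the artifact's own scripts here.
        return {"action": action, "params": params, "status": "ok"}

if __name__ == "__main__":
    c = Component(meta)
    print(json.dumps(c.run_action("run", dataset="small")))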



  • Grigori: from our experience with sharing our experimental results and tools, and with validating existing results (such as at ADAPT'14), just packing and sharing artifacts is not enough - we would like to have them as reusable components with a defined API and meta-description.
  • Grigori: During a conference on auto-tuning at NTU (Taiwan), colleagues mentioned an on-line journal in statistics where the algorithm and datasets are submitted along with the publication so that results can be validated by the PC/community. Later, during further discussions at the HiPEAC computing week in Paris, we all agreed that we cannot simply reproduce the same scheme, since experimental setups in computer engineering (design, optimization and run-time adaptation) are usually much more complex, and there is still no clear methodology for reproducibility (for example, how to deal with natural variations in computer system behavior across different users, setups or runs). That is why we believe we need a common collaborative R&D repository and infrastructure to reproduce and validate experiments across many users, architectures, tools and data sets. This is the exact reason why I have been designing the open-source, plugin-based cTuning and Collective Mind repository and infrastructure, which allows users to build customized experimental setups while supporting most architectures, operating systems (including off-the-shelf Android mobiles), compilers, run-time systems, benchmarks, data sets, etc. It is now available for preview, and we are testing it in a few industrial and academic projects before a pre-release, to continue collaborative and public discussions.
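One common way to cope with run-to-run variation in system behavior (a sketch of a general practice, not the methodology settled on in the discussion above) is to repeat each measurement and report a robust statistic together with its spread:

```python
import statistics

def summarize(timings):
    """Summarize repeated timings of one experiment: report the median
    (robust to outliers such as OS interference) and the spread."""
    med = statistics.median(timings)
    spread = max(timings) - min(timings)
    # Flag the experiment as unstable if the spread exceeds 10% of the
    # median; the threshold is hypothetical, chosen only for illustration.
    stable = spread <= 0.1 * med
    return {"median": med, "spread": spread, "stable": stable}

# Example: five repeated runs of the same binary on the same input.
print(summarize([1.02, 1.00, 1.01, 1.03, 1.01]))
```

Sharing such summaries (rather than single numbers) through a common repository would let other users judge whether a result they fail to reproduce falls within the reported variation or indicates a genuine discrepancy.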


  • We would like to thank HiPEAC for funding our meetings, and all colleagues who provided interesting feedback on the cTuning/Collective Mind technology and who evaluated, validated or extended various collaborative tools, benchmarks, data sets and predictive models for program and architecture optimization!