We are currently raising funds to extend our public activities and developments in 2015 (including our public repository of knowledge, our framework to crowdsource auto-tuning, and our initiatives on reproducible research in computer engineering) - if you are interested in sponsoring us, please get in touch!

Since 2007, we have been developing public and open source knowledge management technology and a repository, validated in various academic and industrial projects, for collaborative, systematic and reproducible R&D in computer engineering.



The cTuning foundation is a not-for-profit research and development organization working on an open source knowledge management technology and public repository of knowledge. We help non-specialists systematize and crowdsource their experimentation while gradually making it more reproducible and exposing it to powerful predictive analytics (statistical analysis, machine learning, detection of missing features, improvement of models) and collective intelligence.

As a practical usage scenario, our technology has helped various academic and industrial partners unify, systematize, standardize and accelerate the ad-hoc, complex, time-consuming and error-prone process of benchmarking, optimization and co-design of computer systems (software and hardware), making them faster, smaller, cheaper, more energy efficient and reliable. Better computer systems, in turn, help boost innovation in science and technology!

The cTuning foundation primarily focuses on developing, supporting and extending the open source cTuning technology published during the MILEPOST project to enable faster, smaller, cheaper, more energy efficient and reliable self-tuning computer systems. This technology, considered by IBM to be the first of its kind in the world, turns the complex, ad-hoc, error-prone, time-consuming and costly process of benchmarking, optimization and co-design of computer systems across all software and hardware layers (applications, compilers, run-time libraries, heterogeneous multi-core architectures) into a unified big data problem. We then systematize this process and considerably speed it up (sometimes by several orders of magnitude) using predictive analytics (statistical analysis, machine learning, data mining, feature selection), a public repository of knowledge, empirical auto-tuning, run-time adaptation, crowdsourcing and collective intelligence. As a consequence, the cTuning approach dramatically reduces development costs and time to market for new multi-core devices, thus boosting innovation in science and technology. As a side effect, our approach also enables a new reproducible research and publication model in computer engineering, where articles, experimental results and all related artifacts are continuously shared, discussed, validated and improved by the community. See our manifesto and history for more details.
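As a rough illustration of this approach, the sketch below treats compiler flag selection as an empirical search over an optimization space, with every experiment recorded so that it can later be shared and analyzed. This is a minimal, hypothetical example rather than our actual implementation: the benchmark file, the flag subset and the sampling budget are assumptions made for illustration only.

    # Minimal sketch (not the actual cTuning implementation): treat compiler
    # flag selection as an empirical search problem, record every experiment,
    # and keep the best result. "benchmark.c" and the flag subset are
    # illustrative assumptions.
    import random
    import subprocess
    import time

    FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer",
             "-finline-functions", "-fno-strict-aliasing"]

    def compile_and_time(flags, src="benchmark.c", runs=3):
        """Compile src with the given flags and return the median run time."""
        subprocess.check_call(["gcc", "-O2"] + flags + [src, "-o", "a.out"])
        times = []
        for _ in range(runs):
            start = time.time()
            subprocess.check_call(["./a.out"])
            times.append(time.time() - start)
        return sorted(times)[len(times) // 2]

    # Random sampling of the optimization space; each record could be
    # shared in a public repository for later statistical analysis.
    results = []
    for _ in range(20):
        combo = [f for f in FLAGS if random.random() < 0.5]
        results.append((combo, compile_and_time(combo)))

    best_flags, best_time = min(results, key=lambda r: r[1])
    print("best flags:", best_flags, "median time:", best_time)

In a real deployment, such records are aggregated across many users and machines (crowdsourcing), which is what makes subsequent machine learning on them statistically meaningful.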

Our expertise, research and developments:

Our open source technology and expertise have already been successfully used in multiple industrial and academic projects, helping our partners develop faster, smaller, cheaper, more power efficient and reliable computer systems while dramatically reducing time to market for new products (software and hardware) by an order of magnitude, thus boosting innovation. We are systematizing, automating and considerably speeding up the following R&D tasks, as described in our vision papers (2009, 2013, 2014):

  • developing and customizing collaborative and open source research and experimentation infrastructure with a repository of knowledge to connect big data analytics frameworks to empirical analysis and optimization of computer systems (see Fujitsu's 2014 press release about cTuning and our vision papers: 2009, 2013)
  • speeding up exploration of large design and optimization spaces using adaptive, probabilistic sampling and machine learning
  • automatically tuning GCC, LLVM or any other compiler's optimization heuristics (static or JIT) for new hardware using Collective Tuning and Collective Mind performance tracking and tuning buildbots, machine learning and crowdsourcing to reduce user programs' execution time, energy consumption, and code/system size
  • collecting a large number of benchmarks and data sets to make the use of machine learning in computer engineering statistically meaningful
  • validating new hardware (detecting hardware/software errors) and benchmarking it (detecting regressions)
  • automatically analyzing and optimizing user applications using plugin-based auto-tuning, machine learning and crowdsourcing (for example, using commodity mobile phones)
  • enabling run-time adaptation for statically compiled user programs executed on embedded devices or in data centers with virtual machines (cloud) to maximize performance and minimize power consumption
  • detecting or generating missing program and system features needed for optimization prediction and effective run-time adaptation (a simple prediction sketch follows this list).
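As a rough sketch of the feature-based optimization prediction mentioned in the list above, the example below reuses the best flags of the most similar previously optimized program via nearest-neighbor matching. The feature vectors and records are invented for illustration; real systems such as MILEPOST extract many more static and dynamic program features.

    # Minimal sketch (not the MILEPOST implementation) of predicting
    # optimizations from program features via nearest-neighbor matching.
    # The feature vectors and flag records below are invented.
    import math

    # Each record: (program feature vector, best flags found empirically).
    # Hypothetical features: [loop count, average loop depth, memory-op ratio]
    KNOWLEDGE_BASE = [
        ([12, 2.0, 0.6], ["-funroll-loops", "-ftree-vectorize"]),
        ([3, 1.0, 0.2], ["-finline-functions"]),
        ([25, 3.5, 0.8], ["-funroll-loops", "-fprefetch-loop-arrays"]),
    ]

    def predict_flags(features):
        """Reuse the flags of the most similar previously optimized program."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        _, flags = min(KNOWLEDGE_BASE, key=lambda rec: dist(rec[0], features))
        return flags

    print(predict_flags([10, 2.2, 0.5]))  # flags of the closest known program

The same idea extends to run-time adaptation: instead of flags, a knowledge base can map run-time context features to precompiled code versions.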

At the same time, we develop novel machine learning, data mining, knowledge discovery, statistical analysis, feature detection, crowdsourcing, auto-tuning and run-time adaptation techniques for computer engineering, and improve existing ones.

More details:

Our 2008 paper with all shared material

Since 2006, we have been releasing all experimental data and tools along with our publications [TR2013, TR2009, IJPP2011, ACM TACO2010, PLDI2010, SMART2009, HiPEAC 2009] to allow collaborative validation, reproducibility and extension by the community. At the same time, having a common collaborative R&D repository and infrastructure allows researchers to focus their effort on novel approaches combined with data mining, predictive analytics and classification, rather than spending considerable effort on building new tools with already existing functionality. It also allows conferences and journals to favor publications that can be collaboratively validated by the community, as described in our vision paper [arXiv, ACM DL].

Demo of a Collective Mind-powered online graph from our recent paper (enabling interactive articles)