Current activities of the non-profit cTuning foundation:
• developing the free and open-source Collective Knowledge (CK) infrastructure and repository for collaborative and reproducible research, combined with experiment crowdsourcing and big-data predictive analytics;
• using CK, crowdsourcing and collective intelligence to enable practical, machine-learning-based multi-objective autotuning and run-time adaptation (effectively balancing performance, energy, accuracy, size, faults and all associated costs in heterogeneous computer systems);
• enabling an open research and publication model with reproducible research and artifact evaluation for PPoPP and CGO.
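The multi-objective balancing mentioned above comes down to keeping only those configurations that are not dominated on every objective at once. A minimal sketch of such Pareto filtering, assuming hypothetical configuration names and measurements (this is not CK's actual API):

```python
# Hypothetical measurements for candidate configurations:
# (execution time in s, energy in J, binary size in KB).
candidates = {
    "A": (1.0, 5.0, 200),
    "B": (1.2, 3.0, 180),
    "C": (0.9, 6.0, 250),
    "D": (1.3, 5.5, 260),  # worse than A on every objective
}

def dominates(x, y):
    """x dominates y if it is no worse on every objective
    and strictly better on at least one."""
    return all(a <= b for a, b in zip(x, y)) and \
           any(a < b for a, b in zip(x, y))

def pareto_front(cands):
    """Keep only configurations not dominated by any other."""
    return {name: m for name, m in cands.items()
            if not any(dominates(other, m)
                       for oname, other in cands.items() if oname != name)}

print(sorted(pareto_front(candidates)))  # -> ['A', 'B', 'C']
```

A real autotuner would then pick one point on this front according to user priorities (e.g. strict energy budget vs. best speed).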

The cTuning foundation is a non-profit research and development organization. Since 2008, we have been developing an open-source knowledge management framework and a web-based repository of knowledge to enable collaborative and reproducible experimentation in computer engineering, while exposing it to powerful predictive analytics (statistical analysis, machine learning, detection of missing features, improvement of models) and collective intelligence. The latest version of our technology, Collective Knowledge (CK), is now available online: the software, a demo repository and our latest interactive paper describing the concept.

Our CK technology is currently used in multiple research projects and has already helped several academic and industrial partners unify, systematize, standardize and accelerate their previously ad-hoc, complex, time-consuming and error-prone processes of benchmarking, autotuning (optimization) and co-design of computer systems (software and hardware), making them faster, smaller, cheaper, more energy-efficient and more reliable. Better computer systems, in turn, help boost innovation in science and technology!

The cTuning foundation primarily focuses on developing, supporting and extending the open-source, published cTuning technology from the MILEPOST project to enable faster, smaller, cheaper, more energy-efficient and more reliable self-tuning computer systems. This technology, considered by IBM to be the first in the world, turns the complex, ad-hoc, error-prone, time-consuming and costly process of benchmarking, optimization and co-design of computer systems across all software and hardware layers (applications, compilers, run-time libraries, heterogeneous multi-core architectures) into a unified big-data problem. We then systematize and considerably speed up this process (sometimes by several orders of magnitude) using predictive analytics (statistical analysis, machine learning, data mining, feature selection), a public repository of knowledge, empirical autotuning, run-time adaptation, crowdsourcing and collective intelligence. As a consequence, the cTuning approach dramatically reduces development costs and time to market for new multi-core devices, thus boosting innovation in science and technology. As a side effect, our approach also enables a new reproducible research and publication model in computer engineering, where articles, experimental results and all related artifacts are continuously shared, discussed, validated and improved by the community. See our manifesto and history for more details.
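One way to picture the "big data" formulation above: accumulated benchmarking results become training data, and a learner predicts a promising optimization for an unseen program from its features. A deliberately tiny sketch using 1-nearest-neighbour prediction; the feature vectors and flag strings are hypothetical illustrations, not the real MILEPOST/CK feature set:

```python
import math

# Hypothetical training data: per-program feature vectors
# (e.g. instruction count, loop depth, memory intensity)
# paired with the best compiler flags found empirically for
# that program. Real CK data is far richer.
train = [
    ([1200.0, 2.0, 0.3], "-O2"),
    ([90000.0, 6.0, 0.8], "-O3 -funroll-loops"),
    ([500.0, 1.0, 0.1], "-Os"),
    ([75000.0, 5.0, 0.7], "-O3 -funroll-loops"),
]

def predict_optimization(features):
    """Return the flags of the nearest training program."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, best = min(train, key=lambda rec: dist(rec[0], features))
    return best

# A new program resembling the large, loop-heavy ones:
print(predict_optimization([80000.0, 5.0, 0.75]))  # -> -O3 -funroll-loops
```

In practice the features would be normalized and a stronger model (decision trees, probabilistic models) used, but the pipeline shape is the same: shared experimental data in, predicted optimization out.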

Our expertise, research and developments:

Our open-source technology and expertise have already been successfully used in multiple industrial and academic projects, helping our partners develop faster, smaller, cheaper, more power-efficient and more reliable computer systems while reducing time to market for new products (software and hardware) by an order of magnitude, thus boosting innovation. We are systematizing, automating and considerably speeding up the following R&D tasks, as described in our vision papers (2009, 2013, 2014, 2015):

  • developing and customizing a collaborative, open-source research and experimentation infrastructure with a repository of knowledge to connect big-data analytics frameworks to empirical analysis and optimization of computer systems (see Fujitsu's 2014 press release about cTuning and our vision papers: 2009, 2013)
  • speeding up exploration of large design and optimization spaces using adaptive, probabilistic sampling and machine learning
  • automatically tuning GCC, LLVM or any other compiler optimization heuristics (static or JIT) for the new hardware using Collective Tuning and Collective Mind performance tracking and tuning buildbots, machine learning and crowdsourcing to reduce user programs' execution time, energy, and code/system size (our technology is considered by IBM to be the first in the world)
  • collecting a large number of benchmarks and data sets to make use of machine learning in computer engineering statistically meaningful
  • validating new hardware (detecting hardware/software errors) and benchmarking it (detecting performance regressions)
  • automatically analyzing and optimizing user applications using plugin-based autotuning, machine learning and crowdsourcing (for example, using commodity mobile phones)
  • enabling run-time adaptation for statically compiled user programs executed in embedded devices or data centers with virtual machines (cloud) to maximize performance and minimize power consumption
  • detecting or generating missing program and system features needed for optimization prediction and effective run-time adaptation.
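The adaptive, probabilistic sampling mentioned in the list above can be sketched as a two-phase search: a uniform random warm-up over the design space, then sampling biased toward the neighbourhood of the best configuration found so far. Everything below (the space, the cost function, the parameters) is a hypothetical stand-in for real benchmark measurements:

```python
import random

random.seed(0)  # for reproducible runs of this sketch

# Hypothetical two-dimensional optimization space.
tiles = [4, 8, 16, 32, 64, 128]
unrolls = [1, 2, 4, 8, 16]

def run_time(tile, unroll):
    """Synthetic cost surface standing in for a real benchmark run;
    its minimum is at tile=32, unroll=4."""
    return (tile - 32) ** 2 / 64.0 + (unroll - 4) ** 2 + 1.0

def adaptive_search(budget=30, warmup=10):
    """Uniform random warm-up, then perturbation of the best point."""
    best = None  # (time, tile, unroll)
    for i in range(budget):
        if i < warmup or best is None:
            t, u = random.choice(tiles), random.choice(unrolls)
        else:
            # Exploit: sample a neighbour of the current best.
            _, bt, bu = best
            t = tiles[max(0, min(len(tiles) - 1,
                                 tiles.index(bt) + random.randint(-1, 1)))]
            u = unrolls[max(0, min(len(unrolls) - 1,
                                   unrolls.index(bu) + random.randint(-1, 1)))]
        cost = run_time(t, u)
        if best is None or cost < best[0]:
            best = (cost, t, u)
    return best

print(adaptive_search())
```

Real design-space explorers replace the perturbation step with probabilistic models or learned predictors, which is what makes exhaustive exploration of huge optimization spaces unnecessary.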

At the same time, we develop new machine learning, data mining, knowledge discovery, statistical analysis, feature detection, crowdsourcing, autotuning and run-time adaptation techniques for computer engineering, or improve existing ones.

More details:

Our 2008 paper with all shared material

Since 2006, we have been releasing all experimental data and tools along with our publications [CPC'15, JSC'14, TR2013, TR2009, IJPP2011, ACM TACO2010, PLDI2010, SMART2009, HiPEAC 2009] to allow collaborative validation, reproduction and extension by the community. At the same time, a common collaborative R&D repository and infrastructure allows researchers to focus their effort on novel approaches combined with data mining, predictive analytics and classification, rather than spending considerable effort on building new tools with already existing functionality. It also allows conferences and journals to favor publications that can be collaboratively validated by the community, as described in our vision paper [arXiv, ACM DL].

Demo of a Collective Mind-powered online graph from our recent paper (enabling interactive articles)