Our long-term mission since 2005 has been to develop a community-based infrastructure and a public repository of knowledge for systematic, collaborative and reproducible research and experimentation in computer engineering, combined with auto-tuning, machine learning and crowdsourcing. Please join our community effort!
Continuing innovation in science and technology is vital for our society and requires ever-increasing computational resources. However, delivering such resources has become intolerably complex, ad-hoc, costly and error-prone due to the enormous number of available design and optimization choices, the complex interactions between all software and hardware components, and the large number of incompatible analysis and optimization tools. As a result, understanding and modeling the overall relationship between end-user algorithms, applications, compiler optimizations, hardware designs, data sets and run-time behavior, which is essential for providing better solutions and computational resources, has become simply infeasible, as confirmed by many recent long-term international research visions about future computer systems.

Additional problems are caused by the lack of a common experimental methodology, the lack of interdisciplinary background, and the lack of unified mechanisms for knowledge building and exchange apart from numerous similar publications, where reproducibility and statistical meaningfulness of results, as well as sharing of data and tools, are often not even considered, in contrast with other sciences including physics, biology and artificial intelligence. In fact, such sharing is often impossible due to the lack of common and unified repositories, tools and data sets. At the same time, there is a vicious circle: initiatives to develop common tools and repositories that unify, systematize and share knowledge (data sets, tools, benchmarks, statistics, models) and make it widely available to the research and teaching community are practically neither funded nor rewarded academically, since the number of publications often matters more than the reproducibility and statistical quality of the research results.
As a consequence, students, scientists and engineers are forced to resort to intuitive, non-systematic, non-rigorous and error-prone techniques combined with unnecessary repetition of multiple experiments using ad-hoc tools, benchmarks and data sets. Furthermore, we witness slowed innovation, dramatically increased development costs and time-to-market for new embedded and HPC systems, an enormous waste of expensive computing resources and energy, and the diminishing attractiveness of computer engineering, which is often seen as "hacking" rather than systematic science (a detailed vision is available in these two open access publications: 2009, 2013). Thankfully, we have started seeing similar initiatives at some major conferences, including OOPSLA and PLDI!
Currently, we see two main paths for computer engineering:
Since 1996, we have been working on the gradual systematization of knowledge about the design and optimization of computer systems, building on our background in physics and AI. We are developing a collaborative experimental methodology, repository, tools and publication model for computer engineering (the cTuning / Collective Mind initiative) that favors collaborative discovery, sharing and reuse of knowledge [P1, M1]. This technology allows users to:
Having a common collaborative R&D repository and infrastructure allows users to focus their effort on novel approaches combined with data mining, classification and predictive modeling, rather than spending considerable effort on building new tools with already existing functionality or relying on ad-hoc tuning heuristics. It also allows conferences and journals to favor publications that can be collaboratively validated by the community.
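To give a flavor of what replacing ad-hoc tuning heuristics with predictive modeling can look like, here is a minimal, self-contained sketch: a nearest-neighbor classifier that suggests compiler optimizations for a new program based on the best choices previously found for similar programs. The feature names, feature values and flag combinations below are purely illustrative assumptions, not data from the actual cTuning repository, and a real system would use far richer program features and models.

```python
import math

# Hypothetical training data collected from prior community experiments:
# program features -> best optimization found so far.
# Features (illustrative only): (loop_nest_depth, memory_intensity, avg_trip_count)
training = [
    ((1, 0.2, 16),  "-O2"),
    ((3, 0.8, 512), "-O3 -funroll-loops"),
    ((2, 0.9, 256), "-O3 -funroll-loops"),
    ((1, 0.1, 8),   "-O2"),
]

def predict_flags(features):
    """Suggest flags from the nearest previously seen program (1-NN classification)."""
    def dist(a, b):
        # Euclidean distance in feature space
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda t: dist(t[0], features))[1]

# A new program with deep loop nests and high memory intensity lands near
# the programs that benefited from aggressive unrolling.
print(predict_flags((3, 0.7, 400)))  # -> "-O3 -funroll-loops"
```

In a shared repository, every new experiment enlarges the training set, so the quality of such predictions improves collaboratively instead of each group re-deriving its own heuristics from scratch.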
Since 2007, we have released all experimental data and tools for our cTuning / Collective Mind-related publications [TR2013, TR2009, IJPP2011, ACM TACO2010, PLDI2010, SMART2009, HiPEAC 2009], which enabled further collaborative validation, reproduction and extension of this technology together with IBM, CAPS, ARC, Intel/CEA Exascale Lab, Google, the University of Edinburgh, ICT, UPC and NCAR!
We believe that cTuning may finally complete the puzzle of how to consolidate various ad-hoc techniques and tools to build efficient self-tuning computer systems. We therefore strongly advocate further collaborative R&D and a new publication model. If you are interested, join us at upcoming events or use our cTuning and Collective Mind mailing lists to participate in discussions and collaborative development.
You can find more details about the history and implementation of cTuning here.
We would like to thank all our colleagues for interesting and sometimes tough discussions, feedback, collaboration and support during the development of the cTuning technology.