
NEWS:

  • Please join our panel on reproducible research at ADAPT'14 @ HiPEAC 2014 in January 2014, or submit papers to TRUST'14 @ PLDI 2014
  • Download latest stable BSD-licensed Collective Mind framework and repository from Sourceforge (1.0.2318beta)
  • Access pilot live Collective Mind repository with shared benchmarks, data sets, tools, models and other artifacts

Since 2006, we have been working on a common methodology, infrastructure and repository to enable collaborative and reproducible research and experimentation in computer engineering, focusing on auto-tuning, co-design, machine learning and run-time adaptation of computer systems! This approach enables a new publication model where all research materials (artifacts) are continuously shared, validated and improved by the community. To set an example, we have been collecting, unifying and releasing benchmarks, data sets, models and tools with unified interfaces since 2008, first at cTuning.org and later at c-mind.org. Despite initial hostility to this project from the academic community, we are glad to finally see similar initiatives at major conferences! However, our project is complementary: it focuses more on the technological aspects of collaborative and reproducible research in computer engineering than on just sharing and validating artifacts. If you are interested in this community project, join our events and effort, collaborate, invest, or contact Grigori Fursin (project founder) for more details!

Collaborative, systematic and reproducible computer engineering

[Image: You share research material]

With the rapid advances in information technology and all other fields of science comes dramatic growth in the amount of data to be processed ("big data"). Scientists, engineers and students are drowning in experimental data and often have to divert their research paths toward data management, mining and visualization. These tasks require additional interdisciplinary skills, including statistical analysis, machine learning, programming and parallelization, database management and Internet technologies, which few researchers have or can afford to learn in parallel with their main research work. Multiple frameworks, languages and public data repositories have appeared recently to enable collaborative data analysis and processing, but they either cover very narrow research topics and are too simplistic (just data and code sharing), or are very formal and still require special programming skills, often including object-oriented programming.

Collective Mind technology (cM) attempts to fill this gap by providing researchers and companies with a simple, portable, technology-neutral and practically transparent way to gradually systematize and classify all their data, code and tools. The open-source cM framework and repository relies entirely on customizable public or private plugins (mostly written in Python, with support for any other language through the OpenME interface) to gradually describe and classify similar data and code objects, or to abstract the interfaces of ever-changing tools, thus effectively protecting researchers' experimental setups. cM makes it easy to preserve any complex research artifact (collections of files, benchmarks, codelets, datasets, tools, traces, models) with a gradually and easily extensible JSON-based meta-description that includes classification, properties, and either direct or semantic data connections. Furthermore, the meta-descriptions of all data can be transparently indexed using third-party ElasticSearch, enabling very fast and complex queries. At the same time, all research artifacts can be exposed to any public or workgroup user through unified web services to crowdsource experimentation, ranking, online learning and knowledge management.
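
To make this more concrete, here is a minimal sketch of what such a JSON-based meta-description of a shared artifact might look like, expressed in Python; all field names (data_uoa, classification, connections, etc.) are illustrative assumptions rather than the exact cM schema.

    import json

    # Hypothetical meta-description of a benchmark artifact; the field names
    # are illustrative only and do not reflect the exact cM schema.
    meta = {
        "data_uoa": "cbench-susan",              # unique artifact name
        "classification": ["benchmark", "image-processing"],
        "properties": {
            "language": "C",
            "source_files": ["susan.c"],
        },
        "connections": {                         # direct or semantic links to other artifacts
            "datasets": ["image-pgm-0001"],
            "compiler": "gcc-4.6.3",
        },
    }

    # Such descriptions can be serialized and indexed (e.g. by ElasticSearch)
    # to support fast, complex queries over all shared artifacts.
    print(json.dumps(meta, indent=2))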

cM uses an agile top-down methodology, originating from physics, to represent any experimental scenario and gradually decompose it into connected plugins with associated data, or to compose it from already shared plugins, similar to "research LEGO". This universal structure immediately enables a replay mode for any experiment, making the framework suitable for recent projects on reproducibility of experimental results and for a new publication model where experiments and techniques are validated, ranked and improved by the community. For example, we easily moved all our past R&D on program and architecture multi-objective auto-tuning, co-design and dynamic adaptation to cM plugins, and we are gradually making them available together with all research artifacts at http://c-mind.org/repo. We hope that cM will be useful to a broad range of researchers and companies, either as an open-source, community-driven solution to systematize their research and experimentation, or possibly as an intermediate step before investing in more complex or commercial knowledge management systems.
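
As a rough illustration of this "research LEGO" idea, the Python sketch below decomposes a toy experiment into registered plugins and replays the recorded pipeline; the plugin registry and replay API here are hypothetical and only mirror the concept, not the actual cM interface.

    # Hypothetical sketch of composing an experiment from plugins and
    # replaying it; the registry and the replay API are illustrative only.
    PLUGINS = {}

    def plugin(name):
        """Register a function as a named experiment plugin."""
        def register(func):
            PLUGINS[name] = func
            return func
        return register

    @plugin("compile")
    def compile_step(state):
        state["binary"] = state["source"].replace(".c", ".bin")
        return state

    @plugin("run")
    def run_step(state):
        state["time_s"] = 1.23   # stand-in for a real measurement
        return state

    def replay(pipeline, state):
        """Re-run a recorded pipeline: the same plugin sequence applied to
        the same initial state reproduces the same experiment."""
        for name in pipeline:
            state = PLUGINS[name](state)
        return state

    print(replay(["compile", "run"], {"source": "susan.c"}))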


Related vision publications and presentations

Public repository of knowledge

Do not waste your research material: use the Collective Mind framework and repository to describe, run and share your experiments with the community!

  • Beta live Collective Mind repository (the third generation, opened in 2013, replacing the previous cTuning repository and infrastructure available since 2008). We described and shared all our past research developments, codelets, benchmarks, data sets, models, statistical analysis, modeling and online learning plugins, and tools to start top-down analysis and optimization of existing computer systems. We used it as the first practical example to motivate a new publication model where all research artifacts are continuously shared, validated and improved by the community. After many years, the community has finally started moving in this direction, and we even see related initiatives at major conferences, including OOPSLA and PLDI. However, our project is complementary and focuses more on the technological aspects of collaborative and reproducible research in computer engineering rather than just sharing and validating artifacts.

Common infrastructure and support tools

  • Collective Mind Infrastructure - plugin-based framework and repository for collaborative and reproducible research and experimentation
    • OpenME - interface to "open up" third-party tools and applications, preparing them for auto-tuning with cM (illustrated by the sketch after this list)
    • Alchemist - OpenME plugin to convert compilers into interactive analysis and optimization toolsets
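
The following Python sketch illustrates the general OpenME idea of "opening up" a tool through event callbacks; the event name and registration API are hypothetical assumptions for illustration, while the real interface targets arbitrary tools and languages.

    # Hypothetical sketch of an OpenME-style event interface: a tool raises
    # named events, and external plugins subscribe to observe or steer it.
    # The event name and API below are illustrative assumptions.
    callbacks = {}

    def register_callback(event, func):
        callbacks.setdefault(event, []).append(func)

    def raise_event(event, payload):
        for func in callbacks.get(event, []):
            func(payload)

    # An auto-tuning plugin subscribes to a compiler decision point
    # and overrides the loop unrolling factor.
    def tuning_plugin(payload):
        payload["unroll_factor"] = 4

    register_callback("before_loop_optimization", tuning_plugin)

    # Inside the "opened up" tool:
    decision = {"loop": "L1", "unroll_factor": 1}
    raise_event("before_loop_optimization", decision)
    print(decision)   # {'loop': 'L1', 'unroll_factor': 4}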

Discussions

Events

Past

Current customized usage scenarios

Designing novel many-core computer systems is becoming intolerably complex, ad-hoc, costly and error-prone due to limitations of available technology, an enormous number of available design and optimization choices, and complex interactions between all software and hardware components. Empirical auto-tuning combined with run-time adaptation and machine learning has demonstrated good potential to address these challenges for more than a decade, but it is still far from widespread production use due to unbearably long exploration and training times, ever-changing tools and their interfaces, the lack of a common experimental methodology, and the lack of unified mechanisms for knowledge building and exchange apart from publications, where reproducibility of results is often not even considered. Since 1993, we have spent more time preparing and analyzing huge amounts of heterogeneous experiments for self-tuning, machine-learning-based computer systems, or trying to validate and reproduce others' research results, than on extending our novel ideas.

In 2007, we decided to start the collaborative systematization and unification of the design and optimization of computer systems, combined with a new publication model where experimental results are validated by the community. One promising solution is to combine a public repository of knowledge with online auto-tuning, machine learning and crowdsourcing techniques, where the HiPEAC and cTuning communities already have good practical experience. Such a collaborative approach should allow the community to continuously validate, systematize and improve collective knowledge about computer systems, and to extrapolate it to build faster, more power-efficient and reliable computer systems. It can also help restore the attractiveness of computer engineering, making it a more systematic and rigorous discipline rather than "hacking".

We are developing the cTuning collaborative research and development infrastructure and repository (the current version is cTuning3, aka Collective Mind), which enables:

  • gradual decomposition and parametrization of complex experiments using unified and inter-connected Collective Mind modules (plugins)
  • collection and sharing of statistics, benchmarks, codelets, tools, data sets and predictive models from the community
  • systematization of optimization, design space exploration and run-time adaptation techniques (co-design and auto-tuning)
  • collaborative evaluation and improvement of various data mining, classification and predictive modeling techniques for off-line and on-line auto-tuning
  • new publication model (workshops, conferences, journals) with validation of experimental results by the community

The current cM version includes public benchmarks, datasets, tools, techniques and some statistics from Grigori Fursin's past research:

  • support for most OSes and platforms (Linux, Android, Windows; servers, cloud nodes, mobiles, laptops, tablets, supercomputers)
  • multiple benchmarks (cBench, PolyBench, SPEC95, SPEC2000, SPEC2006, EEMBC, etc.), hundreds of MILEPOST/CAPS codelets, thousands of cBench datasets
  • multiple compilers (GCC, LLVM, Open64, PathScale, Intel, IBM, PGI)
  • tools for program and architecture characterization (MILEPOST GCC for semantic features and code patterns; hardware counters for dynamic analysis)
  • plugins for powerful visualization and data export in various formats
  • experimental pipeline for universal program and architecture co-design, auto-tuning, performance/energy modeling and machine learning
  • OpenME interface to instrument programs or to statically enable adaptive binaries through multi-versioning and decision trees for run-time adaptation and scheduling, while easily mixing CPU/CUDA/OpenCL codelets or any other heterogeneous programming models (see the sketch after this list)
  • plugins for online auto-tuning and performance model building
  • machine-learning enabled self-tuning cTuning CC compiler that can wrap any existing compiler while using crowd-tuning and collective knowledge to continuously improve its own behavior
  • plugins for universal P2P data exchange through cM web services
  • optimization statistics for various ARM, Intel and NVIDIA chips
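
To make the multi-versioning item above more concrete, here is a hedged Python sketch of run-time adaptation: several versions of one kernel coexist in the same program, and a toy decision tree over run-time features picks a version per call. The features, thresholds and version bodies are all invented for illustration and do not reproduce the actual cM mechanisms.

    # Hypothetical sketch of run-time adaptation through multi-versioning:
    # several versions of one kernel coexist in the binary, and a small
    # decision tree (learned offline) selects a version per call from
    # run-time features. Features and thresholds are invented.
    def kernel_cpu(data):
        return sum(data)          # scalar baseline version

    def kernel_unrolled(data):
        return sum(data)          # stand-in for an unrolled/vectorized version

    def kernel_gpu(data):
        return sum(data)          # stand-in for a CUDA/OpenCL codelet

    def select_version(n_elements, on_battery):
        """Toy decision tree mapping run-time features to a code version."""
        if on_battery:
            return kernel_cpu
        if n_elements > 100000:
            return kernel_gpu
        return kernel_unrolled

    data = list(range(1000))
    version = select_version(len(data), on_battery=False)
    print(version.__name__, version(data))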



Collective Mind is a community-based and continuously evolving project that uses an agile development methodology. Hence, interfaces and modules may change from time to time to provide the needed functionality. We are very thankful for your understanding, patience and any help in extending and improving this framework while keeping it clean, simple and easy to use.


(C) 2011-2014 cTuning foundation