Interested in sponsoring our open-source tools for artifact sharing and sustainable research software? Contact us!

Our novel open research methodology has enabled the following activities:

We are developing the open-source Collective Knowledge framework (CK): a cross-platform, customizable Python framework used by leading universities, Fortune 50 companies and non-profit organizations to share artifacts as reusable components with a JSON API; quickly prototype portable experimental workflows (such as multi-objective DNN optimization); automate package installation; crowdsource and reproduce experiments across diverse hardware; unify predictive analytics; enable interactive articles; and develop sustainable and customizable research software.
[ cKnowledge.org website ] [ DATE'16 paper ]
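For readers who have not used CK, the snippet below is a minimal sketch of the usage style described above: a single Python entry point that exchanges JSON-compatible dictionaries. It assumes the ck package is installed and that a repository with such components is present; the module name and tags are illustrative assumptions.

    # Minimal sketch of the CK usage style: one entry point, JSON in / JSON out.
    # The module name and tags below are illustrative; adjust to your repositories.
    import ck.kernel as ck

    # Search local CK repositories for shared components (here: programs tagged as benchmarks).
    r = ck.access({'action': 'search',
                   'module_uoa': 'program',
                   'tags': 'benchmark'})
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK call failed'))

    # Each entry is a reusable component with its own JSON meta-information.
    for entry in r['lst']:
        print(entry['data_uoa'])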
We participate in international research projects with leading universities, companies and non-profit organizations to help scientists use our open-source Collective Knowledge framework (CK) to implement sustainable research software, share reusable artifacts and workflows, and crowdsource their experiments across diverse platforms provided by volunteers, similar to SETI@home.
We use CK to enable interactive and reproducible articles with reusable artifacts and experimental workflows that have a unified JSON API and meta-information!
[ Example of a CK-powered interactive article ] [ ACM evaluation of CK for reproducible articles ]
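As a rough illustration of what such meta-information can look like (not a fixed CK schema; every field name and value below is a hypothetical assumption), one reusable artifact might be described as:

    # Hypothetical meta-information describing one reusable artifact;
    # the field names are illustrative assumptions, not a fixed CK schema.
    artifact_meta = {
        "tags": ["image-classification", "reproducible", "interactive-article"],
        "dataset": {"name": "sample-image-set", "items": 500},
        "workflow": {"program": "classify-image", "cmd_key": "default"},
        "dependencies": ["compiler", "math-library"]
    }

Because such descriptions are plain JSON, an interactive article can query them through the same API used for experiments and rebuild its tables and plots on demand.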
We help conferences, workshops and journals, including CGO, PPoPP, PACT and SC, develop a common experimental methodology and framework for artifact evaluation and for digital libraries such as the ACM DL.


We also promote open, collaborative, reproducible and reusable research, as well as our new publication model with community-driven reviewing and validation of results.

[ Our proposal ] [ ADAPT workshops ] [ Wiki ] [ Open R&D via CK open challenges ]
We are developing a portable, customizable, multi-objective autotuning workflow powered by CK to help the community automatically crowd-tune and crowd-fuzz compilers such as GCC and LLVM in terms of execution time, code size, compilation time and bugs. We also use it to crowd-tune OpenCL, CUDA and other libraries, and to co-design various DNN engines and models. For example, such an autotuning workflow can deliver up to 10x performance speedups and 30% energy savings on various OpenCL/CUDA/CPU libraries without sacrificing accuracy (or up to 20x speedups with a small accuracy degradation), as shown in the CK-powered interactive graph below for a popular SLAM algorithm (all points can be reproduced and validated on practically any platform using CK):
[ Latest public results ] [ Android app to crowd-tune compilers ]
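To make the idea concrete, here is a minimal, self-contained sketch of multi-objective compiler autotuning (execution time versus code size) in plain Python. It is not the CK workflow itself: the benchmark source, the flag pool and the Linux-style invocation are assumptions.

    # Sketch of multi-objective compiler flag autotuning: execution time vs. binary size.
    # Assumes a Linux-like system with gcc on the PATH and a local benchmark.c source.
    import os, random, subprocess, time

    BENCH_SRC = 'benchmark.c'   # hypothetical benchmark source
    FLAG_POOL = ['-O1', '-O2', '-O3', '-funroll-loops',
                 '-ftree-vectorize', '-fomit-frame-pointer']

    results = []
    for trial in range(20):
        # Pick a random subset of flags for this trial.
        flags = random.sample(FLAG_POOL, k=random.randint(1, len(FLAG_POOL)))
        subprocess.run(['gcc', *flags, BENCH_SRC, '-o', 'bench'], check=True)

        # Measure wall-clock execution time of the compiled benchmark.
        start = time.time()
        subprocess.run(['./bench'], check=True)
        elapsed = time.time() - start

        results.append({'flags': flags,
                        'time_s': elapsed,
                        'size_bytes': os.path.getsize('bench')})

    # Keep Pareto-optimal points: no other point is strictly faster AND strictly smaller.
    pareto = [r for r in results
              if not any(o['time_s'] < r['time_s'] and o['size_bytes'] < r['size_bytes']
                         for o in results)]
    print(pareto)

The CK workflow generalizes this kind of loop with portable package management, reproducible experiment records and crowd-tuning across volunteer devices.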
We use the above autotuning workflow to crowdsource benchmarking, optimization and co-design of emerging workloads such as deep learning and AI across a diverse and ever-changing HW/SW stack, from IoT devices to supercomputers, provided by volunteers!

[ cKnowledge.org/ai ] [ Optimization repository ]

Having a common experimental infrastructure allows us to build reusable, realistic, diverse, and continuously evolving training sets in a common format (programs, data sets, models, unexpected behavior, mispredictions) with the help of our partners and the community.
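As a purely hypothetical illustration of such a common format, one training-set entry could be recorded as a small JSON-compatible dictionary like this (all field names and values are assumptions, not an official schema):

    # Hypothetical training-set entry in a common JSON-compatible format.
    # Field names and values are illustrative assumptions only.
    training_point = {
        "program": "image-corner-detection",            # assumed workload name
        "dataset": "sample-image-0001",
        "compiler": {"name": "gcc", "version": "7.3.0", "flags": "-O3 -funroll-loops"},
        "platform": {"cpu": "arm-cortex-a53", "os": "android"},
        "observed": {"time_s": 0.042, "size_bytes": 18732},
        "unexpected_behavior": None                     # e.g. a compiler bug or model misprediction
    }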

See the following examples of shared training sets:

Such realistic training sets can help design more efficient computer systems (ARM TechCon'16 and our article on machine-learning-based optimization) and build accurate models!