We are developing the open-source Collective Knowledge framework (CK),
a customizable, cross-platform Python framework used by leading universities, Fortune 50 companies and non-profit organizations
to share artifacts as reusable components with a unified JSON API;
quickly prototype portable experimental workflows (such as multi-objective DNN optimization);
automate package installation;
crowdsource and reproduce experiments across diverse hardware;
unify predictive analytics;
enable interactive articles;
and develop sustainable and customizable research software.
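The dict-in/dict-out JSON API convention mentioned above can be illustrated with a minimal sketch. The `list_datasets` action and its catalog below are hypothetical; real CK components are invoked through `ck.kernel.access`, but the pattern can be shown without installing CK:

```python
# Sketch of CK's JSON API convention: every action takes a
# JSON-serializable dict and returns one. `list_datasets` is a
# hypothetical component action, not part of CK itself.

import json

def list_datasets(i):
    """Hypothetical action. By CK convention, the output dict contains
    'return' (0 on success, non-zero with an 'error' string on failure)."""
    tags = i.get('tags', [])
    # In a real component this would query shared CK entries by tag.
    catalog = [
        {'name': 'image-jpeg-0001', 'tags': ['image', 'jpeg']},
        {'name': 'txt-corpus-0001', 'tags': ['text']},
    ]
    matches = [d for d in catalog if set(tags) <= set(d['tags'])]
    return {'return': 0, 'datasets': matches}

r = list_datasets({'tags': ['image']})
print(json.dumps(r, indent=2))
```

Because inputs and outputs are plain JSON, the same action can be called from the command line, from Python, or over the web without changing the component.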
We participate in international research projects
with leading universities, companies
and non-profit organizations to help scientists use our open-source
Collective Knowledge framework (CK)
to implement sustainable research software, share reusable artifacts and workflows,
and crowdsource their experiments across diverse platforms provided by volunteers,
in a manner similar to SETI@home.
We use CK to enable interactive and reproducible articles with reusable artifacts and experimental workflows that share a unified JSON API and meta information.
We are developing a portable, customizable, multi-objective autotuning workflow powered by CK
to help the community automatically crowd-tune and crowd-fuzz compilers such as GCC and LLVM in terms of execution time,
code size, compilation time and bugs. We also use it to crowd-tune OpenCL, CUDA and other libraries,
and to co-design various DNN engines and models.
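The idea behind such multi-objective autotuning can be sketched in a few lines. The flag set and the cost model below are synthetic stand-ins (a real CK workflow compiles and runs an actual benchmark to measure time and size); the sketch only shows the search-and-Pareto-filter structure:

```python
# Toy sketch of multi-objective autotuning over compiler flags.
# FLAGS/TOGGLES and measure() are synthetic assumptions, not CK's
# actual workflow, which measures real compilations and runs.

import random

FLAGS = ['-O1', '-O2', '-O3', '-Os']
TOGGLES = ['-funroll-loops', '-ftree-vectorize', '-fomit-frame-pointer']

def measure(config):
    """Synthetic (time, size) model standing in for a real benchmark run."""
    time, size = {'-O1': (10.0, 100), '-O2': (8.0, 110),
                  '-O3': (7.0, 130), '-Os': (9.0, 90)}[config['opt']]
    for _ in config['extra']:
        time *= 0.97   # each toggle slightly improves time...
        size *= 1.05   # ...but grows code size
    return time, size

def dominated(a, b):
    """True if point b is at least as good as a in both objectives."""
    return b[0] <= a[0] and b[1] <= a[1] and b != a

def autotune(iterations=50, seed=0):
    """Random search over flag combinations; keep the Pareto front."""
    rng = random.Random(seed)
    points = []
    for _ in range(iterations):
        cfg = {'opt': rng.choice(FLAGS),
               'extra': [f for f in TOGGLES if rng.random() < 0.5]}
        points.append((measure(cfg), cfg))
    return [(m, c) for m, c in points
            if not any(dominated(m, m2) for m2, _ in points)]

for (t, s), cfg in autotune():
    print(f"time={t:.2f} size={s:.1f} flags={cfg['opt']} {cfg['extra']}")
```

Keeping the whole Pareto front, rather than a single "best" point, is what lets users later trade execution time against code size (or accuracy against speed) for their own constraints.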
For example, this autotuning workflow can deliver up to 10x speedups and 30% energy savings
on various OpenCL/CUDA/CPU libraries without sacrificing accuracy (or up to 20x speedups with a small accuracy degradation),
as shown in the CK-powered interactive graph below
for a popular SLAM algorithm
(all points can be reproduced and validated on practically any platform using CK):
Having a common experimental infrastructure allows us,
with the help of our partners and the community,
to build reusable, realistic, diverse,
and continuously evolving training sets in a common format
(programs, data sets, models, unexpected behaviors, mispredictions).
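To make the "common format" concrete, here is a hypothetical meta description for one shared training-set entry. CK stores per-entry meta information as JSON, but the field names below are illustrative assumptions, not CK's actual schema:

```python
# Hypothetical meta description for one shared training-set entry.
# The entry name and all field names are illustrative assumptions.

import json

entry_meta = {
    'data_name': 'cbench-automotive-susan',   # hypothetical program entry
    'tags': ['program', 'benchmark', 'crowd-tuning'],
    'data_sets': ['image-pgm-0001'],          # hypothetical linked data set
    'behavior': {
        'execution_time_s': 0.42,
        'binary_size_bytes': 18432,
        'unexpected': False,                  # flag for mispredictions/bugs
    },
}

print(json.dumps(entry_meta, indent=2))
```

Because every entry carries machine-readable meta information like this, programs, data sets, models and observed behaviors can all be searched, combined and extended by the community in a uniform way.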
See the following examples of shared training sets: