We deeply believe in the power of collaborative research, reproducible experiments, knowledge sharing, open science and open source to solve the world's most complex challenges!
Our passion is to develop open-source tools that help everyone understand and use state-of-the-art technology, and to enable open collaboration to solve real-world problems!
As a founding member of MLCommons (50+ AI software/hardware organizations), we are committed to democratizing AI/ML benchmarking
and making it accessible to everyone to deliver the most efficient AI solutions while reducing development, benchmarking and optimization costs.
We actively contribute to open-source projects and release all the code, data and models
from our projects to co-design efficient AI/ML systems in a reusable and reproducible format.
We also develop an open platform to share and reuse knowledge about AI/ML systems,
and have helped the community with reproducibility initiatives and open science
since 2008, when open science was still considered taboo.
You can learn more about our vision, community initiatives, open-source developments and other related projects
from our
keynote at ACM REP'23,
joint Nature article'23,
ACM TechTalk'21
and the journal article in Philosophical Transactions of the Royal Society'21.
Feel free to reach us via our Discord server
and connect with us on LinkedIn.
Grigori Fursin created cTuning.org in 2008, established the non-profit cTuning foundation in 2014,
and developed the Collective Knowledge technology
to support his quest to enable open science and help everyone
participate in collaborative research and solve real-world problems by facilitating reproducible experiments and bridging the growing gap between AI/ML research and production.
In 2022, we donated our Collective Knowledge v2 to MLCommons
and established a public MLCommons task force on automation and reproducibility
to continue developing it with the community in an open and transparent way to benefit everyone.
We are now leading the development of a new, open-source, technology-agnostic and portable
Collective Mind automation and reproducibility language (CM)
to empower everyone, from company experts to school children, to automatically reproduce, optimize, integrate and deploy state-of-the-art AI/ML solutions
in the real world in the fastest and most efficient way while slashing research, development, optimization and operational costs.
The CM automation language is being adopted and extended by the community via Discord
to collaboratively benchmark and optimize AI and ML systems across diverse software, hardware, models and data from different vendors,
and to share all the resulting knowledge and experience via the CK playground.
Our open-source technology has already helped
the community and many companies automate and optimize their MLPerf benchmark submissions,
contributing to more than half of all MLPerf inference performance and power results since the benchmark's inception.
We also support artifact evaluation and reproducibility challenges
at the leading ML and systems conferences to improve the reproducibility, replicability and reusability of research projects
in this rapidly evolving field.
We are collaborating with MLCommons and cKnowledge Ltd (our general sponsor)
to develop the Collective Knowledge v3 platform (CK playground)
- an open-source platform to empower everyone to automatically explore, select, co-design, optimize and deploy the most efficient
AI solution based on their requirements and constraints
(accuracy, performance, power consumption, size and price) while slashing development costs and time to market.
The CM automation language and the CK playground that we are developing in collaboration with MLCommons
are now used to automate reproducibility and optimization challenges for AI/ML systems,
the MedPerf platform,
the automotive benchmarking consortium,
the MLPerf benchmarks,
LLM-based assistants and other projects across rapidly evolving software, hardware, models and data.
Join our Discord server and/or contact us
if you are interested in participating in our community projects and developments!
We are honored that our expertise and open-source technology have helped the following initiatives:
Our current community activities include:
News
- 2023 December:
We've completed the artifact evaluation for ACM/IEEE MICRO'23
and prototyped the use of the common MLCommons CM automation interface
to make it easier for the community to run and reproduce experiments from published papers.
See our report for more details.
- 2023 September 15:
The cTuning foundation is proud to help MLCommons develop Collective Knowledge Technology v3
with the open-source MLCommons CM automation language,
CK playground
and modular inference library (MIL)
that became the first workflow automation to enable the mass submission of more than 12,000 performance
results in a single MLPerf inference submission round, with more than 1,900 power results across more
than 120 different system configurations from different vendors
(different implementations, all reference models, and support for the DeepSparse Zoo,
the Hugging Face Hub
and the BERT pruners from the NeurIPS paper, major frameworks and diverse software/hardware stacks),
in both the open and closed divisions!
This remarkable achievement became possible thanks to open and transparent
development of this technology as an official MLCommons project with
public Discord discussions, important feedback from Neural Magic, TTA, One
Stop Systems, Nutanix, Collabora, Deelvin, AMD and NVIDIA, and
contributions from students, researchers and even school children
from all over the world via our public MLPerf challenges.
Special thanks to cKnowledge for sponsoring our developments and submissions,
to One Stop Systems for showcasing the 1st MLPerf results on Rigel Edge Supercomputer,
and to TTA for sharing their platforms with us to make CM automation for
DLRMv2 available to everyone.
Since it is impossible to describe all the compelling performance and
power-efficiency results achieved by our collaborators in a short
press release, we make them available with various derived metrics
at the Collective Knowledge playground,
mlcommons@cm4mlperf-results
and this news page.
We continue enhancing the MLCommons CM/CK technology to help everyone
automatically co-design the most efficient end-to-end AI solutions based
on their requirements and constraints. We invite all submitters to follow
our CK/CM automation developments on GitHub
and join our public Discord server
if you want to automate your future MLPerf submissions at scale.
See related HPC Wire article
about cTuning and our CM/CK technology, and contact Grigori Fursin for more details!
- 2023 July 19: We are very excited to be a part of the great collaborative project
to enable "Federated benchmarking of medical artificial intelligence with MedPerf" -
see the overview of this collaborative platform in the Nature article.
- 2023 June 28: We are honored to give a keynote at the 1st ACM conference on reproducibility and replicability.
You can find our slides at Zenodo.
- 2023 June 14: We are preparing Artifact Evaluation at ACM/IEEE MICRO 2023
- stay tuned for more details! Since the criteria for the ACM "Artifacts Evaluated - Reusable" badge are quite vague, we partnered with the
MLCommons task force on automation and reproducibility
to add their unified interface (MLCommons CM)
to the submitted artifacts to make them more portable, reproducible and reusable.
This interface was successfully validated at
the Student Cluster Competition at SuperComputing'23,
and we would like to test it as a possible criterion for obtaining the ACM "Artifacts Evaluated - Reusable" badge.
Our ultimate goal is to provide a common interface to evaluate and reuse all artifacts across diverse and rapidly evolving software and hardware.
We suggest that authors join the public Discord server for this task force
to get free help from the community and MLCommons to add this interface to their artifacts before evaluation. Authors can also add this unified interface themselves
by following this tutorial.
- 2023 May 17: The cTuning foundation joined forces with AVCC and MLCommons to help develop
the industry's first automotive benchmark
based on our automation language and reproducibility methodology.
- 2023 April:
We have successfully validated this artifact evaluation methodology combined with the MLCommons CM automation language
to automate ~80% of MLPerf inference v3.0 submissions (98% of all power results):
LinkedIn,
Forbes,
ZDNet.
- 2023 April 5: We are excited to see our open-source Collective Knowledge playground
highlighted in the Forbes article.
- 2023 April 5: The cTuning foundation joins forces with MLCommons to develop Collective Knowledge Playground for collaborative optimization challenges:
press-release.
- 2023 April 3: Public release of our free, open-source and technology-agnostic MLCommons Collective Knowledge Playground (CK)
to automate benchmarking, optimization and reproducibility of MLPerf inference benchmark via collaborative challenges!
- 2023 Feb 16: New alpha CK2 GUI to visualize all MLPerf results is available here.
- 2023 Jan 30: New alpha CK2 GUI to run MLPerf inference is available here.
- 2022 November: We are very excited to see that our new CK2 automation meta-framework (CM)
was successfully used at the Student Cluster Competition'22
to make it easier to prepare and run the MLPerf inference benchmark in under one hour.
If you have 20 minutes, please check this tutorial
to reproduce the results yourself!
- 2022 September: We helped MLCommons prepare and release CM v1.0.1 -
the next generation of the MLCommons Collective Knowledge framework being developed
by the public workgroup.
We are very glad to see that more than 80% of all performance results and more than 95% of all power results
were automated by the MLCommons CK v2.6.1 in the latest MLPerf inference round thanks to submissions from Qualcomm, Krai, Dell, HPE and Lenovo!
- 2022 July: We have pre-released CK2(CM) portable automation scripts for MLOps and DevOps:
github.com/mlcommons/ck/tree/master/cm-mlops/script.
- 2022 March: We've started developing the CM framework (aka CK2)
based on the community feedback - join our collaborative effort!
- 2022 February: We've helped with artifact evaluation at ASPLOS'22!
- 2021 September: We are excited to announce that we have donated our
Collective Knowledge technology
and the MLPerf inference automation suite v2.5.8
to MLCommons
(github.com/mlcommons/ck and
github.com/mlcommons/ck-mlops) to benefit everyone!
- 2021 March: Our ACM TechTalk about "reproducing 150 Research Papers and Testing Them in the Real World"
is available on the ACM YouTube channel.
- 2021 March: The report from the "Workflows Community Summit: Bringing the Scientific Workflows Community Together"
is available in ArXiv.
- 2021 March: Our paper about the CK technology has appeared in Philosophical Transactions A, the world's longest-running scientific journal, in which Newton published: DOI, ArXiv.
- 2020 December: We are honored to join MLCommons
as a founding member to accelerate machine learning innovation.
- 2020 November: We are very excited to announce that we have completed
the prototyping phase of our Collective Knowledge framework (CK)
and successfully validated it in multiple industrial and academic projects
as described in this white paper
and the FASTPath'20 presentation.
We have helped our partners and the community use CK as an extensible playground to implement reusable
components
with automation actions for AI, ML, and systems R&D.
We used such components to assemble portable workflows
from reproduced research papers
during our reproducibility initiatives at ML and systems conferences.
We then demonstrated that it was possible to use such portable workflows
to automate the co-design process of efficient software, hardware and models,
simplify MLPerf inference benchmark submissions,
and quickly deploy emerging AI, ML, and IoT technology in production
in the most efficient way (speed, accuracy, energy, costs)
across diverse platforms from data centers to edge devices.