cTuning foundation news archive

  • 2024 April 2: We are very excited to be helping students run MLPerf benchmarks at the upcoming Student Cluster Competition at SuperComputing'24.
  • 2024 March 30: We have completed a collaborative engineering project with MLCommons to enhance the CM workflow automation to run MLPerf inference benchmarks across different models, software and hardware from different vendors in a unified way. It was successfully validated by automating ~90% of all MLPerf inference v4.0 performance and power submissions while identifying some of the best-performing and most cost-effective software/hardware configurations for AI systems: see our report for more details and the short sketch below.
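
    A minimal sketch of what this unified interface looks like from Python is shown below. The "cmind" package and the cmind.access() call are the public CM API, but please treat the exact script tags and flags as illustrative assumptions rather than a definitive reference:

      import cmind

      # Ask CM to benchmark the reference ResNet-50 implementation of the
      # MLPerf inference benchmark on a CPU; CM resolves and caches the
      # required software dependencies automatically.
      r = cmind.access({'action': 'run',
                        'automation': 'script',
                        'tags': 'run-mlperf,inference',  # assumed script tags
                        'model': 'resnet50',
                        'implementation': 'reference',
                        'device': 'cpu',
                        'out': 'con'})
      if r['return'] > 0:
          print(r['error'])

    Changing the dictionary keys (model, implementation, device) is what lets a single interface cover benchmarks across different vendor stacks.
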
  • 2024 March 2: We presented our new collaborative project, "Automatically Compose High-Performance and Cost-Efficient AI Systems with MLCommons' Collective Mind and MLPerf", at the MLPerf-Bench workshop @ HPCA'24.
  • 2023 December: We've completed the artifact evaluation for ACM/IEEE MICRO'23 and prototyped the use of the common MLCommons CM automation interface to make it easier for the community to run and reproduce experiments from published papers. See our report for more details.
  • 2023 September 15: The cTuning foundation is proud to help MLCommons develop the new version of the Collective Knowledge Technology (v3) with the open-source MLCommons CM automation language, the CK playground and the modular inference library (MIL). It became the 1st workflow automation to enable the mass submission of more than 12,000 performance results in a single MLPerf inference submission round, with more than 1,900 power results across more than 120 different system configurations from different vendors (different implementations, all reference models, support for the DeepSparse Zoo, the Hugging Face Hub and the BERT pruners from the NeurIPS paper, the main frameworks, and diverse software/hardware stacks) in both the open and closed divisions!

    This remarkable achievement became possible thanks to the open and transparent development of this technology as an official MLCommons project with public Discord discussions, important feedback from Neural Magic, TTA, One Stop Systems, Nutanix, Collabora, Deelvin, AMD and NVIDIA, and contributions from students, researchers and even schoolchildren from all over the world via our public MLPerf challenges. Special thanks to cKnowledge for sponsoring our developments and submissions, to One Stop Systems for showcasing the 1st MLPerf results on the Rigel Edge Supercomputer, and to TTA for sharing their platforms with us to make CM automation for DLRMv2 available to everyone.

    Since it's impossible to describe all the compelling performance and power-efficiency results achieved by our collaborators in a short press release, we make them available with various derived metrics at the Collective Knowledge playground, mlcommons@cm4mlperf-results and this news page. We continue enhancing the MLCommons CM/CK technology to help everyone automatically co-design the most efficient end-to-end AI solutions based on their requirements and constraints. We welcome all submitters to follow our CK/CM automation developments on GitHub and to join our public Discord server if you want to automate your future MLPerf submissions at scale.

    See the related HPCwire article about cTuning and our CM/CK technology, and contact Grigori Fursin and Arjun Suresh for more details!

  • 2023 July 19: We are very excited to be a part of the great collaborative project to enable "Federated benchmarking of medical artificial intelligence with MedPerf" - see the overview of this collaborative platform in the Nature article.
  • 2023 June 28: We are honored to give a keynote at the 1st ACM conference on reproducibility and replicability. You can find our slides at Zenodo.
  • 2023 June 14: We are preparing Artifact Evaluation at ACM/IEEE MICRO 2023 - stay tuned for more details! Since the criteria for the ACM "Artifacts Evaluated - Reusable" badge are quite vague, we partnered with the MLCommons task force on automation and reproducibility to add their unified interface (MLCommons CM) to the submitted artifacts to make them more portable, reproducible and reusable. This interface was successfully validated at the Student Cluster Competition at SuperComputing'22, and we would like to test it as a possible criterion for obtaining the ACM "Artifacts Evaluated - Reusable" badge. Our ultimate goal is to provide a common interface to evaluate and reuse all artifacts across diverse and rapidly evolving software and hardware. We suggest that authors join the public Discord server of this task force to get free help from the community and MLCommons to add this interface to their artifacts before evaluation. Authors can also add this unified interface themselves by following this tutorial.
  • 2023 May 17: The cTuning foundation joined forces with AVCC and MLCommons to help develop the industry's first Automotive Benchmark based on our automation language and reproducibility methodology.
  • 2023 April: We have successfully validated our artifact evaluation methodology combined with the MLCommons CM automation language by automating ~80% of all MLPerf inference v3.0 submissions (98% of all power results): LinkedIn, Forbes, ZDNet.
  • 2023 April 5: We are excited to see our open-source Collective Knowledge playground highlighted in the Forbes article.
  • 2023 April 5: The cTuning foundation joins forces with MLCommons to develop the Collective Knowledge Playground for collaborative optimization challenges: press release.
  • 2023 April 3: Public release of our free, open-source and technology-agnostic MLCommons Collective Knowledge Playground (CK) to automate benchmarking, optimization and reproducibility of the MLPerf inference benchmark via collaborative challenges!
  • 2023 Feb 16: New alpha CK2 GUI to visualize all MLPerf results is available here.
  • 2023 Jan 30: New alpha CK2 GUI to run MLPerf inference is available here.
  • 2022 November: We are very excited to see that our new CK2 automation meta-framework (CM) was successfully used at the Student Cluster Competition'22 to make it easier to prepare and run the MLPerf inference benchmark in just under 1 hour. If you have 20 minutes, please check this tutorial to reproduce the results yourself!
  • 2022 September: We have helped MLCommons prepare and release CM v1.0.1 - the next generation of the MLCommons Collective Knowledge framework developed by the public workgroup. We are very glad to see that more than 80% of all performance results and more than 95% of all power results were automated by MLCommons CK v2.6.1 in the latest MLPerf inference round thanks to submissions from Qualcomm, Krai, Dell, HPE and Lenovo!
  • 2022 July: We have pre-released CK2 (CM) portable automation scripts for MLOps and DevOps: github.com/mlcommons/ck/tree/master/cm-mlops/script (see the short usage sketch below).
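
    A minimal usage sketch in Python is shown below (assuming "pip install cmind" and "cm pull repo mlcommons@ck" have been run first; "detect,os" names one of the simplest scripts in this collection):

      import cmind

      # CM scripts are addressed by human-readable tags rather than file
      # paths, so the same call works on Linux, Windows and macOS.
      r = cmind.access({'action': 'run',
                        'automation': 'script',
                        'tags': 'detect,os',
                        'out': 'con'})
      if r['return'] > 0:
          print(r['error'])
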
  • 2022 March: We've started developing the CM framework (aka CK2) based on community feedback - join our collaborative effort!
  • 2022 February: We've helped with artifact evaluation at ASPLOS'22!
  • 2021 September: We are excited to announce that we have donated our Collective Knowledge technology and the MLPerf inference automation suite v2.5.8 to MLCommons (github.com/mlcommons/ck and github.com/mlcommons/ck-mlops) to benefit everyone!
  • 2021 March: Our ACM TechTalk about "Reproducing 150 Research Papers and Testing Them in the Real World" is available on the ACM YouTube channel.
  • 2021 March: The report from the "Workflows Community Summit: Bringing the Scientific Workflows Community Together" is available on arXiv.
  • 2021 March: Our paper about the CK technology has appeared in Philosophical Transactions A, the world's longest-running scientific journal, in which Newton published: DOI, arXiv.
  • 2020 December: We are honored to join MLCommons as a founding member to accelerate machine learning innovation.
  • 2020 November: We are very excited to announce that we have completed the prototyping phase of our Collective Knowledge framework (CK) and successfully validated it in multiple industrial and academic projects as described in this white paper and the FASTPath'20 presentation. We have helped our partners and the community use CK as an extensible playground to implement reusable components with automation actions for AI, ML, and systems R&D. We used such components to assemble portable workflows from reproduced research papers during our reproducibility initiatives at ML and systems conferences. We then demonstrated that it was possible to use such portable workflows to automate the co-design of efficient software, hardware and models, simplify MLPerf inference benchmark submissions, and quickly deploy emerging AI, ML, and IoT technology in production in the most efficient way (speed, accuracy, energy, costs) across diverse platforms from data centers to edge devices.
  • 2018.March » Preliminary reproducible results from the 1st open ReQuEST tournament on SW/HW co-design of efficient inference (speed, accuracy, costs) are now available online (live scoreboard, CK workflows, workshop program).
  • 2018.February » Summary of our reproducibility activities in 2017 is now available online!
  • 2018.January » See our interactive and arXiv report about CK workflow for collaborative research into multi-objective autotuning and machine learning techniques (funded by Raspberry Pi foundation).
  • 2017.November » We opened a beta repository to find reusable and customizable AI artifacts!
  • 2017.November » A distinguished artifact at the IA3 workshop at SuperComputing'17 was shared using CK!
  • 2017.September » Microsoft sponsors non-profit cTuning foundation
  • 2017.August » ARM presented our technology at the Embedded Vision Summit
  • 2017.June » ACM evaluates our CK technology to share experimental workflows in Digital Libraries
  • 2017.May » We released new open-source tools for collaborative SW/HW co-design of AI algorithms
  • 2017.March » Our CNRS webcast on "Enabling open and reproducible research at computer systems conferences: good, bad and ugly"
  • 2017.March » We released a unique, portable and customizable open-source technology powered by CK to optimize deep learning at all levels across diverse and ever-changing HW/SW stacks from IoT to supercomputers!
  • 2017.February » Our CGO'07 research paper received the "test of time" award - it motivated the development of cTuning's framework to crowdsource optimization!
  • 2017.February » We organized Artifact Evaluation panel at CGO/PPoPP'17 (Monday, 17:15-17:45, Austin, TX, USA)
  • 2017.February » We started preparing AI for collaborative optimization powered by CK: cKnowledge.org/ai
  • 2017.February » We co-authored ACM's policy on Result and Artifact Review and Badging and prepared Artifact Appendices now used at SuperComputing'17!
  • 2017.February » Michel Steuwer (University of Edinburgh) blogged about CK concepts
  • 2017.January » We can now compile and run Caffe (a popular DNN framework) with all dependencies in a unified way using our CK portable workflow framework on Linux, Windows and Android!
  • 2017.January » Catch our team at HiPEAC'17 (Jan.23-25, Stockholm, Sweden)
  • 2017.January » One of the highest ranked public artifacts from the CGO'17 was implemented using our CK framework - see it at GitHub!
  • 2017.January » We wish you a very happy and successful New Year - start it with one of the several exciting internships available at dividiti (Cambridge, UK)!
  • 2016.December » We released a new version of our open-source Android application to crowdsource benchmarking and optimization of various DNN libraries and models (Dec.27) [grab it at Google Play; get the sources from GitHub; see the crowd results (scenario "crowd-benchmark DNN libraries")]
  • 2016.October » We will present our collaborative approach to workload benchmarking at ARM TechCon'16 (Oct.27, Santa Clara, USA)
  • 2016.October » We will present the Collective Knowledge approach for unified artifact sharing at the MozFest'16 Open Science Session (Oct.27, London, UK)
  • 2016.September » We have released CK v1.8.2 with continuous integration, support for farms of machines, new CK documentation, and new Open Science resources at GitHub!
  • 2016.August » Artifact Evaluation for PACT'16 has been successfully completed!
  • 2016.June » Congratulations to Abdul Memon (a PhD student advised by cTuning foundation researchers) for successfully defending his thesis "Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics" at the University of Paris-Saclay. Most of the software, data sets and experiments are not only reproducible but also shared as reusable and extensible components via Collective Mind and CK!
  • 2016.June » Dagstuhl workshop on Engineering Academic Software!
  • 2016.June » Our Collective Knowledge approach for collaborative and reproducible experimentation was presented at the Smart Anything Everywhere Workshop: Enhancing digital transformation in European SMEs!
  • 2016.May » Thanks to a one-year grant from Microsoft, we have moved the Collective Knowledge Repository to the Azure cloud!
  • 2016.May » We presented the Collective Knowledge technology and Artifact Evaluation at ACM, IBM and MIT!
  • 2016.April » We now use our CK to power this website!
  • 2016.March.14 » Thank you for the very positive feedback about the new Artifact Evaluation procedures and the Collective Knowledge concept at CGO/PPoPP'16!
  • 2016.March » We presented our paper "Collective Knowledge: towards R&D sustainability" and demonstrated CK-based crowdtuning results at DATE'16.
  • 2016.March.1 » We have released an updated CK with an Android app to crowdsource GCC/LLVM tuning!
  • 2016.January » ADAPT'16 program with keynote by Ed Plowman (ARM) is now available online - check out Reddit discussions!
  • 2015.November » Dr. Grigori Fursin gave a guest lecture about Collective Knowledge at the University of Manchester [slides]!
  • 2015.November » We co-organized an exciting Dagstuhl Perspectives Workshop on artifact evaluation - the report will follow soon!
  • 2015.September » We have released our Collective Knowledge Framework for collaborative and reproducible R&D [GitHub, live demo]!
  • 2015.March » Dr. Grigori Fursin gave a guest lecture at the University of Copenhagen [slides]!
  • 2015.February » We have received the HiPEAC technology transfer award for validating our new Collective Knowledge Framework and Repository at ARM!