Our success stories
We believe that our main contribution to the community
is a radically new, holistic, interdisciplinary and
scientific research methodology for computer engineering.
It combines our open-source Collective Knowledge framework
and repository (which lets the community share realistic
programs, workloads, experiment workflows and predictive
models as reusable components with a unified JSON API)
with crowd-benchmarking, multi-objective autotuning,
big-data predictive analytics (statistical analysis,
machine learning and feature selection), run-time adaptation
with multi-versioning, experiment crowdsourcing and
collective intelligence
[DATE'16, CPC'15, Scientific Programming'14, TRUST@PLDI'14].
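The "reusable components with a unified JSON API" idea can be illustrated with a minimal sketch: every component accepts a dict (parsed JSON) describing an action and returns a dict with a return code. The function and key names below are illustrative only, not the actual Collective Knowledge API.

```python
import json

def run_component(component, request):
    """Dispatch a JSON-style request to a reusable component.

    Every component takes a dict (parsed JSON) and returns a dict
    containing a 'return' code, mirroring the unified-API idea above.
    """
    action = request.get("action")
    handler = component.get(action)
    if handler is None:
        return {"return": 1, "error": "unknown action: %s" % action}
    return handler(request)

# A toy "benchmark" component that shares its result as JSON.
benchmark = {
    "run": lambda req: {"return": 0,
                        "program": req["program"],
                        "time_s": 0.42},  # placeholder measurement
}

result = run_component(benchmark, {"action": "run", "program": "susan"})
print(json.dumps(result))
```

Because every component speaks the same dict-in/dict-out protocol, workflows can chain components without bespoke glue code.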
Our collaborative and reproducible computer engineering
methodology, combined with predictive analytics, helped
enable the world's first machine-learning-based self-tuning
compiler (MILEPOST GCC), start the crowdsourcing of
optimization and machine learning, and eventually
initiate artifact evaluation at PPoPP, CGO and ADAPT
(backed by ACM) to validate techniques published at
computer systems conferences, workshops and journals.
Our techniques were published at PLDI, MICRO, CGO, ACM
TACO, CASES, DATE and other major conferences and journals,
included in mainline GCC (one of the most widely used compilers in the world),
received various international awards, and were cited
by ARM (p.17), Fujitsu and IBM.
Timeline
-
Our CGO'07 paper received the "test of time" award at CGO'17
(2017)!
-
We partnered with General Motors to develop a unique, portable and customizable open-source workflow
based on CK to help the community
optimize deep learning at all levels across the diverse and ever-changing HW/SW stack,
from IoT devices to supercomputers!
-
We partnered with ARM to design more efficient computer systems
powered by Collective Knowledge and ARM's workload automation
(2016)!
-
ARM and dividiti
issued a press release about our Collective Knowledge technology
[ PDF (page 17) ]
(2016)!
-
We initiated a crowd-tuning campaign, i.e. crowdsourcing GCC/LLVM tuning
combined with active learning across diverse hardware, including
mobile devices and cloud services provided
by volunteers, using the CK framework. You can see the latest crowd-results
in our live repository
(2016)!
-
Collective Knowledge will be used in a TETRACOM-funded
project to crowdsource compiler bug detection (Imperial College London
and dividiti)
(2016)!
-
Ed Plowman (director of performance analysis strategy at ARM) says "contribute to CK and workload automation"
[ Slides ] (2016)!
-
We presented our CK approach at DATE'16 [ PDF ]
(2016)!
-
Our Collective Knowledge technology
received the HiPEAC technology transfer award
(2015).
-
We received funding from the European Union (TETRACOM)
and ARM to aggregate all our developments
on machine-learning-based autotuning and optimization repositories
within a new, small and portable
Collective Knowledge framework.
We successfully completed the project and released the framework in September 2015
(2015-cur.).
-
We initiated completely open and community-driven reviewing
of publications and artifacts at our ADAPT workshop
(2015-cur.).
-
After nearly a decade of persuading the community, we finally initiated
artifact evaluation
at CGO and PPoPP (major computer systems research conferences)
(2014-cur.)!
-
cTuning technology was
referenced by Fujitsu as closely related to their
long-term initiative on "big-data"-driven
optimization of Exascale computer systems
(2014).
-
cTuning technology has been accepted as a new theme
on reproducible research and experimentation in computer
engineering for the EU HiPEAC network of excellence
[More]
(2012-2016).
-
Grigori Fursin received an INRIA award for "making
an outstanding contribution to research" and a 4-year
fellowship for his cTuning technology
(2012).
-
cTuning technology demonstrated that it is possible to fully
automate the construction of compiler optimization heuristics
and to speed up benchmarking, optimization and co-design
of multi-core reconfigurable systems by several orders
of magnitude, dramatically reducing time to market
for new systems and increasing ROI. IBM considered this
technology the first of its kind in the world (2009)
[M4,
P19,
P28,
press about our project]
(2006-2009).
-
We proposed and implemented crowdsourcing of program
optimization and compiler tuning using
Collective Mind Technology and commodity
mobile phones
[J10,
P28,
P10,
P50]
(2006-2012).
-
Grigori Fursin extended cTuning-based technology
to develop a customized in-house repository of knowledge
while helping to establish the Intel Exascale Lab in France
[I2]
(2010-2011).
-
We added our Interactive
Compilation Interface to mainline GCC 4.6+ to support
our machine-learning-based, plugin-based autotuning,
sponsored by Google
[S17,
M7,
P19,
P28,
F6]
(2005-2009).
-
We helped add our novel run-time adaptation technique
for statically compiled programs, based on code
multi-versioning and fast decision trees, to mainline
GCC 4.8+. It automatically optimizes program performance
and power consumption, without the need for complex
recompilation or JIT infrastructures, across a variety
of systems from mobile devices to data centers with
VMs. Sponsored by Google
[S14,
F6,
P50,
P32,
P28,
P10,
P6]
(2004-2009).
-
Our novel crowd-tuning approach
(crowdsourcing SW/HW optimization), together with the cTuning.org framework with unified interfaces,
a public repository of knowledge,
and machine-learning web services, enabled the world's first
machine-learning-based self-tuning compiler (MILEPOST GCC)
[project website,
new live optimization repository,
framework,
IBM's press release]
(2006-2010).
-
After more than a decade of evangelizing collaborative
and reproducible research in computer engineering with
code and data sharing, the community eventually started
accepting this idea.
We even managed to initiate artifact evaluation
at CGO, PPoPP and ADAPT
(1997-cur.).
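The crowd-tuning campaign mentioned in the timeline boils down to a simple loop: sample a combination of compiler flags, build and time a shared workload on a volunteer's device, and keep the fastest combination. The sketch below shows that loop with a simulated measurement so it runs anywhere; the flag list is a tiny illustrative subset and the timings are not real.

```python
import random

# Candidate GCC-style optimization flags (a tiny illustrative subset).
FLAGS = ["-funroll-loops", "-ftree-vectorize",
         "-fomit-frame-pointer", "-finline-functions"]

def measure(flag_set):
    """Stand-in for compiling and timing a workload on a volunteer device.

    In the real campaign each participant compiles and runs a shared
    benchmark; here we fake a deterministic runtime per flag combination
    so the sketch is runnable without a compiler.
    """
    random.seed(hash(frozenset(flag_set)) & 0xFFFF)
    return 1.0 - 0.05 * len(flag_set) + random.uniform(0.0, 0.1)

def crowd_tune(trials=20):
    """Randomly explore flag combinations, keeping the fastest one."""
    best_flags, best_time = [], measure([])  # baseline: no extra flags
    for _ in range(trials):
        candidate = random.sample(FLAGS, k=random.randint(1, len(FLAGS)))
        t = measure(candidate)
        if t < best_time:  # keep the fastest combination found so far
            best_flags, best_time = candidate, t
    return best_flags, best_time

flags, t = crowd_tune()
print(flags, round(t, 3))
```

In the real system, each volunteer's results are uploaded to the shared repository, so the search is distributed across many devices and datasets rather than run on one machine.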
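The run-time adaptation technique from the timeline (code multi-versioning plus fast decision trees) can be sketched as follows: the compiler clones a hot kernel into several specialized versions, and a small learned predicate picks a version at run time from cheap input features. Everything below is a hypothetical Python illustration of that pattern, not the actual GCC implementation; the threshold of 64 is an arbitrary stand-in for a value learned offline from profiling data.

```python
# Two semantically equivalent "versions" of the same kernel, as
# multi-versioning clones code at compile time.
def dot_simple(xs, ys):
    # Version tuned for short vectors: plain loop, minimal overhead.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

def dot_unrolled(xs, ys):
    # Version tuned for long vectors: 4-way unrolled loop.
    total, n, i = 0.0, len(xs), 0
    while i + 4 <= n:
        total += (xs[i] * ys[i] + xs[i+1] * ys[i+1]
                  + xs[i+2] * ys[i+2] + xs[i+3] * ys[i+3])
        i += 4
    for j in range(i, n):  # handle the remaining tail elements
        total += xs[j] * ys[j]
    return total

def dot(xs, ys):
    """Runtime dispatcher: a one-node 'decision tree' on input size.

    The threshold (64, arbitrary here) would be learned offline from
    profiling runs in the real technique.
    """
    version = dot_unrolled if len(xs) >= 64 else dot_simple
    return version(xs, ys)
```

The key point is that selection happens with one cheap comparison at call time, which is why the technique avoids the cost of recompilation or JIT infrastructure while still adapting to different inputs and machines.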