Revision as of 09:06, 13 April 2016
Enabling collaborative and reproducible computer systems research with an open publication model
This wiki is maintained by the non-profit cTuning foundation.
News and upcoming events
- We have released our new, small, open-source, BSD-licensed Collective Knowledge framework (cTuning v4, aka CK) for collaborative and reproducible R&D: sources at GitHub, documentation, a live repository to crowdsource experiments such as multi-objective program autotuning, and an Android app to crowdsource experiments.
- Our Dagstuhl report on Artifact Evaluation
- Artifact Evaluation for computer systems' conferences, workshops and journals
- PPoPP'16 artifact evaluation
- CGO'16 artifact evaluation
- ADAPT'16 @ HiPEAC'16 - successfully featured our open publication model with community-driven reviewing, public Reddit-based discussions and artifact evaluation
- Dagstuhl perspective workshop on artifact evaluation for conferences and journals
Motivation
Since 2006 we have been working on the reproducibility of experimental results in computer engineering as a side effect of our MILEPOST, cTuning.org, Collective Mind and Collective Knowledge projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:
- developing public and open-source repositories of knowledge (see our pilot live repository CK and our vision papers [1, 2]);
- developing collaborative research and experimentation infrastructure that can share artifacts as reusable components together with whole experimental setups (see our papers [1, 2]);
- evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [arXiv, ACM DL]);
- setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [arXiv, ACM DL]);
- improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;
- supporting and improving Artifact Evaluation for major workshops and conferences including CGO and PPoPP.
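To make the idea of "sharing artifacts as reusable components with described dependencies" concrete, a dependency description might look like the following sketch. This is a purely hypothetical JSON fragment in the spirit of CK's per-artifact metadata, not the actual CK schema; all field names here are illustrative assumptions.

```json
{
  "artifact": "matmul-autotuning-experiment",
  "description": "Autotuning setup for a matrix-multiply benchmark",
  "dependencies": {
    "compiler": { "tags": "compiler,gcc", "version_min": "4.9" },
    "dataset":  { "tags": "dataset,matrix,square", "size": "1024x1024" },
    "python":   { "version_min": "2.7" }
  },
  "commands": {
    "build": "make all",
    "run":   "./matmul --repetitions 10"
  }
}
```

The point of such metadata is that an experimentation framework can automatically resolve the dependencies on a new machine, rebuild the artifact, and rerun the experiment, instead of relying on a human reading a free-form README.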
See our manifesto and history here.
Our R&D
Together with the community and the not-for-profit cTuning foundation we are working on the following topics:
- developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
- describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact
- developing specification to preserve experiments including all software and hardware dependencies
- dealing with variability and the rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques
- developing new predictive analytics techniques to explore large design and optimization spaces
- validating and verifying experimental results by the community
- developing common research interfaces for existing or new tools
- developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)
- sharing rare hardware and computational resources for experimental validation
- implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
- implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
- speeding up analysis of "big" experimental data
- developing new (interactive) visualization techniques for "big" experimental data
- enabling interactive articles
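Several of the topics above, such as dealing with variability of experimental results, come down to simple statistical hygiene: repeat each measurement, report a dispersion estimate, and only compare means whose confidence intervals are tight enough. A minimal, self-contained sketch of that practice (not code from the CK framework; the function name and the normal-approximation interval are our illustrative assumptions):

```python
import statistics

def summarize_runs(times, z=1.96):
    """Summarize repeated measurements of one experiment.

    Returns the sample size, mean, sample standard deviation and an
    approximate 95% confidence interval for the mean (normal
    approximation; z=1.96 for 95%).
    """
    n = len(times)
    mean = statistics.mean(times)
    sd = statistics.stdev(times) if n > 1 else 0.0
    half = z * sd / (n ** 0.5) if n > 1 else 0.0
    return {"n": n, "mean": mean, "sd": sd,
            "ci95": (mean - half, mean + half)}

# Simulated wall-clock times (seconds) for 8 runs of the same program;
# in a real experiment one would keep repeating runs until the
# confidence interval is narrow enough for the comparison at hand.
runs = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
s = summarize_runs(runs)
print("mean=%.3fs  ci95=(%.3f, %.3f)" % (s["mean"], s["ci95"][0], s["ci95"][1]))
```

Reporting an interval rather than a single "best" number is what makes two independently reproduced results comparable at all: if the intervals of two optimization variants overlap heavily, the measured speedup may be noise.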
Our events
- PPoPP'15 artifact evaluation
- CGO'15 artifact evaluation
- ADAPT'15 @ HiPEAC'15 - workshop on adaptive self-tuning computer systems
- ADAPT'14 @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ program and publications ]
- Special journal issue on Reproducible Research Methodologies at IEEE TETC
- ACM SIGPLAN TRUST'14 @ PLDI'14
- REPRODUCE'14 @ HPCA'14
- ADAPT'14 panel @ HiPEAC'14
- HiPEAC'13 CSW thematic session @ ACM ECRC "Making computer engineering a science"
- HiPEAC'12 CSW thematic session
- ASPLOS/EXADAPT'12 panel @ ASPLOS'12
- cTuning lectures (2008-2010)
- GCC Summit'09 discussion
Resources
- Collection of related tools
- Collection of related initiatives
- Collection of related benchmarks and data sets
- Collection of public repositories
- Collection of related lectures
- Collection of related articles
- Collection of related blogs
- Collection of jokes
- Collection of related events
Discussions
- LinkedIn group on reproducible research
- Main mailing list (general collaborative and reproducible R&D in computer engineering)
- cTuning foundation mailing list (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)
Follow us
- cTuning foundation twitter
- cTuning foundation facebook page (recent)
- dividiti blog
- Collective Knowledge repository (new)
Archive
- Outdated cTuning wiki page related to reproducible research and open publication model
- Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ database, web-service for online prediction of optimizations ]
- Collective Mind repository (outdated)
Acknowledgments
We would like to thank our colleagues from the cTuning foundation, dividiti, artifact-eval.org and the OCCAM project for their help, feedback, participation and support.