<center>
<span style="font-size:x-large"><span style="color: rgb(178, 34, 34)">Enabling collaborative and reproducible computer systems research with an open publication model</span></span>
</center><p style="text-align: center">[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/ae/ppopp2016.html]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/ae/cgo2016.html]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://adapt-workshop.org/motivation2016.html]] &nbsp; &nbsp;&nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&nbsp; &nbsp;&nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]</p><p style="text-align: center"></p><p style="text-align: center">This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].</p>

== News and upcoming events<br/> ==

* We have released our new, small, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4, aka CK) for collaborative and reproducible R&D: [http://github.com/ctuning/ck sources at GitHub], [http://github.com/ctuning/ck/wiki documentation], [http://cknowledge.org/repo live repository to crowdsource experiments such as multi-objective program autotuning], [https://play.google.com/store/apps/details?id=openscience.crowdsource.experiments Android app to crowdsource experiments]. A minimal usage sketch is shown after this list.
*[http://drops.dagstuhl.de/opus/volltexte/2016/5762 Our Dagstuhl report on Artifact Evaluation]
*[http://cTuning.org/ae Artifact Evaluation for computer systems conferences, workshops and journals]
*[http://cTuning.org/ae/ppopp2016.html PPoPP'16 artifact evaluation]
*[http://cTuning.org/ae/cgo2016.html CGO'16 artifact evaluation]
*[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - successfully featured [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, public Reddit-based discussions and artifact evaluation]
*[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl Perspectives Workshop on artifact evaluation for conferences and journals]
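For readers new to the CK framework announced above, the following is a minimal sketch of how shared experimental entries can be accessed from Python through the CK kernel. It is an illustration only, not an official example: it assumes CK has been installed (for example via pip) and that a repository with shared program entries has already been pulled; the entry name used below ("cbench-automotive-susan") is an assumption and may differ in current releases - see the [http://github.com/ctuning/ck/wiki documentation] for authoritative examples.

<pre>
# Minimal sketch (not an official example): list and load CK "program" entries.
# Assumes CK is installed and a repository with program entries has been pulled.
import ck.kernel as ck

# List all entries of the 'program' module available in local repositories.
r = ck.access({'action': 'list', 'module_uoa': 'program'})
if r['return'] > 0:
    # CK functions report errors via the 'return' code and 'error' string.
    raise RuntimeError(r['error'])
for entry in r.get('lst', []):
    print(entry.get('data_uoa', ''))

# Load the meta-description of one shared benchmark (hypothetical entry name).
r = ck.access({'action': 'load',
               'module_uoa': 'program',
               'data_uoa': 'cbench-automotive-susan'})
if r['return'] > 0:
    raise RuntimeError(r['error'])
print(r['dict'].get('tags', []))
</pre>

The design choice illustrated here is that every CK call goes through a single access() function taking and returning dictionaries, which is what makes shared experimental setups easy to script, re-execute and validate by others.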

== Motivation<br/> ==

Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). <span style="font-size: small">We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:</span>

*developing public and open-source repositories of knowledge (see our pilot live repository [http://cknowledge.org/repo CK] and our vision papers [http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]);
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with whole experimental setups (see our papers [http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]);
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal: [http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]);
*setting up and improving procedures for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal: [http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]);
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences, including CGO and PPoPP.

See our manifesto and history [http://cTuning.org/history.html here].

== Our R&D ==

Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:

*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results, including negative ones
*describing and cataloging whole experimental setups with all related material, including algorithms, benchmarks, codelets, datasets, tools, models and any other artifacts
*developing a specification to preserve experiments, including all software and hardware dependencies
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques
*developing new predictive analytics techniques to explore large design and optimization spaces
*validating and verifying experimental results by the community
*developing common research interfaces for existing and new tools
*developing common experimental frameworks and repositories (to enable automation, re-execution and sharing of experiments)
*sharing rare hardware and computational resources for experimental validation
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
*speeding up analysis of "big" experimental data
*developing new (interactive) visualization techniques for "big" experimental data
*enabling interactive articles
 

== Our events<br/> ==

*[http://cTuning.org/ae Artifact Evaluation procedures for computer systems conferences]
*[http://cTuning.org/ae/ppopp2016.html PPoPP'16 artifact evaluation]
*[http://cTuning.org/ae/cgo2016.html CGO'16 artifact evaluation]
*[http://adapt-workshop.org/index2016.html ADAPT'16] @ HiPEAC'16 - workshop on adaptive self-tuning computer systems
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]
*[http://adapt-workshop.org/2015 ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC "Making computer engineering a science"
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]
*[http://exadapt.org/2012/program.html EXADAPT'12 panel] @ ASPLOS'12
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]
 
== Resources<br/> ==

*[[Reproducibility:Links|Collection of related tools]]
*[[Reproducibility:Initiatives|Collection of related initiatives]]
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]
*[[Reproducibility:Repositories|Collection of public repositories]]
*[[Reproducibility:Lectures|Collection of related lectures]]
*[[Reproducibility:Articles|Collection of related articles]]
*[[Reproducibility:Blogs|Collection of related blogs]]
*[[Reproducibility:Jokes|Collection of jokes]]
*[[Reproducibility:Events|Collection of related events]]
 
== Discussions<br/> ==

*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&D in computer engineering)
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)
 
== Follow us<br/> ==

*[https://twitter.com/c_tuning cTuning foundation Twitter]
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation Facebook page] (recent)
*[http://dividiti.blogspot.com dividiti blog]
*[http://cknowledge.org/repo Collective Knowledge repository] (new)
 
== Archive<br/> ==

*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]
*[http://c-mind.org/repo Collective Mind repository] (outdated)
 
== Acknowledgments<br/> ==

We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [http://dividiti.com dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.
