<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://ctuning.org/cm/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gfursin</id>
		<title>Collective Mind v1 - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://ctuning.org/cm/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gfursin"/>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Special:Contributions/Gfursin"/>
		<updated>2026-04-28T17:41:03Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.25.1</generator>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE&amp;diff=874</id>
		<title>Reproducibility:AE</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE&amp;diff=874"/>
				<updated>2016-09-14T08:32:32Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#externalredirect: http://cTuning.org/ae}}&lt;br /&gt;
&lt;br /&gt;
This page moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science  GitHub].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation in Computer Engineering&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:x-small&amp;quot;&amp;gt;[ [[Reproducibility|back to main page]] ]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
Our mission is to push artifact evaluation to major conferences and journals while improving the process of sharing, reviewing and reusing artifacts.&lt;br /&gt;
&lt;br /&gt;
We organized AE for the following events:&lt;br /&gt;
&lt;br /&gt;
*'''[[Reproducibility:AE:PPoPP2016|PPoPP'16]]'''&lt;br /&gt;
*'''[[Reproducibility:AE:CGO2016|CGO'16]]'''&lt;br /&gt;
*'''[http://www.adapt-workshop.org/2015/ ADAPT'15]'''&lt;br /&gt;
*'''[[Reproducibility:AE:PPoPP2015|PPoPP'15]]'''&lt;br /&gt;
*'''[[Reproducibility:AE:CGO2015|CGO'15]]'''&lt;br /&gt;
*[http://www.adapt-workshop.org/2014 '''ADAPT'14''']&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=873</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=873"/>
				<updated>2016-09-14T08:27:01Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#externalredirect: https://github.com/ctuning/ck/wiki/Enabling-open-science}}&lt;br /&gt;
&lt;br /&gt;
This page moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=872</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=872"/>
				<updated>2016-09-13T13:50:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science  GitHub].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Events&amp;diff=871</id>
		<title>Reproducibility:Events</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Events&amp;diff=871"/>
				<updated>2016-09-13T13:50:12Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-events  GitHub].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-events  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Jokes&amp;diff=870</id>
		<title>Reproducibility:Jokes</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Jokes&amp;diff=870"/>
				<updated>2016-09-13T13:44:27Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-jokes  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Blogs&amp;diff=869</id>
		<title>Reproducibility:Blogs</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Blogs&amp;diff=869"/>
				<updated>2016-09-13T13:40:01Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-blogs  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=868</id>
		<title>Reproducibility:Articles</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=868"/>
				<updated>2016-09-13T13:36:21Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-articles  GitHub].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-articles  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Lectures&amp;diff=867</id>
		<title>Reproducibility:Lectures</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Lectures&amp;diff=867"/>
				<updated>2016-09-13T13:25:32Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-lectures  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=866</id>
		<title>Reproducibility:Repositories</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=866"/>
				<updated>2016-09-13T13:22:25Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-repos  GitHub].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-repos  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Datasets&amp;diff=865</id>
		<title>Reproducibility:Datasets</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Datasets&amp;diff=865"/>
				<updated>2016-09-13T13:19:01Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-datasets  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Initiatives&amp;diff=864</id>
		<title>Reproducibility:Initiatives</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Initiatives&amp;diff=864"/>
				<updated>2016-09-13T13:10:39Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-initiatives  GitHub].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-initiatives  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=863</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=863"/>
				<updated>2016-09-13T13:10:27Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved to [https://github.com/ctuning/ck/wiki/Enabling-open-science-tools  GitHub].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=862</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=862"/>
				<updated>2016-09-13T13:04:12Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Replaced content with &amp;quot;Moved [https://github.com/ctuning/ck/wiki/Enabling-open-science-tools  here].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Moved [https://github.com/ctuning/ck/wiki/Enabling-open-science-tools  here].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=861</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=861"/>
				<updated>2016-04-13T09:09:00Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Our events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/ae/ppopp2016.html]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/ae/cgo2016.html]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://adapt-workshop.org/motivation2016.html]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* We have released our new, small, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck Sources at GitHub], [http://github.com/ctuning/ck/wiki documentation], [http://cknowledge.org/repo live repository to crowdsource experiments such as multi-objective program autotuning], [https://play.google.com/store/apps/details?id=openscience.crowdsource.experiments Android app to crowdsource experiments].&lt;br /&gt;
*[http://drops.dagstuhl.de/opus/volltexte/2016/5762 Our Dagstuhl report on Artifact Evaluation]&lt;br /&gt;
*[http://cTuning.org/ae Artifact Evaluation for computer systems' conferences, workshops and journals]&lt;br /&gt;
*[http://cTuning.org/ae/ppopp2016.html PPoPP'16 artifact evaluation]&lt;br /&gt;
*[http://cTuning.org/ae/cgo2016.html CGO'16 artifact evaluation]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - successfully featured [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, public Reddit-based discussions and artifact evaluation]&lt;br /&gt;
*[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source repositories of knowledge (see our pilot live repository [http://cknowledge.org/repo CK] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;&lt;br /&gt;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences including CGO and PPoPP.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history.html here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property IP and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://cTuning.org/ae Artifact Evaluation procedures for computer systems conferences]&lt;br /&gt;
*[http://cTuning.org/ae/ppopp2016.html PPoPP'16 artifact evaluation]&lt;br /&gt;
*[http://cTuning.org/ae/cgo2016.html CGO'16 artifact evaluation]&lt;br /&gt;
*[http://adapt-workshop.org/index2016.html ADAPT'16] @ HiPEAC'16 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org/2015 ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [http://dividiti.com dividiti], [http://www.artifact-eval.org artifact-eval.org colleagues], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=860</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=860"/>
				<updated>2016-04-13T09:06:09Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/ae/ppopp2016.html]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/ae/cgo2016.html]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://adapt-workshop.org/motivation2016.html]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* We have released our new, small, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck Sources at GitHub], [http://github.com/ctuning/ck/wiki documentation], [http://cknowledge.org/repo live repository to crowdsource experiments such as multi-objective program autotuning], [https://play.google.com/store/apps/details?id=openscience.crowdsource.experiments Android app to crowdsource experiments].&lt;br /&gt;
*[http://drops.dagstuhl.de/opus/volltexte/2016/5762 Our Dagstuhl report on Artifact Evaluation]&lt;br /&gt;
*[http://cTuning.org/ae Artifact Evaluation for computer systems' conferences, workshops and journals]&lt;br /&gt;
*[http://cTuning.org/ae/ppopp2016.html PPoPP'16 artifact evaluation]&lt;br /&gt;
*[http://cTuning.org/ae/cgo2016.html CGO'16 artifact evaluation]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - successfully featured [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, public Reddit-based discussions and artifact evaluation]&lt;br /&gt;
*[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source repositories of knowledge (see our pilot live repository [http://cknowledge.org/repo CK] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;&lt;br /&gt;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences including CGO and PPoPP.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history.html here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property IP and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [http://dividiti.com dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Main_Page&amp;diff=859</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Main_Page&amp;diff=859"/>
				<updated>2016-04-01T21:08:56Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&lt;br /&gt;
'''''In September 2015, we released a brand new Collective Knowledge framework for collaborative, systematic and reproducible computer systems research, and moved all further development to GitHub: [http://github.com/ctuning/ck src], [http://github.com/ctuning/ck/wiki docs]'''''&lt;br /&gt;
&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
'''This page has not been updated since summer 2015 - see this [http://cTuning.org/reproducibility-wiki wiki] instead!'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Collective Mind&amp;lt;br/&amp;gt;&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;''towards collaborative, systematic and reproducible computer engineering''&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;1&amp;quot; width=&amp;quot;1118&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;170&amp;quot; | &amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Validate by community.png|Validate by community.png|link=http://cTuning.org/reproducibility]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:CTuning foundation logo1.png|none|CTuning foundation logo1.png|link=http://cTuning.org]]&amp;lt;/p&amp;gt;&lt;br /&gt;
| rowspan=&amp;quot;1&amp;quot; | &lt;br /&gt;
We are a group of researchers working with the community on a new methodology, infrastructure and repository to enable collaborative and reproducible research and experimentation in computer engineering, as a side effect of our projects on combining performance/energy/size auto-tuning with run-time adaptation, crowdsourcing, big data and predictive analytics ('''see our [http://cTuning.org/history manifesto and history]'''). Our approach, in turn, helped to enable a [http://c-mind.org/reproducibility new publication model] where all research material (code and data artifacts) is shared along with articles so that it can be continuously discussed, validated and improved by the community! To evangelize this community-driven approach and set an example, we have been releasing all our benchmarks, data sets, predictive models and tools with unified interfaces since 2007, first at [http://cTuning.org cTuning.org] and later at [http://c-mind.org/repo c-mind.org/repo]. We use the [http://adapt-workshop.org ADAPT workshop] on self-tuning computing systems to validate our new research and publication model. We hope that it can complement recent academic initiatives on reproducible research at major conferences while focusing more on the [http://c-mind.org/reproducibility technological aspects] of collaborative and reproducible research in computer engineering (rather than just sharing and validating artifacts).&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;''This R&amp;amp;D is supported by [http://cTuning.org the cTuning foundation].''&amp;lt;/p&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Our long-term vision =&lt;br /&gt;
&lt;br /&gt;
With the rapid advances in information technology and all other fields of science comes dramatic growth in the amount of data to process (&amp;quot;big data&amp;quot;). Scientists, engineers and students are drowning in experimental data and often have to divert their research path towards data management, mining and visualization. Such work requires additional interdisciplinary skills including statistical analysis, machine learning, programming and parallelization, database management, and Internet technologies, which few researchers have or can afford to learn in parallel with their main research. Multiple frameworks, languages and public data repositories have appeared recently to enable collaborative data analysis and processing, but they often either cover very narrow research topics and are too simplistic (just data and code sharing), or are very formal and still require special programming skills, often including object-oriented programming.&lt;br /&gt;
&lt;br /&gt;
Collective Mind technology (cM) attempts to fill this gap by providing researchers and companies with a simple, portable, technology-neutral and practically transparent way to gradually systematize and classify all their data, code and tools. The open-source cM framework and repository relies on customizable public or private plugins (mostly written in Python, with support for any other language through the OpenME interface) to gradually describe and classify similar data and code objects, or abstract the interfaces of ever-changing tools, thus effectively protecting researchers' experimental setups. cM makes it easy to preserve any complex research artifact (collections of files, benchmarks, codelets, datasets, tools, traces, models) with a gradually and easily extensible JSON-based meta-description including classification, properties and either direct or semantic data connections. Furthermore, the meta-descriptions of all data can be transparently indexed using third-party [http://www.elasticsearch.org ElasticSearch], enabling very fast and complex queries. At the same time, all research artifacts can be exposed to any public or workgroup user through unified web services to crowdsource experimentation, ranking, online learning and knowledge management.&lt;br /&gt;
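For illustration, a cM-style JSON meta-description of a shared artifact might look like this minimal Python sketch (the field names here are hypothetical, chosen for illustration, and are not the actual cM schema):&lt;br /&gt;

```python
import json

# Hypothetical cM-style meta-description of a shared benchmark artifact.
# Field names are illustrative only, not the actual cM schema.
meta = {
    "data_uoa": "cbench-automotive-susan",   # unique artifact name
    "classification": ["benchmark", "image-processing"],
    "properties": {"language": "C", "dataset_tags": ["pgm", "image"]},
    "connections": {"datasets": ["cdataset-image-pgm-0001"]},
}

# Such records are plain JSON, so they can be stored, shared and
# indexed (e.g. by an external search engine) without special tooling.
serialized = json.dumps(meta, indent=2)
record = json.loads(serialized)

# A trivial in-memory query over a list of records, mimicking fast
# artifact lookup by classification:
repo = [record]
hits = [r["data_uoa"] for r in repo if "benchmark" in r["classification"]]
print(hits)
```

Because the meta-description is ordinary JSON, any third-party indexer can consume it as-is; the in-memory filter above only stands in for the kind of query such an index would answer.&lt;br /&gt;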
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:C-mind-picture.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
cM uses an agile top-down methodology originating from physics to represent any experimental scenario and gradually decompose it into connected plugins with associated data, or compose it from already shared plugins, similar to &amp;quot;research LEGO&amp;quot;. This universal structure immediately enables a replay mode for any experiment, making the framework suitable for recent projects on reproducibility of experimental results and for a new publication model where experiments and techniques are validated, ranked and improved by the community. For example, we easily moved all our past R&amp;amp;D on program and architecture multi-objective auto-tuning, co-design and dynamic adaptation to cM plugins and are gradually making them available together with all research artifacts at [http://c-mind.org/repo http://c-mind.org/repo]. We hope that cM will be useful to a broad range of researchers and companies, either as an open-source, community-driven solution to systematize their research and experimentation, or possibly as an intermediate step before investing in more complex or commercial knowledge management systems.&lt;br /&gt;
&lt;br /&gt;
''Here is [[Reproducibility:Links|our list of links]] to initiatives, publications, tools and techniques related to collaborative and reproducible research, experimentation and development in computer engineering.''&lt;br /&gt;
&lt;br /&gt;
==== Related Collective Mind publications and presentations ====&lt;br /&gt;
&lt;br /&gt;
*Our new open publication model proposal (2014) ([http://dl.acm.org/citation.cfm?id=2618142 ACM pdf], [http://arxiv.org/abs/1406.4020 arXiv pdf]) - it summarizes our practical experience with sharing and reviewing experimental results and research artifacts since 2007; we plan to validate it at our [http://adapt-workshop.org ADAPT'15]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit publication (2009)] - introducing our vision on reproducible research and describing cTuning.org framework for collaborative and reproducible program and architecture analysis, optimization and co-design (all code and data for machine learning based compiler MILEPOST GCC have been publicly shared at [http://cTuning.org cTuning.org])&lt;br /&gt;
*[http://arxiv.org/abs/1308.2410 INRIA/arXiv technical report (2013)] - introducing long term Collective Mind vision; considerably updated journal version will be available in Fall 2014&lt;br /&gt;
*[http://c-mind.org/repo/?view_cid=shared1:dissemination.publication:530e5f456ea259de ACM TACO publication (2012)] - introducing crowdtuning (crowdsourcing auto-tuning)&lt;br /&gt;
*[http://c-mind.org/repo/?view_cid=shared1:dissemination.publication:a31e374796869125 IJPP publication (2011)] - introducing machine learning based compiler, cTuning.org and reproducible R&amp;amp;D on program and architecture optimization&lt;br /&gt;
*[http://www.slideshare.net/GrigoriFursin/presentation-fursin-hpsc2013fursin1 Long term vision slides] - &amp;quot;Systematizing tuning of computer systems using crowdsourcing and statistics&amp;quot;&lt;br /&gt;
*[http://www.slideshare.net/GrigoriFursin/presentation-fursin-hpsc2013fursin2 cM basics slides] - &amp;quot;Collective Mind infrastructure and repository to crowdsource auto-tuning&amp;quot;&lt;br /&gt;
&lt;br /&gt;
= Public repository of knowledge =&lt;br /&gt;
&lt;br /&gt;
''Do not waste your research material - use Collective Mind Framework and Repository to describe, run and share your experiments with the community!''&lt;br /&gt;
&lt;br /&gt;
*[http://c-mind.org/repo Beta live Collective Mind repository] (3rd generation, opened in 2013, substituting the previous cTuning repository and infrastructure available since 2008) - we described and shared all our past research developments, codelets, benchmarks, data sets, models, statistical analysis, modeling and online learning plugins and tools to start top-down analysis and optimization of existing computer systems. We used it as the first practical example to motivate a new publication model where all research artifacts are continuously shared, validated and improved by the community. After many years, the community finally seems to be moving in this direction, and we even see related initiatives at major conferences including OOPSLA and PLDI. We believe that our project and the feedback from the community collected since 2006 are complementary and can help with various technological aspects of collaborative and reproducible research in computer engineering.&lt;br /&gt;
&lt;br /&gt;
= Common infrastructure and support tools =&lt;br /&gt;
&lt;br /&gt;
*[[Tools:CM|Collective Mind Infrastructure (cM)]] - plugin-based framework and repository for collaborative, systematic and reproducible research and experimentation&lt;br /&gt;
**[[Tools:OpenME|OpenME]] - universal and simple event-based interface to &amp;quot;open up&amp;quot; black box applications and third-party tools such as GCC, LLVM and Open64 to be able to monitor, learn and predict any fine-grain optimization decision inside through external plugins&lt;br /&gt;
**[[Tools:Alchemist|Alchemist]] - OpenME plugin to convert compilers into interactive analysis and optimization toolsets&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/cm/wiki/Events%3ATRUST2014 Workshop TRUST 2014 on reproducible research methodologies and new publication models] @ PLDI 2014 (Edinburgh, UK)&lt;br /&gt;
*[http://www.occamportal.org/reproduce Workshop REPRODUCE 2014 on reproducible research methodologies and new publication models] @ HPCA 2014 (Orlando, Florida, USA)&lt;br /&gt;
*[http://adapt-workshop.org/program.htm Panel on reproducible research methodologies and new publication models] at ADAPT 2014 @ HiPEAC 2014 (Vienna, Austria)&lt;br /&gt;
*[http://www.hipeac.net/thematic-session/making-computer-engineering-science Thematic session on making computer engineering a science] @&amp;amp;nbsp; ACM ECRC 2013 / HiPEAC computing week 2013 (Paris, France)&lt;br /&gt;
*[http://www.hipeac.net/thematic-session/collective-characterization-optimization-and-design-computer-systems Thematic session on collective characterization, optimization and design of computer systems] @ HiPEAC spring computing week 2012&amp;amp;nbsp; (Goteborg, Sweden)&lt;br /&gt;
*[http://www.hipeac.net/conference/pisa/speedup Tutorial on Speedup-Test: Statistical Methodology to Evaluate Program Speedups and their Optimisation Techniques] @ HiPEAC 2010 (Pisa, Italy)&lt;br /&gt;
*[http://c-mind.org/repo/?view_cid=77154d189d2e226c:0053bdf524fb9a58 Tutorial on cTuning tools for collaborative and reproducible program and architecture characterization and auto-tuning] @ HiPEAC computing systems week 2009 (Infineon, Munich, Germany)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 Public discussion on collaborative and reproducible analysis, design and optimization of computer systems] @ GCC Summit 2009 (Montreal, Canada)&lt;br /&gt;
&lt;br /&gt;
= Current customized usage scenarios =&lt;br /&gt;
&lt;br /&gt;
Designing novel many-core computer systems is becoming intolerably complex, ad-hoc, costly and error-prone due to limitations of available technology, the enormous number of available design and optimization choices, and complex interactions between all software and hardware components. Empirical auto-tuning combined with run-time adaptation and machine learning has demonstrated good potential to address the above challenges for more than a decade, but is still far from widespread production use due to unbearably long exploration and training times, ever-changing tools and their interfaces, the lack of a common experimental methodology, and the lack of unified mechanisms for knowledge building and exchange apart from publications, where reproducibility of results is often not even considered. Since 1993, we have spent more time preparing and analyzing huge amounts of heterogeneous experiments for self-tuning machine-learning based computer systems, or trying to validate and reproduce others' research results, than on extending our novel ideas.&lt;br /&gt;
&lt;br /&gt;
In 2007, we decided to start collaborative systematization and unification of the design and optimization of computer systems, combined with a [http://cTuning.org/cm-journal new publication model] where experimental results are validated by the community. One promising solution is to combine a public repository of knowledge with online auto-tuning, machine learning and crowdsourcing techniques, where the HiPEAC and cTuning communities already have good practical experience. Such a collaborative approach should allow the community to continuously validate, systematize and improve collective knowledge about computer systems, and extrapolate it to build faster, more power-efficient and reliable computer systems. It can also help to restore the attractiveness of computer engineering, making it a more systematic and rigorous discipline rather than &amp;quot;hacking&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
We are developing the cTuning collaborative research and development infrastructure and repository (the current version is cTuning3, aka Collective Mind) that enables:&lt;br /&gt;
&lt;br /&gt;
*gradual decomposition and parametrization of complex computer systems and experiments into unified and inter-connected Collective Mind modules (components or plugins) with extensible meta-information&lt;br /&gt;
*easy coexistence of multiple versions of tools and libraries&lt;br /&gt;
*implementation of experimental pipelines with all related artifacts necessary for collaborative and reproducible research and experimentation&lt;br /&gt;
*collection and sharing of statistics, benchmarks, codelets, tools, data sets and predictive models from the community&lt;br /&gt;
*systematization of optimization, design space exploration and run-time adaptation techniques (co-design and auto-tuning)&lt;br /&gt;
*collaborative evaluation and improvement of various data mining, classification and predictive modeling techniques for off-line and on-line auto-tuning&lt;br /&gt;
*new publication model (workshops, conferences, journals) with validation of experimental results by the community&lt;br /&gt;
&lt;br /&gt;
The current cM version includes public benchmarks, datasets, tools, techniques and some stats from past [http://cTuning.org/lab/people/gfursin Grigori Fursin's research]:&lt;br /&gt;
&lt;br /&gt;
*support for most OSes and platforms (Linux, Android, Windows; servers, cloud nodes, mobiles, laptops, tablets, supercomputers)&lt;br /&gt;
*multiple benchmarks (cBench, polybench, SPEC95, SPEC2000, SPEC2006, EEMBC, etc.), hundreds of MILEPOST/CAPS codelets, thousands of cBench datasets&lt;br /&gt;
*multiple compilers (GCC, LLVM, Open64, PathScale, Intel, IBM, PGI)&lt;br /&gt;
*tools for program and architecture characterization (MILEPOST GCC for semantic features and code patterns; hardware counters for dynamic analysis)&lt;br /&gt;
*plugins for powerful visualization and data export in various formats&lt;br /&gt;
*experimental pipeline for universal program and architecture co-design, auto-tuning, performance/energy modeling and machine learning&lt;br /&gt;
*OpenME interface to instrument programs or statically enable adaptive binaries through multi-versioning and decision trees for run-time adaptation/scheduling, while easily mixing CPU/CUDA/OpenCL codelets or any other heterogeneous programming models&lt;br /&gt;
*plugins for online auto-tuning and performance model building&lt;br /&gt;
*machine-learning enabled self-tuning cTuning CC compiler that can wrap any existing compiler while using crowd-tuning and collective knowledge to continuously improve its own behavior&lt;br /&gt;
*plugins for universal P2P data exchange through cM web services&lt;br /&gt;
*optimization statistics for various ARM, Intel and NVidia chips&lt;br /&gt;
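The multi-versioning approach from the list above can be sketched as follows (a minimal, hypothetical Python illustration of picking among pre-compiled code versions at run time via a simple decision rule standing in for a learned decision tree; this is not the actual OpenME API):&lt;br /&gt;

```python
# Hypothetical sketch of run-time adaptation via multi-versioning:
# several versions of the same kernel are prepared ahead of time and a
# simple decision rule picks one per call based on input features.

def kernel_baseline(data):
    # Baseline scalar version.
    return [x * 2 for x in data]

def kernel_blocked(data, block=4):
    # Blocked version that would pay off on larger inputs.
    out = []
    for i in range(0, len(data), block):
        out.extend(x * 2 for x in data[i:i + block])
    return out

def select_version(n):
    # Threshold learned offline, e.g. from auto-tuning statistics;
    # the value 8 here is purely illustrative.
    return kernel_blocked if n >= 8 else kernel_baseline

def adaptive_kernel(data):
    # Dispatch to the chosen version at run time.
    return select_version(len(data))(data)

print(adaptive_kernel([1, 2, 3]))        # small input uses the baseline
print(adaptive_kernel(list(range(10))))  # large input uses the blocked version
```

In a real deployment the two kernels would be distinct compiled binaries and the selection rule a decision tree trained on collected optimization statistics, but the dispatch structure is the same.&lt;br /&gt;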
&lt;br /&gt;
&lt;br /&gt;
See [http://cTuning.org/reproducibility-wiki reproducibility wiki for further details].&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=858</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=858"/>
				<updated>2015-10-27T11:54:02Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[https://www.docker.io Docker tool] - pack, ship and run applications as a lightweight container&lt;br /&gt;
*[http://www.virtualbox.org VirtualBox] - create VM images&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] - automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool] - Comprehensive Archiver for Reproducible Execution&lt;br /&gt;
*[http://rr-project.org RR] - Mozilla project: records nondeterministic executions and debugs them deterministically&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] - automatically create portable Linux applications with all dependencies&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] - a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document&lt;br /&gt;
*[http://www.rstudio.com R-studio] - open source and enterprise-ready professional software for R&lt;br /&gt;
*[https://www.codalab.org CodaLab] - an experimental platform for collaboration and competition&lt;br /&gt;
*[https://www.grid5000.fr Grid5000]  - large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] - managing research in a cloud&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] - online workflows&lt;br /&gt;
*[https://www.aptlab.net AptLab] - online workflows&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] - designing and executing workflows&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] - open-source software for volunteer computing and grid computing&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] - integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq&lt;br /&gt;
*[http://www.seek4science.org SEEK for Science] - finding, sharing and exchanging Data, Models, Simulations and Processes in Science&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] - a process &amp;amp; Infrastructure for Distributed, continuous Quality assurance&lt;br /&gt;
*[http://nepi.inria.fr NEPI] - simplifying network experimentation&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] - rethinking the Electronic Lab Notebook&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] - towards collaborative and reproducible performance autotuning via public repository of optimization knowledge, crowdsourcing, machine learning and collective intelligence (2006-cur.)&lt;br /&gt;
&lt;br /&gt;
=== '''Collective Knowledge Framework''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Framework] - a tiny, open-source Python application (~200 KB) that helps researchers decompose their ad-hoc experimental workflows, benchmarks, data sets, scripts and VM/Docker images into reusable and customizable components with Python wrappers and a unified JSON API, while using already installed and possibly proprietary tools, benchmarks and data sets:&lt;br /&gt;
&lt;br /&gt;
**[http://cknowledge.org/repo example of a CK-based interactive article with reproducible OpenCL autotuning experiments]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
**[http://github.com/ctuning/ck/wiki Full documentation]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=857</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=857"/>
				<updated>2015-10-27T11:49:36Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events ==&lt;br /&gt;
&lt;br /&gt;
* ''We have released our new, open-source, BSD-licensed Collective Knowledge framework (cTuning 4, aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open-source Collective Mind repositories of knowledge (see our pilot live repositories [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material at workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;&lt;br /&gt;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences including CGO and PPoPP.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments, including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:PPoPP2016&amp;diff=856</id>
		<title>Reproducibility:AE:PPoPP2016</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:PPoPP2016&amp;diff=856"/>
				<updated>2015-10-27T09:38:55Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#externalredirect: http://cTuning.org/ae/ppopp2016.html}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;15&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align: right width: 200px&amp;quot; | [[File:Ae-stamp-ppopp2015.png]]&amp;lt;br/&amp;gt;&lt;br /&gt;
| &amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:xx-large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation for PPoPP'16&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;[ [http://conf.researchr.org/home/PPoPP-2016 Back to PPoPP'16 conference website] ]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 97%&amp;quot; border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &amp;lt;div class=&amp;quot;span4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Important dates&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Paper decision: '''9 Nov 2015'''&lt;br /&gt;
*Artifact submission: '''20 Nov 2015 AoE'''&lt;br /&gt;
*Technical clarification: '''14-20 Dec 2015'''&lt;br /&gt;
*Decision announced: '''22 Dec 2015'''&lt;br /&gt;
*Public discussion: '''14 March 2016'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Packaging guidelines&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AE for CGO/PPoPP'15 used the following [http://www.artifact-eval.org/guidelines.html guidelines for artifacts]. We are preparing a revised version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;How to submit&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please read the [http://www.artifact-eval.org/guidelines.html guidelines] on what to submit, then upload your submission to [https://easychair.org/conferences/?conf=aecgoppopp2016 EasyChair] (joint CGO/PPoPP submission).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Review process&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The review process is described in detail [http://www.artifact-eval.org/review-process.html here]. We are preparing a revised version.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
| style=&amp;quot;width: 33%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation Committee (AEC)&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Chairs:'''&lt;br /&gt;
&lt;br /&gt;
*[http://cTuning.org/lab/people/gfursin Grigori Fursin], cTuning foundation, France&lt;br /&gt;
*[http://people.cs.pitt.edu/%7Echilders Bruce R. Childers], University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
'''Committee (joint with CGO'16):'''&lt;br /&gt;
&lt;br /&gt;
*'''TBA'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Prior Artifact Evaluation&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last year, PPoPP 2015 successfully ran an [[Reproducibility:AE:PPoPP2015|artifact evaluation process]] for the first time. PPoPP 2016 continues this experiment.&lt;br /&gt;
&lt;br /&gt;
Artifacts have already been evaluated at several other conferences and workshops, including the recent [http://cTuning.org/event/ae-cgo2015 CGO 2015], [http://pldi14-aec.cs.brown.edu PLDI 2014], [http://2014.splashcon.org/track/splash2014-artifacts OOPSLA 2014] and [http://adapt-workshop.org/2014 ADAPT 2014]. Our eventual goal is to gradually and collaboratively develop a common evaluation methodology.&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Discussions&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We consider AE a continuous learning process. If you have questions, comments or suggestions on how to improve the packaging and reviewing process, please get in touch!&lt;br /&gt;
&lt;br /&gt;
Our presentation about the Artifact Evaluation experience at CGO/PPoPP'15 is available online [http://www.slideshare.net/GrigoriFursin/presentation-fursin-aecgoppopp2015 here].&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:CGO2016&amp;diff=855</id>
		<title>Reproducibility:AE:CGO2016</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:CGO2016&amp;diff=855"/>
				<updated>2015-10-27T09:35:22Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#externalredirect: http://cTuning.org/ae/cgo2016.html}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;15&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align: right width: 200px&amp;quot; | [[File:Ae-stamp-cgo.png]]&amp;lt;br/&amp;gt;&lt;br /&gt;
| &amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:xx-large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation for CGO'16&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;[ [http://cgo.org/cgo2016 Back to CGO'16 conference website] ]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 97%&amp;quot; border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &amp;lt;div class=&amp;quot;span4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Important dates&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Paper decision: '''8 Nov 2015'''&lt;br /&gt;
*Artifact submission: '''20 Nov 2015 AoE'''&lt;br /&gt;
*Technical clarification: '''14-20 Dec 2015'''&lt;br /&gt;
*Decision announced: '''22 Dec 2015'''&lt;br /&gt;
*Public discussion: '''14 March 2016'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Packaging guidelines&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AE for CGO/PPoPP'15 used the following [http://www.artifact-eval.org/guidelines.html guidelines for artifacts]. We are preparing a revised version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;How to submit&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please read the [http://www.artifact-eval.org/guidelines.html guidelines] on what to submit, then upload your submission to [https://easychair.org/conferences/?conf=aecgoppopp2016 EasyChair] (joint CGO/PPoPP submission).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Review process&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The review process is described in detail [http://www.artifact-eval.org/review-process.html here]. We are preparing a revised version.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
| style=&amp;quot;width: 33%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation Committee (AEC)&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Chairs:'''&lt;br /&gt;
&lt;br /&gt;
*[http://cTuning.org/lab/people/gfursin Grigori Fursin], cTuning foundation, France&lt;br /&gt;
*[http://people.cs.pitt.edu/%7Echilders Bruce R. Childers], University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
'''Committee (joint with PPoPP'16):'''&lt;br /&gt;
&lt;br /&gt;
*'''TBA'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Prior Artifact Evaluation&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last year, CGO 2015 successfully ran an [[Reproducibility:AE:CGO2015|artifact evaluation process]] for the first time. CGO 2016 continues this experiment.&lt;br /&gt;
&lt;br /&gt;
Artifacts have already been evaluated at several other conferences and workshops, including the recent [http://cTuning.org/event/ae-ppopp2015 PPoPP 2015], [http://pldi14-aec.cs.brown.edu PLDI 2014], [http://2014.splashcon.org/track/splash2014-artifacts OOPSLA 2014] and [http://adapt-workshop.org/2014 ADAPT 2014]. Our eventual goal is to gradually and collaboratively develop a common evaluation methodology.&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Discussions&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We consider AE a continuous learning process. If you have questions, comments or suggestions on how to improve the packaging and reviewing process, please get in touch!&lt;br /&gt;
&lt;br /&gt;
Our presentation about the Artifact Evaluation experience at CGO/PPoPP'15 is available online [http://www.slideshare.net/GrigoriFursin/presentation-fursin-aecgoppopp2015 here].&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:CGO2016&amp;diff=854</id>
		<title>Reproducibility:AE:CGO2016</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:AE:CGO2016&amp;diff=854"/>
				<updated>2015-10-27T09:34:52Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [http://cTuning.org/ae/cgo2016.html]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;15&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align: right width: 200px&amp;quot; | [[File:Ae-stamp-cgo.png]]&amp;lt;br/&amp;gt;&lt;br /&gt;
| &amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:xx-large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation for CGO'16&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;[ [http://cgo.org/cgo2016 Back to CGO'16 conference website] ]&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 97%&amp;quot; border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;20&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &amp;lt;div class=&amp;quot;span4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Important dates&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Paper decision: '''8 Nov 2015'''&lt;br /&gt;
*Artifact submission: '''20 Nov 2015 AoE'''&lt;br /&gt;
*Technical clarification: '''14-20 Dec 2015'''&lt;br /&gt;
*Decision announced: '''22 Dec 2015'''&lt;br /&gt;
*Public discussion: '''14 March 2016'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Packaging guidelines&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AE for CGO/PPoPP'15 used the following [http://www.artifact-eval.org/guidelines.html guidelines for artifacts]. We are preparing a revised version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;How to submit&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please read the [http://www.artifact-eval.org/guidelines.html guidelines] on what to submit, then upload your submission to [https://easychair.org/conferences/?conf=aecgoppopp2016 EasyChair] (joint CGO/PPoPP submission).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Review process&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The review process is described in detail [http://www.artifact-eval.org/review-process.html here]. We are preparing a revised version.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
| style=&amp;quot;width: 33%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Artifact Evaluation Committee (AEC)&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Chairs:'''&lt;br /&gt;
&lt;br /&gt;
*[http://cTuning.org/lab/people/gfursin Grigori Fursin], cTuning foundation, France&lt;br /&gt;
*[http://people.cs.pitt.edu/%7Echilders Bruce R. Childers], University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
'''Committee (joint with PPoPP'16):'''&lt;br /&gt;
&lt;br /&gt;
*'''TBA'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Prior Artifact Evaluation&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last year, CGO 2015 successfully ran an [[Reproducibility:AE:CGO2015|artifact evaluation process]] for the first time. CGO 2016 continues this experiment.&lt;br /&gt;
&lt;br /&gt;
Artifacts have already been evaluated at several other conferences and workshops, including the recent [http://cTuning.org/event/ae-ppopp2015 PPoPP 2015], [http://pldi14-aec.cs.brown.edu PLDI 2014], [http://2014.splashcon.org/track/splash2014-artifacts OOPSLA 2014] and [http://adapt-workshop.org/2014 ADAPT 2014]. Our eventual goal is to gradually and collaboratively develop a common evaluation methodology.&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width: 32%&amp;quot; valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:large&amp;quot;&amp;gt;'''&amp;lt;span style=&amp;quot;font-family: tahoma,geneva,sans-serif&amp;quot;&amp;gt;Discussions&amp;lt;/span&amp;gt;'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We consider AE a continuous learning process. If you have questions, comments or suggestions on how to improve the packaging and reviewing process, please get in touch!&lt;br /&gt;
&lt;br /&gt;
Our presentation about the Artifact Evaluation experience at CGO/PPoPP'15 is available online [http://www.slideshare.net/GrigoriFursin/presentation-fursin-aecgoppopp2015 here].&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=853</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=853"/>
				<updated>2015-10-26T11:05:16Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* ''We have released our new, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4, aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006, we have been working to solve reproducibility problems for experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up the optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to the benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open-source Collective Mind repositories of knowledge (see our pilot live repositories [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1], [http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material at workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;&lt;br /&gt;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences including CGO and PPoPP.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments, including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=852</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=852"/>
				<updated>2015-10-26T11:03:52Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Motivation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* ''We have released our new, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material;&lt;br /&gt;
*supporting and improving [http://cTuning.org/ae Artifact Evaluation] for major workshops and conferences including CGO and PPoPP.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''' as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a stamp &amp;quot;Validated by the community&amp;quot;. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally acceptable solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the [http://adapt-workshop.org ADAPT workshop] and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], experiments from these papers have been validated, archives shared in [http://c-mind.org/repo our public repository], and papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=851</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=851"/>
				<updated>2015-10-26T11:02:46Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Motivation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* ''We have released our new, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
== Our R&amp;amp;D ==&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''' as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a stamp &amp;quot;Validated by the community&amp;quot;. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally acceptable solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the [http://adapt-workshop.org ADAPT workshop] and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], experiments from these papers have been validated, archives shared in [http://c-mind.org/repo our public repository], and papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=850</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=850"/>
				<updated>2015-10-26T11:01:50Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* News and upcoming events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* ''We have released our new, open-source, BSD-licensed Collective Knowledge Framework (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D: [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly for the benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open-source Collective Mind repositories of knowledge (see our pilot live repositories [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''' as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with a traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering, we are finally starting to see a change in mentality across academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Rather than enforcing specific validation rules, we asked the authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=849</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=849"/>
				<updated>2015-10-26T11:00:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Our interdisciplinary events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Our brand-new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available on [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly for the benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open-source Collective Mind repositories of knowledge (see our pilot live repositories [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''' as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with a traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering, we are finally starting to see a change in mentality across academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Rather than enforcing specific validation rules, we asked the authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=848</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=848"/>
				<updated>2015-10-26T11:00:03Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Follow us */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available at [http://github.com/ctuning/ck GitHub] with an [http://github.com/ctuning/ck/wiki online documentation] and [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property IP and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. Authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=847</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=847"/>
				<updated>2015-10-26T10:59:44Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Acknowledgments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available at [http://github.com/ctuning/ck GitHub] with an [http://github.com/ctuning/ck/wiki online documentation] and [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property IP and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. Authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=846</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=846"/>
				<updated>2015-10-26T10:37:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* News and upcoming events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available at [http://github.com/ctuning/ck GitHub] with an [http://github.com/ctuning/ck/wiki online documentation] and [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - features [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with the variability and rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
As for workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change of mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=845</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=845"/>
				<updated>2015-10-26T10:00:07Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative and reproducible computer systems research with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''The brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4, aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available on [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - will feature [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation] ''. Paper submission deadline: '''9 October 2015'''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public, open-source Collective Mind repositories of knowledge (see our pilot live repositories [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing a [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving the procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing a specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with the variability and rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enabling automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
As for workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events, such as the [http://adapt-workshop.org ADAPT workshop], and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change of mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have their experimental results validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=844</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=844"/>
				<updated>2015-10-26T09:34:11Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[https://www.docker.io Docker tool] - pack, ship and run applications as a lightweight container&lt;br /&gt;
*[http://www.virtualbox.org VirtualBox] - create VM images&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] - automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool] - Comprehensive Archiver for Reproducible Execution&lt;br /&gt;
*[http://rr-project.org RR] - Mozilla project: records nondeterministic executions and debugs them deterministically&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] - automatically create portable Linux applications with all dependencies&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] - a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document&lt;br /&gt;
*[http://www.rstudio.com R-studio] - open source and enterprise-ready professional software for R&lt;br /&gt;
*[https://www.codalab.org CodaLab] - an experimental platform for collaboration and competition&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] - a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] - managing research in a cloud&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] - online workflows&lt;br /&gt;
*[https://www.aptlab.net AptLab] - online workflows&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] - designing and executing workflows&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] - open-source software for volunteer computing and grid computing&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] - integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq and BS-seq&lt;br /&gt;
*[http://www.seek4science.org SEEK for Science] - finding, sharing and exchanging Data, Models, Simulations and Processes in Science&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] - a process &amp;amp; infrastructure for distributed, continuous quality assurance&lt;br /&gt;
*[http://nepi.inria.fr NEPI] - simplifying network experimentation&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] - rethinking the Electronic Lab Notebook&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open-source cTuning technology] - towards collaborative and reproducible performance autotuning via a public repository of optimization knowledge, crowdsourcing, machine learning and collective intelligence (2006-present)&lt;br /&gt;
&lt;br /&gt;
=== '''Collective Knowledge Framework'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Technology] - preserve (with distributed IDs), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=843</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=843"/>
				<updated>2015-10-26T09:33:31Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[https://www.docker.io Docker tool] - pack, ship and run applications as a lightweight container&lt;br /&gt;
*[http://www.virtualbox.org VirtualBox] - create VM images&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] - automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool] - Comprehensive Archiver for Reproducible Execution&lt;br /&gt;
*[http://rr-project.org RR] - Mozilla project: records nondeterministic executions and debugs them deterministically&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] - automatically create portable Linux applications with all dependencies&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] - a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document&lt;br /&gt;
*[http://www.rstudio.com R-studio] - open source and enterprise-ready professional software for R&lt;br /&gt;
*[https://www.codalab.org CodaLab] - an experimental platform for collaboration and competition&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] - a large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] - managing research in a cloud&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] - online workflows&lt;br /&gt;
*[https://www.aptlab.net AptLab] - online workflows&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] - designing and executing workflows&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] - open-source software for volunteer computing and grid computing&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] - integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq&lt;br /&gt;
*[http://www.seek4science.org SEEK for Science] - finding, sharing and exchanging Data, Models, Simulations and Processes in Science&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] - a process and infrastructure for distributed, continuous quality assurance&lt;br /&gt;
*[http://nepi.inria.fr NEPI] - simplifying network experimentation&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] - rethinking the Electronic Lab Notebook&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] - towards collaborative and reproducible performance autotuning via a public repository of optimization knowledge, crowdsourcing, machine learning and collective intelligence (2006-cur.)&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Technology] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=842</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=842"/>
				<updated>2015-10-25T16:54:06Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://www.virtualbox.org VirtualBox] (create VM images)&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] (automatically packing experiments) ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com RStudio] (open source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] (large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data)&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in the cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq)&lt;br /&gt;
*[http://www.seek4science.org SEEK for Science] - finding, sharing and exchanging Data, Models, Simulations and Processes in Science&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (a process and infrastructure for distributed, continuous quality assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] (crowdsourcing auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Technology] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=841</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=841"/>
				<updated>2015-10-25T16:51:43Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative, systematic and reproducible research and experimentation in computer engineering with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available on [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - will feature [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation] ''. Paper submission deadline: '''9 October 2015'''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the [http://adapt-workshop.org ADAPT workshop] and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the helpful guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. The authors of two papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] (out of nine accepted) agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], experiments from these papers have been validated, archives shared in [http://c-mind.org/repo our public repository], and papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Jokes&amp;diff=840</id>
		<title>Reproducibility:Jokes</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Jokes&amp;diff=840"/>
				<updated>2015-10-11T13:03:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: Created page with &amp;quot;[ '''''Back to main page''''' ]  === '''Assorted jokes''&amp;lt;br/&amp;gt; === *[http://matt.might.net/articles/crapl CRAPL license] *[https://play.google.com/store/app...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted jokes'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
*[http://matt.might.net/articles/crapl CRAPL license]&lt;br /&gt;
*[https://play.google.com/store/apps/details?id=ctuning.com.researchpapertitlegenerator&amp;amp;hl=en Paper title generator (Android app)]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=839</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=839"/>
				<updated>2015-10-11T13:02:12Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative, systematic and reproducible research and experimentation in computer engineering with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''Brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available on [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] - will feature [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation] ''. Paper submission deadline: '''9 October 2015'''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006 we have been trying to solve problems with reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with the whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and [http://cTuning.org not-for-profit cTuning foundation] we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org], [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the [http://adapt-workshop.org ADAPT workshop] and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the helpful guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Jokes|Collection of jokes]]&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=838</id>
		<title>Reproducibility</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility&amp;diff=838"/>
				<updated>2015-10-11T13:01:45Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:x-large&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;color: rgb(178, 34, 34)&amp;quot;&amp;gt;Enabling collaborative, systematic and reproducible research and experimentation in computer engineering with an open publication model&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Ae-stamp-cgo.png|Ae-stamp-cgo.png|link=http://cTuning.org/event/ae-cgo2016]] [[File:Ae-stamp-ppopp2015.png|Ae-stamp-ppopp2015.png|link=http://cTuning.org/event/ae-ppopp2016]] [[File:Logo-validated-by-the-community.png|Logo-validated-by-the-community.png|link=http://cTuning.org/reproducibility]] &amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:CTuning foundation logo1.png|cTuning_foundation_logo1.png|link=http://cTuning.org]]&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; [[File:dividiti.png|dividiti.png|link=http://dividiti.com]]&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;This wiki is maintained by the [http://cTuning.org non-profit cTuning foundation].&amp;lt;/p&amp;gt;&lt;br /&gt;
== News and upcoming events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*''A brand new, open-source, BSD-licensed Collective Knowledge SDK (cTuning 4 aka CK) for collaborative and reproducible R&amp;amp;D is now publicly available at [http://github.com/ctuning/ck GitHub] with [http://github.com/ctuning/ck/wiki online documentation] and a [http://cknowledge.org/repo live demo repository].''&lt;br /&gt;
*''[http://adapt-workshop.org ADAPT'16 @ HiPEAC'16] will feature [http://dl.acm.org/citation.cfm?id=2618142 our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation]''. Paper submission deadline: '''9 October 2015'''&lt;br /&gt;
*''[http://cTuning.org/event/ae-ppopp2016 PPoPP'16 artifact evaluation]''&lt;br /&gt;
*''[http://cTuning.org/event/ae-cgo2016 CGO'16 artifact evaluation]''&lt;br /&gt;
*''[http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Dagstuhl perspective workshop on artifact evaluation for conferences and journals]''&lt;br /&gt;
&lt;br /&gt;
== Motivation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Since 2006, we have been trying to solve problems with the reproducibility of experimental results in computer engineering ''as a side effect'' of our [http://cTuning.org/project-milepost MILEPOST], [http://cTuning.org cTuning.org], [http://c-mind.org Collective Mind] and [http://github.com/ctuning/ck Collective Knowledge] projects (speeding up optimization, benchmarking and co-design of computer systems using auto-tuning, big data, predictive analytics and crowdsourcing). &amp;lt;span style=&amp;quot;font-size: small&amp;quot;&amp;gt;We focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly related to benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and reliable software and hardware:&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*developing public and open source Collective Mind repositories of knowledge (see our pilot live repository [[http://cknowledge.org/repo CK], [http://c-mind.org/repo cMind]] and our vision papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*developing [http://github.com/ctuning/ck collaborative research and experimentation infrastructure] that can share artifacts as reusable components together with whole experimental setups (see our papers [[http://arxiv.org/abs/1506.06256 1],[http://hal.inria.fr/hal-01054763 2]]);&lt;br /&gt;
*evangelizing and enabling new open publication model for online workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*setting up and improving procedure for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal [[http://arxiv.org/abs/1406.4020 arXiv] , [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]);&lt;br /&gt;
*improving sharing, description of dependencies, and statistical reproducibility of experimental results and related material.&lt;br /&gt;
&lt;br /&gt;
See our manifesto and history [http://cTuning.org/history here].&lt;br /&gt;
&lt;br /&gt;
=== Community-driven research and developments&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
Together with the community and the [http://cTuning.org not-for-profit cTuning foundation], we are working on the following topics:&lt;br /&gt;
&lt;br /&gt;
*developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones&lt;br /&gt;
*describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact&lt;br /&gt;
*developing specification to preserve experiments including all software and hardware dependencies&lt;br /&gt;
*dealing with variability and the rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques&lt;br /&gt;
*developing new predictive analytics techniques to explore large design and optimization spaces&lt;br /&gt;
*validating and verifying experimental results by the community&lt;br /&gt;
*developing common research interfaces for existing or new tools&lt;br /&gt;
*developing common experimental frameworks and repositories (enable automation, re-execution and sharing of experiments)&lt;br /&gt;
*sharing rare hardware and computational resources for experimental validation&lt;br /&gt;
*implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure&lt;br /&gt;
*implementing open access to publications and data (particularly discussing intellectual property (IP) and legal issues)&lt;br /&gt;
*speeding up analysis of &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*developing new (interactive) visualization techniques for &amp;quot;big&amp;quot; experimental data&lt;br /&gt;
*enabling interactive articles&lt;br /&gt;
&lt;br /&gt;
== Our interdisciplinary events&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
==== Featuring new open publication model and validation of experimental results&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:AE:PPoPP2015|PPoPP'15 artifact evaluation]]&lt;br /&gt;
*[[Reproducibility:AE:CGO2015|CGO'15 artifact evaluation]]&lt;br /&gt;
*[http://adapt-workshop.org ADAPT'15] @ HiPEAC'15 - workshop on adaptive self-tuning computer systems&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
&lt;br /&gt;
==== Discussing technical aspects to enable reproducibility and open publication model&amp;lt;br/&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;br /&gt;
&lt;br /&gt;
== Acknowledgments&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
We would like to thank our colleagues from the [http://cTuning.org/lab/people cTuning foundation], [https://www.linkedin.com/company/dividiti dividiti], [http://www.artifact-eval.org artifact-eval.org] and the [http://www.occamportal.org OCCAM project] for their help, feedback, participation and support.&lt;br /&gt;
&lt;br /&gt;
== Paper and artifact evaluation committee&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than pre-selecting a dedicated committee for conferences, we select reviewers for research material (artifacts) and publications from a [http://cTuning.org/lab/people pool of our supporters] based on '''''submitted and publicly available''''' publications, their keywords and '''''public discussions''''', as described in our proposal [[http://arxiv.org/abs/1406.4020 arXiv]], [[http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Validated papers receive a &amp;quot;Validated by the community&amp;quot; stamp. Artifacts can be shared along with the publication in the ACM Digital Library, HAL, the Collective Mind Repository or any other public archive.&lt;br /&gt;
&lt;br /&gt;
For workshops, conferences and journals with the traditional publication model (CGO, PPoPP, PLDI), we select an artifact evaluation committee (AEC) as described [[Reproducibility:AE|here]].&lt;br /&gt;
&lt;br /&gt;
== Packing and sharing research and experimental material&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
Rather than enforcing a specific procedure for packing, sharing and validating experimental results, we allow authors of accepted papers to include an archive with all related research material (using any [[Reproducibility:Links|publicly available tool]]) and a ''readme.txt'' file describing how to validate their experiments. The main reason is the lack of a universally accepted solution for packing and sharing experimental setups. For example, it is not always possible to use virtual machines and similar approaches for our research on performance/energy tuning, or when new hardware is being co-designed, as we discuss in our proposal [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]. Therefore, our current intention is to gradually and collaboratively find the best packing procedure using practical experience from our events such as the [http://adapt-workshop.org ADAPT workshop] and from discussions during the [http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14 workshop]. See also the helpful guidelines for packing code and data along with publications [http://www.artifact-eval.org/guidelines.html here].&lt;br /&gt;
&lt;br /&gt;
== Validation&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
After many years of evangelizing collaborative and reproducible research in computer engineering based on our practical experience, we are finally starting to see a change in mentality in academia, industry and funding agencies. The authors of two of the nine accepted papers at our [http://adapt-workshop.org/2014 ADAPT'14 workshop] agreed to have the experimental results of their papers validated by volunteers. Note that rather than enforcing specific validation rules, we asked authors to pack all their research artifacts as they wished (for example, using a shared virtual machine or a standard archive) and to describe their own validation procedure. Thanks to [http://cTuning.org/lab/people our volunteers], the experiments from these papers have been validated, the archives shared in [http://c-mind.org/repo our public repository], and the papers marked with a &amp;quot;validated by the community&amp;quot; stamp:&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center&amp;quot;&amp;gt;[[File:Logo-validated-by-the-community.png|center|Logo-validated-by-the-community.png]]&amp;lt;/p&amp;gt;&lt;br /&gt;
== Archive&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://ctuning.org/wiki/index.php?title=Discussions:New_Publication_Model Outdated cTuning wiki page related to reproducible research and open publication model]&lt;br /&gt;
*Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): [ [http://ctuning.org/wiki/index.php/CDatabase database], [http://ctuning.org/wiki/index.php?title=Special:CPredict web-service for online prediction of optimizations] ]&lt;br /&gt;
&lt;br /&gt;
== Resources&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[[Reproducibility:Jokes|Jokes]]&lt;br /&gt;
*[[Reproducibility:Initiatives|Collection of related initiatives]]&lt;br /&gt;
*[[Reproducibility:Links|Collection of related tools]]&lt;br /&gt;
*[[Reproducibility:Datasets|Collection of related benchmarks and data sets]]&lt;br /&gt;
*[[Reproducibility:Repositories|Collection of public repositories]]&lt;br /&gt;
*[[Reproducibility:Lectures|Collection of related lectures]]&lt;br /&gt;
*[[Reproducibility:Articles|Collection of related articles]]&lt;br /&gt;
*[[Reproducibility:Blogs|Collection of related blogs]]&lt;br /&gt;
*[[Reproducibility:Events|Collection of related events]]&lt;br /&gt;
&lt;br /&gt;
== Discussions&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.linkedin.com/groups/Reproducible-research-experimentation-in-computer-7433414 LinkedIn group on reproducible research]&lt;br /&gt;
*[http://groups.google.com/group/collective-knowledge Main mailing list] (general collaborative and reproducible R&amp;amp;D in computer engineering)&lt;br /&gt;
*[http://groups.google.com/group/ctuning-discussions cTuning foundation mailing list] (collaborative and reproducible hardware and software benchmarking, auto-tuning and co-design)&lt;br /&gt;
&lt;br /&gt;
== Follow us&amp;lt;br/&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
*[https://twitter.com/c_tuning cTuning foundation twitter]&lt;br /&gt;
*[https://www.facebook.com/pages/CrowdTuning-Foundation/668405119902805 cTuning foundation facebook page] (recent)&lt;br /&gt;
*[http://dividiti.blogspot.com dividiti blog]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind repository] (outdated)&lt;br /&gt;
*[http://cknowledge.org/repo Collective Knowledge repository] (new)&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=837</id>
		<title>Reproducibility:Articles</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=837"/>
				<updated>2015-10-01T11:23:33Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Assorted articles and presentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted articles and presentations'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.pl-enthusiast.net/2015/09/01/pl-conference-papers-to-get-a-journal PL conference papers to get a journal] - discussing a controversial ACM proposal to formally recognize conference publications as equal in quality to journal publications&lt;br /&gt;
*Our experience report on problems encountered in computer systems research, and a new proposal for ''community-driven reviewing and validation of publications'' [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]&lt;br /&gt;
*[https://hal.inria.fr/inria-00436029 Collective Tuning Initiative: automating and accelerating development and optimization of computing systems]&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/33719/title/Science-s-Reproducibility-Problem Science's Reproducibility Problem], Bob Grant, The Scientist&lt;br /&gt;
*[http://www.cs.huji.ac.il/~feit/papers/Repeat15SIGOPS.pdf From Repeatability to Reproducibility and Corroboration], Dror G. Feitelson, SIGOPS Operating Syst. Rev. 49(1), pp. 3-11, Jan 2015&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/98560 Reviewing peer review], Jeannette M. Wing, ACM Blog&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/100284 How Should Peer Review Evolve?], Ed H. Chi, ACM Blog&lt;br /&gt;
*[http://crd.lbl.gov/~dhbailey/dhbpapers/twelve-ways.pdf Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers],  David H. Bailey, Supercomputing Review, June 11, 1991&lt;br /&gt;
*[Ten Ways to Fool the Masses When Giving Performance Results on GPUs], Scott Pakin, HPCWire, December 13, 2011 &lt;br /&gt;
*[http://www.cam.ac.uk/research/news/new-gold-standard-established-for-open-and-reproducible-research Cambridge University news about open and reproducible research]&lt;br /&gt;
*[http://reproduciblescience.blogspot.fr/2015/06/interesting-replicable-badge-for.html Interesting Replicable &amp;quot;Badge&amp;quot; for journal articles], Daniel S. Katz&lt;br /&gt;
*[http://thomasleeper.com/2015/05/open-science-language What's in a Name? The Concepts and Language of Replication and Reproducibility]&lt;br /&gt;
*[http://web.eece.maine.edu/~vweaver/papers/wddd08/wddd08_workshop.pdf Are Cycle Accurate Simulations a Waste of Time?], Vincent M. Weaver and Sally A. McKee&lt;br /&gt;
*[http://ehp.niehs.nih.gov/122-a188 Research Wranglers: Initiatives to Improve Reproducibility of Study Findings] (Environmental Health Perspectives)&lt;br /&gt;
*[https://web.stanford.edu/~vcs/talks/VictoriaStoddenICML2010.pdf Reproducible Research in Computational Science: Problems and Solutions For Data and Code Sharing], Victoria Stodden, ICML workshop 2010&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/32426/title/Predatory-Publishing Predatory publishing]&lt;br /&gt;
*[http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review Three myths about scientific peer review]&lt;br /&gt;
*[http://spyros.zoupanos.net/papers/p39.open-repeatability.pdf The repeatability experiment of SIGMOD 2008], SIGMOD Record, vol. 37, no. 1 (March 2008), 39-45. &lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1536616.1536619 &amp;quot;Begin with an Author's response to Reviews&amp;quot;] - proposal to submit past reviews along with articles&lt;br /&gt;
*[https://www.dropbox.com/s/67n49i9k3cb28mo/GOBLE-CW2014.ppt Presentation by Professor Carole Goble]&lt;br /&gt;
*Dennis McCafferty, [http://dl.acm.org/citation.cfm?id=1831415 &amp;quot;Should Code be Released?&amp;quot;]&lt;br /&gt;
*[http://www.elsevier.com/connect/can-data-be-peer-reviewed Elsevier:Can data be peer-reviewed?]&lt;br /&gt;
*[https://hal.inria.fr/hal-01054763 Collective mind: Towards practical and collaborative auto-tuning]&lt;br /&gt;
*[http://arxiv.org/abs/1506.06256 Collective Mind, Part II: Towards Performance- and Cost-Aware Software Engineering as a Natural Science]&lt;br /&gt;
*[http://www.executablepapers.com Elsevier:The Executable Paper Grand Challenge]&lt;br /&gt;
*[http://www.sciencemag.org/content/334/6060/1225.full Special section on Data replication &amp;amp; reproducibility], Science magazine, 2 December 2011&lt;br /&gt;
*Chris Drummond, [http://cogprints.org/7691/7/ICMLws09.pdf &amp;quot;Replicability is not Reproducibility: Nor is it Good Science&amp;quot;]&lt;br /&gt;
*[http://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998 Science is in a reproducibility crisis - how do we resolve it?]&lt;br /&gt;
*[http://www.w3.org/2001/sw/hcls/notes/hcls-dataset W3C Dataset Descriptions: HCLS Community Profile]&lt;br /&gt;
*My blog article on [http://www.software.ac.uk/blog/2014-07-22-automatic-performance-tuning-and-reproducibility-side-effect &amp;quot;Automatic performance tuning and reproducibility as a side effect&amp;quot;] for the Software Sustainability Institute&lt;br /&gt;
*[http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble Trouble at the Lab], The Economist, 2013&lt;br /&gt;
*[https://www.force11.org/group/joint-declaration-data-citation-principles-final Joint Declaration of Data Citation Principles]&lt;br /&gt;
*[http://www.scientificamerican.com/article/puzzling-measurement-of-big-g-gravitational-constant-ignites-debate-slide-show/ Puzzling Measurement of &amp;quot;Big G&amp;quot; Gravitational Constant Ignites Debate]&lt;br /&gt;
*[http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf International vocabulary of metrology – Basic and general concepts and associated terms (VIM)]&lt;br /&gt;
*[http://retractionwatch.com/2014/09/05/white-house-takes-notice-of-reproducibility-in-science-and-wants-your-opinion White House takes notice of reproducibility in science, and wants your opinion]&lt;br /&gt;
*Problems during performance benchmarking:&lt;br /&gt;
**[https://homes.cs.washington.edu/~bornholt/post/performance-evaluation.html]&amp;lt;br/&amp;gt;''We also experienced many similar issues during our work on auto-tuning and machine learning:''&lt;br /&gt;
**[http://hal.inria.fr/hal-01054763]&lt;br /&gt;
**[http://arxiv.org/abs/1406.4020]&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=2723872 ACM SIGOPS Operating Systems Review - Special Issue on Repeatability and Sharing of Experimental Artifacts]&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=5768098 Vinton G. Cerf. &amp;quot;Bit Rot: Long-Term Preservation of Digital Information&amp;quot; [Point of View]]&lt;br /&gt;
*[http://www.nature.com/news/the-top-100-papers-1.16224 About citations: less related though interesting]&lt;br /&gt;
&lt;br /&gt;
=== Journals with artifact sharing and evaluation ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.ipol.im IPOL Journal: Image Processing On Line&amp;lt;br/&amp;gt;]&lt;br /&gt;
*[http://netlib.org/toms ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE (TOMS)]&lt;br /&gt;
*[http://db-reproducibility.seas.harvard.edu ACM SIGMOD 2015]&lt;br /&gt;
&lt;br /&gt;
=== National requirements ===&lt;br /&gt;
* [https://www.epsrc.ac.uk/files/aboutus/standards/clarificationsofexpectationsresearchdatamanagement UK EPSRC expectations on research data management]&lt;br /&gt;
* University of Manchester, UK:&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/policy&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/data-management-planning&lt;br /&gt;
* Cornell University, USA: [http://data.research.cornell.edu/content/best-practices Best Practices]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=836</id>
		<title>Reproducibility:Articles</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=836"/>
				<updated>2015-10-01T11:01:07Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: /* Assorted articles and presentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted articles and presentations'''&amp;lt;br/&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.pl-enthusiast.net/2015/09/01/pl-conference-papers-to-get-a-journal PL conference papers to get a journal] - discussing a controversial ACM proposal to formally recognize conference publications as equal in quality to journal publications&lt;br /&gt;
*Our experience report on problems encountered in computer systems research, and a new proposal for ''community-driven reviewing and validation of publications'' [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]&lt;br /&gt;
*[https://hal.inria.fr/inria-00436029 Collective Tuning Initiative: automating and accelerating development and optimization of computing systems]&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/33719/title/Science-s-Reproducibility-Problem Science's Reproducibility Problem], Bob Grant, The Scientist&lt;br /&gt;
*[http://www.cs.huji.ac.il/~feit/papers/Repeat15SIGOPS.pdf From Repeatability to Reproducibility and Corroboration], Dror G. Feitelson, SIGOPS Operating Syst. Rev. 49(1), pp. 3-11, Jan 2015&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/98560 Reviewing peer review], Jeannette M. Wing, ACM Blog&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/100284 How Should Peer Review Evolve?], Ed H. Chi, ACM Blog&lt;br /&gt;
*[http://crd.lbl.gov/~dhbailey/dhbpapers/twelve-ways.pdf Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers],  David H. Bailey, Supercomputing Review, June 11, 1991&lt;br /&gt;
*[Ten Ways to Fool the Masses When Giving Performance Results on GPUs], Scott Pakin, HPCWire, December 13, 2011 &lt;br /&gt;
*[http://www.cam.ac.uk/research/news/new-gold-standard-established-for-open-and-reproducible-research Cambridge University news about open and reproducible research]&lt;br /&gt;
*[http://reproduciblescience.blogspot.fr/2015/06/interesting-replicable-badge-for.html Interesting Replicable &amp;quot;Badge&amp;quot; for journal articles], Daniel S. Katz&lt;br /&gt;
*[http://thomasleeper.com/2015/05/open-science-language What's in a Name? The Concepts and Language of Replication and Reproducibility]&lt;br /&gt;
*[http://ehp.niehs.nih.gov/122-a188 Research Wranglers: Initiatives to Improve Reproducibility of Study Findings] (Environmental Health Perspectives)&lt;br /&gt;
*[https://web.stanford.edu/~vcs/talks/VictoriaStoddenICML2010.pdf Reproducible Research in Computational Science: Problems and Solutions For Data and Code Sharing], Victoria Stodden, ICML workshop 2010&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/32426/title/Predatory-Publishing Predatory publishing]&lt;br /&gt;
*[http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review Three myths about scientific peer review]&lt;br /&gt;
*[http://spyros.zoupanos.net/papers/p39.open-repeatability.pdf The repeatability experiment of SIGMOD 2008], SIGMOD Record, vol. 37, no. 1 (March 2008), 39-45. &lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1536616.1536619 &amp;quot;Begin with an Author's response to Reviews&amp;quot;] - proposal to submit past reviews along with articles&lt;br /&gt;
*[https://www.dropbox.com/s/67n49i9k3cb28mo/GOBLE-CW2014.ppt Presentation by Professor Carole Goble]&lt;br /&gt;
*Dennis McCafferty, [http://dl.acm.org/citation.cfm?id=1831415 &amp;quot;Should Code be Released?&amp;quot;]&lt;br /&gt;
*[http://www.elsevier.com/connect/can-data-be-peer-reviewed Elsevier:Can data be peer-reviewed?]&lt;br /&gt;
*[http://www.executablepapers.com Elsevier:The Executable Paper Grand Challenge]&lt;br /&gt;
*[http://www.sciencemag.org/content/334/6060/1225.full Special section on Data replication &amp;amp; reproducibility], Science magazine, 2 December 2011&lt;br /&gt;
*Chris Drummond, [http://cogprints.org/7691/7/ICMLws09.pdf &amp;quot;Replicability is not Reproducibility: Nor is it Good Science&amp;quot;]&lt;br /&gt;
*[http://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998 Science is in a reproducibility crisis - how do we resolve it?]&lt;br /&gt;
*[http://www.w3.org/2001/sw/hcls/notes/hcls-dataset W3C Dataset Descriptions: HCLS Community Profile]&lt;br /&gt;
*My blog article on [http://www.software.ac.uk/blog/2014-07-22-automatic-performance-tuning-and-reproducibility-side-effect &amp;quot;Automatic performance tuning and reproducibility as a side effect&amp;quot;] for the Software Sustainability Institute&lt;br /&gt;
*[http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble Trouble at the Lab], The Economist, 2013&lt;br /&gt;
*[https://www.force11.org/group/joint-declaration-data-citation-principles-final Joint Declaration of Data Citation Principles]&lt;br /&gt;
*[http://www.scientificamerican.com/article/puzzling-measurement-of-big-g-gravitational-constant-ignites-debate-slide-show/ Puzzling Measurement of &amp;quot;Big G&amp;quot; Gravitational Constant Ignites Debate]&lt;br /&gt;
*[http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf International vocabulary of metrology – Basic and general concepts and associated terms (VIM)]&lt;br /&gt;
*[http://retractionwatch.com/2014/09/05/white-house-takes-notice-of-reproducibility-in-science-and-wants-your-opinion White House takes notice of reproducibility in science, and wants your opinion]&lt;br /&gt;
*Problems during performance benchmarking:&lt;br /&gt;
**[https://homes.cs.washington.edu/~bornholt/post/performance-evaluation.html https://homes.cs.washington.edu/~bornholt/post/performance-evaluation.html]&lt;br /&gt;
**''We also experienced many similar issues during our work on auto-tuning and machine learning:''&lt;br /&gt;
**[http://hal.inria.fr/hal-01054763 http://hal.inria.fr/hal-01054763]&lt;br /&gt;
**[http://arxiv.org/abs/1406.4020 http://arxiv.org/abs/1406.4020]&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=2723872 ACM SIGOPS Operating Systems Review - Special Issue on Repeatability and Sharing of Experimental Artifacts]&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=5768098 Vinton G. Cerf. &amp;quot;Bit Rot: Long-Term Preservation of Digital Information&amp;quot; [Point of View]]&lt;br /&gt;
*[http://www.nature.com/news/the-top-100-papers-1.16224 About citations: less related, though interesting]&lt;br /&gt;
&lt;br /&gt;
=== Journals with artifact sharing and evaluation ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.ipol.im IPOL Journal: Image Processing On Line]&lt;br /&gt;
*[http://netlib.org/toms ACM Transactions on Mathematical Software (TOMS)]&lt;br /&gt;
*[http://db-reproducibility.seas.harvard.edu ACM SIGMOD 2015]&lt;br /&gt;
&lt;br /&gt;
=== National requirements ===&lt;br /&gt;
* [https://www.epsrc.ac.uk/files/aboutus/standards/clarificationsofexpectationsresearchdatamanagement UK EPSRC expectations on research data management]&lt;br /&gt;
* University of Manchester, UK:&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/policy&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/data-management-planning&lt;br /&gt;
* Cornell University, USA: [http://data.research.cornell.edu/content/best-practices Best Practices]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Events&amp;diff=835</id>
		<title>Reproducibility:Events</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Events&amp;diff=835"/>
				<updated>2015-09-25T12:08:29Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted events''' ===&lt;br /&gt;
&lt;br /&gt;
*2016.May: [http://mmsys2016.itec.aau.at/dataset-track MMSys 2016 Dataset Track]&lt;br /&gt;
*2016.March: Dagstuhl seminar on &amp;quot;Rethinking Experimental Methods in Computing&amp;quot;: [http://www.dagstuhl.de/programm/kalender/semhp/?semnr=16111 Home Page]&lt;br /&gt;
*2016.Jan: [http://adapt-workshop.org/index2016.html ADAPT'16] - workshop on adaptive, self-tuning computer systems that will feature our open publication model with community-driven reviewing, reddit-based discussions and artifact evaluation&lt;br /&gt;
*2015.Nov.20: Deadline for artifact submission for accepted papers for [http://cTuning.org/event/ae-ppopp2016 PPoPP'16] and [http://cTuning.org/event/ae-cgo2016 CGO'16]&lt;br /&gt;
*2015.Nov.1-4: Dagstuhl perspective workshop on artifact evaluation: [http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 Home page], [http://www.dagstuhl.de/program/calendar/partlist/?semnr=15452 participants]&lt;br /&gt;
*2015.Oct.19: [http://www.software.ac.uk/software-credit Software Credit Workshop] organized by the Software Sustainability Institute in London. It will explore what contribution software can and should make to academic reputational credit.&lt;br /&gt;
*2015.Sep.23-25: [https://rd-alliance.org/plenary-meetings/rda-sixth-plenary-meeting.html Research Data Alliance 6th plenary meeting in Paris]&lt;br /&gt;
*2015.Jul.27: [http://www.ntms-conf.org/ntms-2015/index.php/workshops/reproducibility-of-computation-based-research-workshop Workshop on Reproducibility of Computation Based Research:  Languages, Standards, Methodologies and Platforms]&lt;br /&gt;
*2015.Feb.9: our joint CGO/PPoPP'15 session on artifact evaluation experience [http://www.slideshare.net/GrigoriFursin/presentation-fursin-aecgoppopp2015 PDF]&lt;br /&gt;
*2015.Jan.: [http://www.sigops.org/osr.html ACM SIGOPS Operating Systems Review - Special Issue on Repeatability and Sharing of Experimental Artifacts]&lt;br /&gt;
*2015.Jan.: [http://adapt-workshop.org ADAPT'15] - workshop on adaptive self-tuning computer systems. &lt;br /&gt;
*2014.Oct.27-30: [http://bigscientificdata.org/cask14 &amp;quot;1st International Workshop on Collaborative methodologies to Accelerate Scientific Knowledge discovery in big data (CASK) 2014&amp;quot;]&lt;br /&gt;
*[http://www.occamportal.org/images/reproduce/TETC-SI-REPRODUCE.pdf Special journal issue] on Reproducible Research Methodologies at IEEE TETC&lt;br /&gt;
*[http://wssspe.researchcomputing.org.uk/wssspe2 2nd Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)] to be held in conjunction with SC14, Sunday, 16 November 2014, New Orleans, LA, USA&lt;br /&gt;
*[https://www.xsede.org/web/reproducibility Reproducibility @ XSEDE: An XSEDE14 workshop (July, 2014)]&lt;br /&gt;
*[http://adapt-workshop.org/2014 ADAPT'14] - workshop on adaptive self-tuning computer systems [ [http://adapt-workshop.org/2014/program.htm program and publications] ]&lt;br /&gt;
*[http://c-mind.org/events/trust2014 ACM SIGPLAN TRUST'14] @ PLDI'14&lt;br /&gt;
*[http://www.software.ac.uk/cw14 Collaborative Workshop 2014 (CW14) - software in your reproducible research]&lt;br /&gt;
*[http://www.occamportal.org/reproduce REPRODUCE'14] @ HPCA'14&lt;br /&gt;
*[http://www.adapt-workshop.org/2014/program.htm ADAPT'14 panel] @ HiPEAC'14&lt;br /&gt;
*[https://icerm.brown.edu/tw14-5-cemc Challenges in 21st Century Experimental Mathematical Computation] July 21-25, 2014&lt;br /&gt;
*[http://www.eecg.toronto.edu/%7Eenright/wddd WDDD (Workshop on Duplicating, Deconstructing, and Debunking)]&lt;br /&gt;
*[http://ctuning.org/making-computer-engineering-a-science-2013 HiPEAC'13 CSW thematic session] @ ACM ECRC &amp;quot;Making computer engineering a science&amp;quot;&lt;br /&gt;
*[http://ctuning.org/hipeac3-thematic-session-2012-04 HiPEAC'12 CSW thematic session]&lt;br /&gt;
*[http://www.executablepapers.com Elsevier:The Executable Paper Grand Challenge]&lt;br /&gt;
*[http://exadapt.org/2012/program.html ASPLOS/EXADAPT'12 panel] @ ASPLOS'12&lt;br /&gt;
*[http://www.stodden.net/AMP2011 AMP workshop 2011 (Reproducible Research: Tools and Strategies for Scientific Computing)]&lt;br /&gt;
*[http://ctuning.org/lab/education cTuning lectures (2008-2010)]&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 GCC Summit'09 discussion]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=834</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=834"/>
				<updated>2015-09-24T15:40:14Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Technology] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com RStudio] (open-source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] - large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in a cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq)&lt;br /&gt;
*[http://www.seek4science.org SEEK for Science] - finding, sharing and exchanging Data, Models, Simulations and Processes in Science&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (A process &amp;amp; Infrastructure for Distributed, continuous Quality assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (Simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (Rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] (crowdsourced auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=833</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=833"/>
				<updated>2015-09-24T15:37:01Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge Technology] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com RStudio] (open-source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] - large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://www.mygrid.org.uk MyGrid] - develops a suite of tools designed to &amp;quot;help e-Scientists get on with science and get on with scientists&amp;quot;&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in a cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq)&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (A process &amp;amp; Infrastructure for Distributed, continuous Quality assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (Simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (Rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] (crowdsourced auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://www.occamportal.org OCCAM] - open curation for computer architecture modeling&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=832</id>
		<title>Reproducibility:Repositories</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=832"/>
				<updated>2015-09-24T15:36:34Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted misc repositories and initiatives''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - sharing and reusing artifacts with distributed ID via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[http://gigadb.org GigaDB]&lt;br /&gt;
*[http://galaxyproject.org Data Intensive Biology]&lt;br /&gt;
*[http://www.galaxyzoo.org/?lang=en GalaxyZoo] (classification of galaxies)&lt;br /&gt;
*[https://olivearchive.org Olive Archive] (preserving executable content)&lt;br /&gt;
*[http://openscience.us/repo Tera-PROMISE]&lt;br /&gt;
*[https://www.openaire.eu OpenAire (CERN)]&lt;br /&gt;
*[https://zenodo.org Zenodo]&lt;br /&gt;
*[http://fair-dom.org FAIRDOM - establish a data and model management service facility for Systems Biology]&lt;br /&gt;
*[http://researchcompendia.org ResearchCompendia]&lt;br /&gt;
*[http://www.researchobject.org ResearchObject]&lt;br /&gt;
*[http://www.researchobject.org/initiative Other initiatives related to ResearchObject]&lt;br /&gt;
*[https://archive.org Internet Archive]&lt;br /&gt;
*[http://www.nationalarchives.gov.uk The National Archives]&lt;br /&gt;
*[http://www.dcc.ac.uk Digital Curation Centre]&lt;br /&gt;
*[http://riojournal.com Research Ideas and Outcomes (RIO) Journal] - controversial&lt;br /&gt;
*[http://www.wikidata.org/wiki/Wikidata:Introduction WikiData]&lt;br /&gt;
*[http://www.dpn.org The Digital Preservation Network]&lt;br /&gt;
*[http://www.ddialliance.org Data Documentation Initiative]&lt;br /&gt;
*[http://recomputation.org recomputation.org]&lt;br /&gt;
*[https://open-data.europa.eu Open datasets]&lt;br /&gt;
*[http://rmap-project.info/rmap RMap Project] - preserve the complex many-to-many relationships among scholarly publications and their underlying data&lt;br /&gt;
*[http://datahub.io DataHub]&lt;br /&gt;
*[http://www.execandshare.org/CompanionSite www.execandshare.org] - creates a companion website associated with a submitted paper to implement the methodology presented in the paper&lt;br /&gt;
*[https://www.datacite.org/contact Datacite] (citing data as DOI, Germany, has connections with CERN)&lt;br /&gt;
*[http://www.crossref.org CrossRef]&lt;br /&gt;
*[http://www.knowledge-exchange.info Knowledge Exchange]&lt;br /&gt;
*[http://www.doi.org International DOI Foundation]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind pilot live repository] (deprecated in 2014 for [http://cknowledge.org/repo Collective Knowledge])&lt;br /&gt;
*[http://ctuning.org/repo cTuning.org (performance tuning database, outdated)]&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=831</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=831"/>
				<updated>2015-09-24T15:35:09Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] automatically packing experiments ([https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com RStudio] (open-source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[https://www.grid5000.fr Grid5000] - large-scale and versatile testbed for experiment-driven research in all areas of computer science, with a focus on parallel and distributed computing including Cloud, HPC and Big Data&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in a cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data output by biological applications such as RNAseq, sRNAseq, ChipSeq, BS-seq)&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (A process &amp;amp; Infrastructure for Distributed, continuous Quality assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (Simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (Rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open source cTuning technology] (crowdsourced auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=830</id>
		<title>Reproducibility:Repositories</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=830"/>
				<updated>2015-09-24T15:32:36Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted misc repositories and initiatives''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - sharing and reusing artifacts with distributed ID via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[http://gigadb.org GigaDB]&lt;br /&gt;
*[http://galaxyproject.org Data Intensive Biology]&lt;br /&gt;
*[http://www.galaxyzoo.org/?lang=en GalaxyZoo] (classification of galaxies)&lt;br /&gt;
*[https://olivearchive.org Olive Archive] (preserving executable content)&lt;br /&gt;
*[http://openscience.us/repo Tera-PROMISE]&lt;br /&gt;
*[https://www.openaire.eu OpenAire (CERN)]&lt;br /&gt;
*[https://zenodo.org Zenodo]&lt;br /&gt;
*[http://researchcompendia.org ResearchCompendia]&lt;br /&gt;
*[http://www.researchobject.org ResearchObject]&lt;br /&gt;
*[http://www.researchobject.org/initiative Other initiatives related to ResearchObject]&lt;br /&gt;
*[https://archive.org Internet Archive]&lt;br /&gt;
*[http://www.nationalarchives.gov.uk The National Archives]&lt;br /&gt;
*[http://www.dcc.ac.uk Digital Curation Centre]&lt;br /&gt;
*[http://riojournal.com Research Ideas and Outcomes (RIO) Journal] - controversial&lt;br /&gt;
*[http://www.wikidata.org/wiki/Wikidata:Introduction WikiData]&lt;br /&gt;
*[http://www.dpn.org The Digital Preservation Network]&lt;br /&gt;
*[http://www.ddialliance.org Data Documentation Initiative]&lt;br /&gt;
*[http://recomputation.org recomputation.org]&lt;br /&gt;
*[https://open-data.europa.eu Open datasets]&lt;br /&gt;
*[http://rmap-project.info/rmap RMap Project] - preserve the complex many-to-many relationships among scholarly publications and their underlying data&lt;br /&gt;
*[http://datahub.io DataHub]&lt;br /&gt;
*[http://www.execandshare.org/CompanionSite www.execandshare.org] - creates a companion website associated with a submitted paper to implement the methodology presented in the paper&lt;br /&gt;
*[https://www.datacite.org/contact Datacite] (citing data as DOI, Germany, has connections with CERN)&lt;br /&gt;
*[http://www.crossref.org CrossRef]&lt;br /&gt;
*[http://www.knowledge-exchange.info Knowledge Exchange]&lt;br /&gt;
*[http://www.doi.org International DOI Foundation]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind pilot live repository] (deprecated in 2014 for [http://cknowledge.org/repo Collective Knowledge])&lt;br /&gt;
*[http://ctuning.org/repo cTuning.org (performance tuning database, outdated)]&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=829</id>
		<title>Reproducibility:Articles</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Articles&amp;diff=829"/>
				<updated>2015-09-24T15:31:09Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted articles and presentations''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.pl-enthusiast.net/2015/09/01/pl-conference-papers-to-get-a-journal PL conference papers to get a journal] - discussing an ACM proposal to formally recognize conference publications as equal in quality to journal publications - controversial&lt;br /&gt;
*Our experience report on problems encountered in computer systems research and a new proposal for ''community-driven reviewing and validation of publications'' [[http://arxiv.org/abs/1406.4020 arXiv], [http://dl.acm.org/citation.cfm?id=2618142 ACM DL]]&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/33719/title/Science-s-Reproducibility-Problem Science's Reproducibility Problem], Bob Grant, The Scientist&lt;br /&gt;
*[http://www.cs.huji.ac.il/~feit/papers/Repeat15SIGOPS.pdf From Repeatability to Reproducibility and Corroboration], Dror G. Feitelson, SIGOPS Operating Syst. Rev. 49(1), pp. 3-11, Jan 2015&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/98560 Reviewing peer review], Jeannette M. Wing, ACM Blog&lt;br /&gt;
*[http://cacm.acm.org/blogs/blog-cacm/100284 How Should Peer Review Evolve?], Ed H. Chi, ACM Blog&lt;br /&gt;
*[http://crd.lbl.gov/~dhbailey/dhbpapers/twelve-ways.pdf Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers],  David H. Bailey, Supercomputing Review, June 11, 1991&lt;br /&gt;
*Ten Ways to Fool the Masses When Giving Performance Results on GPUs, Scott Pakin, HPCWire, December 13, 2011&lt;br /&gt;
*[http://www.cam.ac.uk/research/news/new-gold-standard-established-for-open-and-reproducible-research Cambridge University news about open and reproducible research]&lt;br /&gt;
*[http://reproduciblescience.blogspot.fr/2015/06/interesting-replicable-badge-for.html Interesting Replicable &amp;quot;Badge&amp;quot; for journal articles], Daniel S. Katz&lt;br /&gt;
*[http://thomasleeper.com/2015/05/open-science-language What's in a Name? The Concepts and Language of Replication and Reproducibility]&lt;br /&gt;
*[http://ehp.niehs.nih.gov/122-a188 Research Wranglers: Initiatives to Improve Reproducibility of Study Findings] (Environmental Health Perspectives)&lt;br /&gt;
*[https://web.stanford.edu/~vcs/talks/VictoriaStoddenICML2010.pdf Reproducible Research in Computational Science: Problems and Solutions For Data and Code Sharing], Victoria Stodden, ICML workshop 2010&lt;br /&gt;
*[http://www.the-scientist.com/?articles.view/articleNo/32426/title/Predatory-Publishing Predatory publishing]&lt;br /&gt;
*[http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review Three myths about scientific peer review]&lt;br /&gt;
*[http://spyros.zoupanos.net/papers/p39.open-repeatability.pdf The repeatability experiment of SIGMOD 2008], SIGMOD Record, vol. 37, no. 1 (March 2008), 39-45. &lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=1536616.1536619 &amp;quot;Begin with an Author's response to Reviews&amp;quot;] - proposal to submit past reviews along with articles&lt;br /&gt;
*[https://www.dropbox.com/s/67n49i9k3cb28mo/GOBLE-CW2014.ppt Presentation by Professor Carole Goble]&lt;br /&gt;
*Dennis McCafferty, [http://dl.acm.org/citation.cfm?id=1831415 &amp;quot;Should Code be Released?&amp;quot;]&lt;br /&gt;
*[http://www.elsevier.com/connect/can-data-be-peer-reviewed Elsevier:Can data be peer-reviewed?]&lt;br /&gt;
*[http://www.executablepapers.com Elsevier:The Executable Paper Grand Challenge]&lt;br /&gt;
*[http://www.sciencemag.org/content/334/6060/1225.full Special section on Data replication &amp;amp; reproducibility], Science magazine, 2 December 2011&lt;br /&gt;
*Chris Drummond, [http://cogprints.org/7691/7/ICMLws09.pdf &amp;quot;Replicability is not Reproducibility: Nor is it Good Science&amp;quot;]&lt;br /&gt;
*[http://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998 Science is in a reproducibility crisis - how do we resolve it?]&lt;br /&gt;
*[http://www.w3.org/2001/sw/hcls/notes/hcls-dataset W3C Dataset Descriptions: HCLS Community Profile]&lt;br /&gt;
*My blog article on [http://www.software.ac.uk/blog/2014-07-22-automatic-performance-tuning-and-reproducibility-side-effect &amp;quot;Automatic performance tuning and reproducibility as a side effect&amp;quot;] for the Software Sustainability Institute&lt;br /&gt;
*[http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble Trouble at the Lab], The Economist, 2013&lt;br /&gt;
*[https://www.force11.org/group/joint-declaration-data-citation-principles-final Joint Declaration of Data Citation Principles]&lt;br /&gt;
*[http://www.scientificamerican.com/article/puzzling-measurement-of-big-g-gravitational-constant-ignites-debate-slide-show/ Puzzling Measurement of &amp;quot;Big G&amp;quot; Gravitational Constant Ignites Debate]&lt;br /&gt;
*[http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf International vocabulary of metrology – Basic and general concepts and associated terms (VIM)]&lt;br /&gt;
*[http://retractionwatch.com/2014/09/05/white-house-takes-notice-of-reproducibility-in-science-and-wants-your-opinion White House takes notice of reproducibility in science, and wants your opinion]&lt;br /&gt;
*Problems during performance benchmarking:&lt;br /&gt;
**[https://homes.cs.washington.edu/~bornholt/post/performance-evaluation.html]&amp;lt;br/&amp;gt;''We also experienced many similar issues during our work on auto-tuning and machine learning:''&lt;br /&gt;
**[http://hal.inria.fr/hal-01054763]&lt;br /&gt;
**[http://arxiv.org/abs/1406.4020]&lt;br /&gt;
*[http://dl.acm.org/citation.cfm?id=2723872 ACM SIGOPS Operating Systems Review - Special Issue on Repeatability and Sharing of Experimental Artifacts]&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;amp;arnumber=5768098 Vinton G. Cerf. &amp;quot;Bit Rot: Long-Term Preservation of Digital Information&amp;quot; [Point of View]]&lt;br /&gt;
*[http://www.nature.com/news/the-top-100-papers-1.16224 About citations: less related though interesting]&lt;br /&gt;
&lt;br /&gt;
=== Journals with artifact sharing and evaluation ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.ipol.im IPOL Journal: Image Processing On Line]&lt;br /&gt;
*[http://netlib.org/toms ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE (TOMS)]&lt;br /&gt;
*[http://db-reproducibility.seas.harvard.edu ACM SIGMOD 2015]&lt;br /&gt;
&lt;br /&gt;
=== National requirements ===&lt;br /&gt;
* [https://www.epsrc.ac.uk/files/aboutus/standards/clarificationsofexpectationsresearchdatamanagement UK EPSRC expectations on research data management]&lt;br /&gt;
* University of Manchester, UK:&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/policy&lt;br /&gt;
** http://www.library.manchester.ac.uk/services-and-support/staff/research/services/research-data-management/data-management-planning&lt;br /&gt;
* Cornell University, USA: [http://data.research.cornell.edu/content/best-practices Best Practices]&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=828</id>
		<title>Reproducibility:Repositories</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=828"/>
				<updated>2015-09-24T15:27:53Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]'''''&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted repositories''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - sharing and reusing artifacts with distributed ID via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[http://gigadb.org GigaDB]&lt;br /&gt;
*[http://galaxyproject.org Data Intensive Biology]&lt;br /&gt;
*[http://www.galaxyzoo.org/?lang=en GalaxyZoo] (classification of galaxies)&lt;br /&gt;
*[https://olivearchive.org Olive Archive] (preserving executable content)&lt;br /&gt;
*[http://openscience.us/repo Tera-PROMISE]&lt;br /&gt;
*[https://www.openaire.eu OpenAIRE (CERN)]&lt;br /&gt;
*[https://zenodo.org Zenodo]&lt;br /&gt;
*[http://researchcompendia.org ResearchCompendia]&lt;br /&gt;
*[http://www.researchobject.org ResearchObject]&lt;br /&gt;
*[http://www.researchobject.org/initiative Other initiatives related to ResearchObject]&lt;br /&gt;
*[https://archive.org Internet Archive]&lt;br /&gt;
*[http://www.nationalarchives.gov.uk The National Archives]&lt;br /&gt;
*[http://www.dcc.ac.uk Digital Curation Center]&lt;br /&gt;
*[http://riojournal.com Research Ideas and Outcome Journal] - controversial&lt;br /&gt;
*[http://www.wikidata.org/wiki/Wikidata:Introduction WikiData]&lt;br /&gt;
*[http://www.dpn.org The Digital Preservation Network]&lt;br /&gt;
*[http://www.ddialliance.org Data Documentation Initiative]&lt;br /&gt;
*[http://recomputation.org recomputation.org]&lt;br /&gt;
*[https://open-data.europa.eu EU Open Data Portal]&lt;br /&gt;
*[http://rmap-project.info/rmap RMap Project] - preserves the complex many-to-many relationships among scholarly publications and their underlying data&lt;br /&gt;
*[http://datahub.io DataHub]&lt;br /&gt;
*[http://www.execandshare.org/CompanionSite www.execandshare.org] - creates a companion website associated with a submitted paper to implement the methodology presented in the paper&lt;br /&gt;
*[https://www.datacite.org/contact DataCite] (data citation via DOIs; based in Germany, with connections to CERN)&lt;br /&gt;
*[http://www.crossref.org CrossRef]&lt;br /&gt;
*[http://www.doi.org International DOI Foundation]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind pilot live repository] (deprecated in 2014 in favour of [http://cknowledge.org/repo Collective Knowledge])&lt;br /&gt;
*[http://ctuning.org/repo cTuning.org (performance tuning database, outdated)]&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=827</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=827"/>
				<updated>2015-09-24T15:27:24Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] (automatically packs experiments; see the [https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com R-studio] (Open source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in the cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data produced by biological applications such as RNA-seq, sRNA-seq, ChIP-seq and BS-seq)&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (A Process and Infrastructure for Distributed Continuous Quality Assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (Simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (Rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open-source cTuning technology] (crowdsourced auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;br /&gt;
*[http://orgmode.org Org mode] - keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=826</id>
		<title>Reproducibility:Links</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Links&amp;diff=826"/>
				<updated>2015-09-24T15:24:35Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]''''' ]&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted (offline and online) tools''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - preserve (with distributed ID), organize, describe, share and reuse your code and data via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[https://www.docker.io Docker tool] (pack, ship and run applications as a lightweight container)&lt;br /&gt;
*[http://vida-nyu.github.io/reprozip ReproZip] (automatically packs experiments; see the [https://www.usenix.org/conference/tapp13/technical-sessions/presentation/chirigati related article])&lt;br /&gt;
*[http://reproducible.io CARE tool from STMicroelectronics] (Comprehensive Archiver for Reproducible Execution)&lt;br /&gt;
*[http://rr-project.org RR] (Mozilla project: records nondeterministic executions and debugs them deterministically)&lt;br /&gt;
*[http://www.pgbovine.net/cde.html CDE tool] (automatically create portable Linux applications with all dependencies)&lt;br /&gt;
*[http://ipython.org/notebook.html IPython Notebook] (a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document)&lt;br /&gt;
*[http://www.rstudio.com R-studio] (Open source and enterprise-ready professional software for R)&lt;br /&gt;
*[https://www.codalab.org CodaLab] (an experimental platform for collaboration and competition)&lt;br /&gt;
*[http://figshare.com FigShare] (managing research in the cloud)&lt;br /&gt;
*[http://www.runmycode.org RunMyCode] (online workflows)&lt;br /&gt;
*[https://www.aptlab.net AptLab] (online workflows)&lt;br /&gt;
*[http://www.taverna.org.uk Taverna] (designing and executing workflows)&lt;br /&gt;
*[http://boinc.berkeley.edu BOINC] (open-source software for volunteer computing and grid computing)&lt;br /&gt;
*[https://mulcyber.toulouse.inra.fr/projects/ngspipelines NGS pipelines] (integrates pipelines and user interfaces to help biologists analyse data produced by biological applications such as RNA-seq, sRNA-seq, ChIP-seq and BS-seq)&lt;br /&gt;
*[http://www.cs.umd.edu/projects/skoll/Skoll/Home.html Skoll] (A Process and Infrastructure for Distributed Continuous Quality Assurance)&lt;br /&gt;
*[http://nepi.inria.fr NEPI] (Simplifying network experimentation)&lt;br /&gt;
*[http://pgbovine.net/burrito.html Burrito] (Rethinking the Electronic Lab Notebook)&lt;br /&gt;
*[http://hal.inria.fr/inria-00436029 open-source cTuning technology] (crowdsourced auto-tuning combined with machine learning, big data and predictive analytics) (2006-cur.)&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind technology] (towards collaborative, systematic and reproducible computer engineering using big data, predictive analytics and collective intelligence) (2011-2014)&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	<entry>
		<id>http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=825</id>
		<title>Reproducibility:Repositories</title>
		<link rel="alternate" type="text/html" href="http://ctuning.org/cm/wiki/index.php?title=Reproducibility:Repositories&amp;diff=825"/>
				<updated>2015-09-24T15:20:28Z</updated>
		
		<summary type="html">&lt;p&gt;Gfursin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[ '''''[[Reproducibility|Back to main page]]'''''&lt;br /&gt;
&lt;br /&gt;
=== '''Assorted repositories''' ===&lt;br /&gt;
&lt;br /&gt;
*[http://github.com/ctuning/ck Collective Knowledge] - sharing and reusing artifacts with distributed ID via GitHub&lt;br /&gt;
**[http://cknowledge.org/repo CK-based interactive article]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-clsmith PLDI'15 artifact converted to CK format]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-pamela-project Artifact repository for UK PAMELA project]&lt;br /&gt;
**[http://github.com/ctuning/reproduce-carp-project Artifact repository for EU FP7 CARP project]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-programs Example of benchmarks shared in CK format]&lt;br /&gt;
**[http://github.com/ctuning/ctuning-datasets-min Example of data sets shared in CK format]&lt;br /&gt;
**[https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&amp;amp;usp=sharing Examples of CK repositories shared as zip files]&lt;br /&gt;
*[http://gigadb.org GigaDB]&lt;br /&gt;
*[http://galaxyproject.org Data Intensive Biology]&lt;br /&gt;
*[http://www.galaxyzoo.org/?lang=en GalaxyZoo] (classification of galaxies)&lt;br /&gt;
*[https://olivearchive.org Olive Archive] (preserving executable content)&lt;br /&gt;
*[http://openscience.us/repo Tera-PROMISE]&lt;br /&gt;
*[https://www.openaire.eu OpenAIRE (CERN)]&lt;br /&gt;
*[https://zenodo.org Zenodo]&lt;br /&gt;
*[http://researchcompendia.org ResearchCompendia]&lt;br /&gt;
*[http://www.researchobject.org ResearchObject]&lt;br /&gt;
*[http://www.researchobject.org/initiative Other initiatives related to ResearchObject]&lt;br /&gt;
*[https://archive.org Internet Archive]&lt;br /&gt;
*[http://www.nationalarchives.gov.uk The National Archives]&lt;br /&gt;
*[http://www.dcc.ac.uk Digital Curation Center]&lt;br /&gt;
*[http://riojournal.com Research Ideas and Outcome Journal] - controversial&lt;br /&gt;
*[http://www.wikidata.org/wiki/Wikidata:Introduction WikiData]&lt;br /&gt;
*[http://www.dpn.org The Digital Preservation Network]&lt;br /&gt;
*[http://www.ddialliance.org Data Documentation Initiative]&lt;br /&gt;
*[http://recomputation.org recomputation.org]&lt;br /&gt;
*[https://open-data.europa.eu EU Open Data Portal]&lt;br /&gt;
*[http://rmap-project.info/rmap RMap Project] - preserves the complex many-to-many relationships among scholarly publications and their underlying data&lt;br /&gt;
*[http://datahub.io DataHub]&lt;br /&gt;
*[http://www.execandshare.org/CompanionSite www.execandshare.org] - creates a companion website associated with a submitted paper to implement the methodology presented in the paper&lt;br /&gt;
*[https://www.datacite.org/contact DataCite] (data citation via DOIs; based in Germany, with connections to CERN)&lt;br /&gt;
*[http://www.crossref.org CrossRef]&lt;br /&gt;
*[http://www.doi.org International DOI Foundation]&lt;br /&gt;
*[http://c-mind.org/repo Collective Mind pilot live repository] (no longer supported; development moved to Collective Knowledge)&lt;br /&gt;
*[http://ctuning.org/repo cTuning.org (performance tuning database, outdated)]&lt;br /&gt;
*[http://www.occamportal.org OCCAM project] - open curation for computer architecture modeling&lt;/div&gt;</summary>
		<author><name>Gfursin</name></author>	</entry>

	</feed>