{| style="width: 100%" border="0" cellpadding="10" cellspacing="1"
|- valign="top"
| width="170" | <p style="text-align: center">[[File:Validate by community.png|link=http://cTuning.org/reproducibility]]</p><p style="text-align: center">'''Important dates:'''</p>
<span style="font-size:small"><span style="color: rgb(255, 0, 0)">'''''Abstract submission: March 7, 2014 (Anywhere on Earth)'''''</span><br/>Paper submission: March 14, 2014 (Anywhere on Earth)<br/>Notification: April 14, 2014<br/>Final version: May 2, 2014<br/>Workshop: June 12, 2014<br/></span>
<p style="text-align: center">'''Collective Mind project:'''</p><p style="text-align: center"><br/>[&nbsp;[http://c-mind.org/repo Live repository]&nbsp;], [&nbsp;[http://cTuning.org/tools/cm Framework]&nbsp;]</p>
| <p style="text-align: center">'''<span style="font-size:x-large">TRUST 2014</span>'''</p><p style="text-align: center"><span style="font-size:medium">1st International Workshop on Reproducible Research Methodologies and New Publication Models</span></p><p style="text-align: center">June 12, 2014 (afternoon), Edinburgh, UK</p><p style="text-align: center">(co-located with [http://conferences.inf.ed.ac.uk/pldi2014 PLDI 2014])</p><p style="text-align: center"><span style="color:#ff0000">''We particularly focus on technological aspects to enable reproducible research and experimentation in computer engineering''</span></p>
'''<span style="font-size:large">Call for papers</span>'''
 
Due to the ever-rising complexity of the design, analysis and optimization of computer systems, the growing number of ad-hoc tools, interfaces and techniques, the lack of a common experimental methodology, and the lack of simple and unified mechanisms, tools and repositories to preserve and exchange knowledge beyond numerous publications (where reproducibility is often not even considered), it is becoming excessively challenging, or even impossible, to capture, share and accurately reproduce experimental results in computer engineering for fair and trustable evaluation and future improvement. After focusing on systematic and community-driven program and architecture auto-tuning and co-design combined with machine learning and crowdsourcing [http://cTuning.org during the past six years], we have faced numerous practical challenges related to reproducible experimentation. Based on this experience and on feedback from the community, we decided to organize this workshop as an interdisciplinary forum for academic and industrial researchers, practitioners and developers in computer engineering to discuss challenges, ideas, experience, and trustable and reproducible research methodologies, practical techniques, tools and repositories to:
 
*capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
*describe and catalog whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact
*validate and verify experimental results by the community
*develop common research interfaces for existing or new tools
*develop common experimental frameworks and repositories
*share rare hardware and computational resources for experimental validation
*deal with variability and rising amount of experimental data using statistical analysis, data mining, predictive modeling and other techniques
*implement previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
*implement open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
*enable interactive articles
 
'''<span style="font-size:large">Submission guidelines</span>'''
 
'''''The EasyChair submission website is [https://www.easychair.org/conferences/?conf=trust20140 open].'''''
 
We invite papers in three categories (''please use these prefixes for your submission title''):
 
*'''T1:''' Extended abstracts should be at most 3 pages long (excluding bibliography). We welcome preliminary and exploratory work, presentation of related tools and repositories in development, experience reports, and wild & crazy ideas.
*'''T2:''' Full papers should be at most 6 pages long (excluding bibliography). Papers in this category are expected to have relatively mature content.
*'''T3:''' Papers validating and sharing past research on design and optimization of computer systems published in relevant conferences. These papers should be at most 6 pages long (excluding bibliography).
 
Submissions should be in PDF format, double-column and single-spaced, using 10pt fonts, and printable on US letter or A4 paper. All papers will be peer-reviewed. Accepted papers may be published online on the workshop website; this will not prevent later publication of extended versions. We are currently arranging for the proceedings to be published in the ACM Digital Library.
 
'''<span style="font-size:large">Important dates</span>'''
 
*<span style="color:#ff0000">'''''Abstract submission: March 7, 2014 (Anywhere on Earth)'''''</span>
*Paper submission: March 14, 2014 (Anywhere on Earth)
*Notification: April 14, 2014
*Final version: May 2, 2014
*Workshop: June 12, 2014
 
'''<span style="font-size:large">Workshop organizers</span>'''
 
*[http://cTuning.org/lab/people/gfursin Grigori Fursin], INRIA, France (Collective Mind / cTuning project)
*[http://people.cs.pitt.edu/~childers Bruce Childers], [http://www.pitt.edu/~akjones Alex K. Jones] and [http://people.cs.pitt.edu/~mosse Daniel Mosse], University of Pittsburgh, USA (OCCAM project)
 
'''<span style="font-size:large">Program committee</span>'''
 
*TBA
 
'''<span style="font-size:large">Related projects and initiatives</span>'''
 
*[http://adapt-workshop.org/program.htm ADAPT panel on reproducible research methodologies and new publication models] (January 2014)
*[http://c-mind.org/repo Collective Mind technology for collaborative, systematic and reproducible computer engineering]
*[http://hal.inria.fr/inria-00436029 cTuning technology for collaborative and reproducible auto-tuning and machine learning] (2006-2011)
*[http://www.occamportal.org OCCAM project for reproducible computer architecture simulation]
*[http://Splashcon.org/2013/cfp/665 Artifact evaluation at OOPSLA'13]
*[http://pldi14-aec.cs.brown.edu/ Artifact evaluation at PLDI'14]
 
|}

Revision as of 21:37, 15 December 2013


(C) 2011-2014 cTuning foundation