
Revision as of 06:19, 13 December 2013

Trust2014 logo1.jpg

TRUST 2014

1st International Workshop on Reproducible Research Methodologies and New Publication Models

June 12, 2014 (afternoon), Edinburgh, UK

(co-located with PLDI 2014)


Call for papers

Due to the ever-rising complexity of the design, analysis and optimization of computer systems, the growing number of ad-hoc tools, interfaces and techniques, the lack of a common experimental methodology, and the lack of simple and unified mechanisms, tools and repositories to preserve and exchange knowledge beyond numerous publications where reproducibility is often not even considered, it is becoming excessively challenging or even impossible to capture, share and accurately reproduce experimental results in computer engineering for fair and trustworthy evaluation and future improvement. This workshop is an interdisciplinary forum for academic and industrial researchers, practitioners and developers in computer engineering to discuss ideas, experiences, and trustworthy and reproducible research methodologies, as well as practical techniques, tools and repositories to:

  • capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results, including negative results
  • describe and catalog whole experimental setups with all related material, including algorithms, benchmarks, codelets, datasets, tools, models and any other artifacts
  • validate and verify experimental results by the community
  • develop common research interfaces for existing or new tools
  • deal with variability and the rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques
  • implement previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
  • implement open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
  • enable interactive articles

Submission guidelines

  • TBA

Important dates

  • Abstract submission: March 7, 2014. (Anywhere on Earth)
  • Paper submission: March 14, 2014. (Anywhere on Earth)
  • Notification: April 14, 2014
  • Final version: May 2, 2014
  • Workshop: June 12, 2014

Workshop organizers

Program committee

  • TBA

Related initiatives

  • Collective Mind technology for collaborative and reproducible computer engineering (http://c-mind.org/repo)
  • cTuning technology for collaborative and reproducible auto-tuning and machine learning (http://cTuning.org)
  • OCCAM project for reproducible computer architecture simulation (http://www.occamportal.org)
  • Artifact evaluation at OOPSLA'13 (http://splashcon.org/2013/cfp/665)
  • Artifact evaluation at PLDI'14 (http://pldi14-aec.cs.brown.edu/)


(C) 2011-2014 cTuning foundation