'''<span style="font-size:large">Call for papers</span>'''
It is becoming excessively challenging, or even impossible, to capture, share and accurately reproduce experimental results in computer engineering for fair and trustable evaluation and future improvement. This is due to the ever-rising complexity of the design, analysis and optimization of computer systems, the increasing number of ad-hoc tools, interfaces and techniques, the lack of a common experimental methodology, and the lack of simple and unified mechanisms, tools and repositories to preserve and exchange knowledge apart from numerous publications, where reproducibility is often not even considered. After focusing on systematic and community-driven program and architecture auto-tuning and co-design combined with machine learning and crowdsourcing [http://cTuning.org during the past 6 years], we have faced numerous practical challenges related to reproducible experimentation. Based on this experience and feedback from the community, we decided to organize this workshop as an interdisciplinary forum for academic and industrial researchers, practitioners and developers in computer engineering to discuss challenges, ideas, experience, and trustable and reproducible research methodologies, practical techniques, tools and repositories to:
*capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results, including negative ones
*TBA
'''<span style="font-size:large">Related projects and initiatives</span>'''
*[http://adapt-workshop.org/program.htm ADAPT panel on reproducible research methodologies and new publication models] (January 2014)
*[http://c-mind.org/repo Collective Mind technology for collaborative, systematic and reproducible computer engineering]
*[http://hal.inria.fr/inria-00436029 cTuning technology for collaborative and reproducible auto-tuning and machine learning] (2006-2011)
Revision as of 13:05, 15 December 2013
'''TRUST 2014'''

1st International Workshop on Reproducible Research Methodologies and New Publication Models

June 12, 2014 (afternoon), Edinburgh, UK (co-located with PLDI 2014)

We particularly focus on technological aspects to enable reproducible research and experimentation in computer engineering.
'''<span style="font-size:large">Submission guidelines</span>'''

The EasyChair submission website is open. We invite papers in three categories (please use these prefixes for your submission title):
Submissions should be in PDF, formatted with double columns and single spacing using 10pt fonts, and printable on US letter or A4 sized paper. Accepted papers can be published online on the conference website; this will not prevent later publication of extended versions. The proceedings of the full papers and extended abstracts presenting new work will be published in the ACM Digital Library (currently being arranged).

'''<span style="font-size:large">Important dates</span>'''
'''<span style="font-size:large">Workshop organizers</span>'''
'''<span style="font-size:large">Program committee</span>'''