'''<span style="font-size:large">Call for papers</span>'''
It has become excessively challenging, or even impossible, to capture, share and accurately reproduce experimental results in computer engineering for fair and trustable evaluation and future improvement. This is often due to the ever-rising complexity of the design, analysis and optimization of computer systems, an increasing number of ad-hoc tools, interfaces and techniques, the lack of a common experimental methodology, and the lack of simple and unified mechanisms, tools and repositories to preserve and exchange knowledge apart from numerous publications, where reproducibility is often not even considered. After focusing on [http://cTuning.org systematic and community-driven program and architecture auto-tuning and co-design combined with machine learning and crowdsourcing] during the past years, we have also faced numerous practical challenges related to reproducible experimentation.

Therefore, we organize this workshop as an interdisciplinary forum for academic and industrial researchers, practitioners and developers in computer engineering to discuss challenges, ideas, experience, trustable and reproducible research methodologies, and practical techniques, tools and repositories to:
*capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results, including negative ones
*implement previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
*implement open access to publications and data (particularly discussing intellectual property (IP) and legal issues)
*improve the reviewing process
*enable interactive articles
Revision as of 11:14, 27 January 2014
'''<span style="font-size:large">Important dates</span>'''
*Abstract submission: March 7, 2014 (Anywhere on Earth)

*LinkedIn group [ Link ]
*Collective Mind project:
*OCCAM project [ Link ]

'''<span style="font-size:large">Sponsors</span>'''
If your company or institution is interested in becoming our sponsor, please don't hesitate to contact us!
'''<span style="font-size:large">TRUST 2014</span>'''

1st ACM SIGPLAN Workshop on Reproducible Research Methodologies, co-located with PLDI 2014

June 12, 2014 (afternoon), Edinburgh, UK

We particularly focus on technological aspects to enable reproducible research and experimentation on program and architecture empirical analysis, optimization, simulation, co-design and run-time adaptation.
'''<span style="font-size:large">Submission guidelines</span>'''

The EasyChair submission website is open. We invite papers in three categories (please use these prefixes for your submission title):
Submissions should be in PDF, formatted in double-column, single-spaced style using 10pt fonts, and printable on US letter or A4 paper. All papers will be peer-reviewed. Accepted papers can be published online on the workshop website; this will not prevent later publication of extended versions. We are currently arranging for the proceedings to be published in the ACM Digital Library.
'''<span style="font-size:large">Workshop organizers</span>'''

'''<span style="font-size:large">Program committee</span>'''

'''<span style="font-size:large">Assorted related projects and initiatives</span>'''