Paper decision:
Artifact submission: January 18, 2019
Decision announced:
Camera-ready paper:
Conference:
Public discussion: Fisher Conference Center
MLSys'19 AE Chairs
Grigori Fursin (cTuning foundation / dividiti)
Gennady Pekhimenko (University of Toronto)
MLSys'19 AE Committee
Reproducibility Initiative for MLSys 2019
It is becoming increasingly difficult to reproduce results from systems and ML papers. Voluntary Artifact Evaluation (AE) was successfully introduced at systems conferences and tournaments (ReQuEST, PPoPP, CGO, and Supercomputing) to validate experimental results via an independent AE Committee, share unified Artifact Appendices, and assign reproducibility badges.
MLSys also promotes reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches. Authors of accepted MLSys'19 papers are invited to formally describe supporting material (code, data, models, workflows, results) using the standard Artifact Appendix template and submit it to the Artifact Evaluation (AE) process. Note that this submission is voluntary and will not influence the final decision regarding the papers. The goal is to help authors validate the experimental results from their accepted papers with an independent AE Committee in a collaborative way, and to help readers find articles with available, functional, and validated artifacts. For example, the ACM Digital Library already allows one to find papers with available artifacts and reproducible results!
You need to prepare your artifacts and appendix using the following guidelines. You can then submit your paper with the Artifact Appendix via the dedicated MLSys AE website before January 18, 2019. Your submission will then be reviewed according to the following guidelines. Please do not forget to provide a list of hardware, software, benchmark, and data set dependencies in your artifact abstract: this is essential for finding appropriate evaluators! A minimal sketch of such a dependency check-list is shown below.
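To make the dependency list concrete, here is a minimal LaTeX sketch of an Artifact Appendix with the check-list items evaluators look for. The section and item names follow the usual cTuning/ACM Artifact Appendix structure as we recall it; all concrete values (algorithm, hardware, OS, metrics) are placeholders for illustration, so consult the official template for the exact layout.

```latex
% Minimal Artifact Appendix sketch (placeholder values for illustration;
% follow the official MLSys'19 Artifact Appendix template for the exact structure).
\appendix
\section{Artifact Appendix}

\subsection{Abstract}
% One paragraph: what the artifact contains and which results it supports.

\subsection{Artifact check-list (meta-information)}
\begin{itemize}
  \item {\bf Algorithm:} e.g., distributed SGD              % placeholder
  \item {\bf Program:} e.g., PyTorch benchmark scripts      % placeholder
  \item {\bf Data set:} small sample included; full data set downloadable
  \item {\bf Hardware:} e.g., 1x NVIDIA V100 GPU, 32 GB RAM % placeholder
  \item {\bf Run-time environment:} e.g., Ubuntu 18.04, CUDA 10 % placeholder
  \item {\bf Metrics:} e.g., throughput, accuracy           % placeholder
  \item {\bf Output:} e.g., CSV logs and plots              % placeholder
  \item {\bf Publicly available?:} yes (DOI or URL)
\end{itemize}

\subsection{Installation}
\subsection{Experiment workflow}
\subsection{Evaluation and expected result}
```

Listing hardware, software, benchmark, and data set dependencies up front in this check-list is what allows the AE chairs to match your artifact with evaluators who have access to compatible machines.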
AE is run by a separate committee whose task is to assess how submitted artifacts support the work described in accepted papers, based on the standard ACM Artifact Review and Badging policy. Since full validation of AI/ML experiments can be very time consuming and may require expensive computational resources, we decided to check only whether submitted artifacts are "available" and "functional" at MLSys'19. However, you still need to provide a small sample data set to test the functionality of your artifact. Thus, depending on the evaluation results, camera-ready papers will include the Artifact Appendix and will receive at most two ACM stamps of approval printed on their first page.
If you have questions, please check the AE FAQs, discuss them via the dedicated AE Google group, or get in touch with the MLSys AE chairs!