Enabling open science
We have developed our open source Collective Knowledge framework and live repository to enable collaborative and reproducible experimentation with an open publication model. Here we collect links to various related initiatives, repositories, tools, articles and events - feel free to help us update this page or discuss this community-driven initiative via the CK mailing list and our LinkedIn group on reproducibility!

News:

- New version of our open repository for collaborative and reproducible SW/HW optimization and co-design is available!
- New version of our Android-based application for experiment crowdsourcing has been released!
- We are organizing the upcoming PPoPP'17 artifact evaluation
- We are organizing the upcoming CGO'17 artifact evaluation
- Dagstuhl report on Artifact Evaluation is online
- Artifact Evaluation for computer systems conferences, workshops and journals, including PPoPP, CGO and PACT
- ADAPT'16 @ HiPEAC'16 was the first event to feature our open publication model with community-driven reviewing, public Reddit-based discussions and artifact evaluation
- Dagstuhl perspective workshop on artifact evaluation for conferences and journals
Since 2006 we have been addressing the reproducibility of experimental results in computer engineering as a side effect of our MILEPOST, cTuning.org, Collective Mind and Collective Knowledge projects, which aim to speed up optimization, benchmarking and co-design of computer systems and neural networks using multi-objective autotuning, big data, predictive analytics and experiment crowdsourcing (see our experience report and the history of the cTuning foundation).
We now focus on the following technological and social aspects to enable collaborative, systematic and reproducible research and experimentation, particularly for benchmarking, optimization and co-design of faster, smaller, cheaper, more power-efficient and more reliable software and hardware:
- developing a collaborative research and experimentation infrastructure that can share artifacts as reusable components together with whole experimental setups (see P1, P2);
- developing public and open source repositories of knowledge (see our live repository and our vision papers P1, P2);
- evangelizing and enabling a new open publication model for online workshops, conferences and journals (see our proposal arXiv / ACM DL);
- setting up and improving procedures for sharing and evaluating experimental results and all related material for workshops, conferences and journals (see our proposal arXiv / ACM DL);
- improving the sharing, dependency description and statistical reproducibility of experimental results and related material (a sketch of such a machine-readable setup description follows this list);
- supporting and improving Artifact Evaluation for major workshops and conferences including PPoPP, CGO and PACT.
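To make this concrete, here is a minimal sketch of how a whole experimental setup could be described as a reusable component via a machine-readable meta description; the schema, names and dependency entries below are illustrative assumptions, not the official CK format.

```python
# Minimal illustrative sketch: describing an experimental setup as a
# reusable component via a JSON meta description. The schema below is a
# hypothetical example inspired by CK-style JSON meta, not the official format.
import json

meta = {
    "artifact": "image-classification-demo",      # hypothetical artifact name
    "command": "./classify $#dataset#$",           # templated invocation
    "dependencies": {                              # software the setup needs
        "compiler": {"name": "gcc", "version": ">= 4.9"},
        "library":  {"name": "opencv", "version": "2.x"},
    },
    "datasets": ["sample-images"],                 # hypothetical dataset entry
    "characteristics": ["execution_time", "binary_size", "energy"],
}

# Store the description next to the artifact so tools can parse it.
with open("meta.json", "w") as f:
    json.dump(meta, f, indent=2)
```

Keeping such a meta description next to the artifact lets a framework resolve dependencies automatically and re-execute the experiment on another machine.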
We also maintain the following collections and resources:

- Collection of related tools
- Collection of related initiatives
- Collection of related benchmarks and data sets
- Collection of public repositories
- Collection of related lectures
- Collection of related articles
- Collection of related blogs
- Collection of jokes
- Collection of related events
Related events and artifact evaluation initiatives:

- Artifact Evaluation procedures for computer systems conferences
- PPoPP'17 artifact evaluation
- CGO'17 artifact evaluation
- RTSS'16 artifact evaluation
- PACT'16 artifact evaluation
- PPoPP'16 artifact evaluation
- CGO'16 artifact evaluation
- ADAPT'16 @ HiPEAC'16 - workshop on adaptive self-tuning computer systems
- PPoPP'15 artifact evaluation
- CGO'15 artifact evaluation
- ADAPT'15 @ HiPEAC'15 - workshop on adaptive self-tuning computer systems
- ADAPT'14 @ HiPEAC'14 - workshop on adaptive self-tuning computer systems [program and publications]
- Special journal issue on Reproducible Research Methodologies at IEEE TETC
- ACM SIGPLAN TRUST'14 @ PLDI'14
- REPRODUCE'14 @ HPCA'14
- ADAPT'14 panel @ HiPEAC'14
- HiPEAC'13 CSW thematic session @ ACM ECRC "Making computer engineering a science"
- HiPEAC'12 CSW thematic session
- ASPLOS/EXADAPT'12 panel @ ASPLOS'12
- cTuning lectures (2008-2010)
- GCC Summit'09 discussion
Together with the community, the non-profit cTuning foundation and dividiti, we are working on the following topics to enable open research:
- developing tools and methodology to capture, preserve, formalize, systematize, exchange and improve knowledge and experimental results including negative ones
- describing and cataloging whole experimental setups with all related material including algorithms, benchmarks, codelets, datasets, tools, models and any other artifact
- developing specification to preserve experiments including all software and hardware dependencies
- dealing with variability and the rising volume of experimental data using statistical analysis, data mining, predictive modeling and other techniques (a sketch of such variability analysis follows this list)
- developing new predictive analytics techniques to explore large design and optimization spaces (a random-sampling baseline sketch also follows this list)
- validating and verifying experimental results by the community
- developing common research interfaces for existing or new tools
- developing common experimental frameworks and repositories (to enable automation, re-execution and sharing of experiments)
- sharing rare hardware and computational resources for experimental validation
- implementing previously published experimental scenarios (auto-tuning, run-time adaptation) using common infrastructure
- implementing open access to publications and data (particularly regarding intellectual property (IP) and legal issues)
- speeding up analysis of "big" experimental data
- developing new (interactive) visualization techniques for "big" experimental data
- enabling interactive articles
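As a minimal sketch of the variability analysis mentioned above (the timings and the 5% stability threshold are illustrative assumptions), repeated measurements can be summarized and flagged before any single number is reported:

```python
# Sketch: summarizing repeated measurements to expose variability
# before reporting a result; the 5% stability threshold is an assumption.
import statistics

def summarize(times):
    """Return basic statistics for repeated timings (in seconds)."""
    mean = statistics.mean(times)
    stdev = statistics.stdev(times) if len(times) > 1 else 0.0
    cv = stdev / mean if mean else float("inf")  # coefficient of variation
    return {"min": min(times), "mean": round(mean, 3),
            "stdev": round(stdev, 3), "stable": cv < 0.05}

# One outlier, e.g. caused by frequency scaling or a background process:
print(summarize([1.02, 1.01, 1.45, 1.03, 1.02]))
```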
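Similarly, here is a minimal sketch of exploring a large optimization space by plain random sampling - the usual baseline against which predictive analytics techniques are compared; the flag space and the measure() stub are hypothetical.

```python
# Sketch: random sampling of a compiler-flag space as a baseline for
# design-space exploration; the flags and measure() are hypothetical stubs.
import itertools
import random

FLAGS = {
    "-O": ["1", "2", "3"],
    "-funroll-loops": [True, False],
    "-ftree-vectorize": [True, False],
}

def measure(config):
    """Stub standing in for 'compile with config, run, return time'."""
    return random.uniform(0.5, 2.0)

space = list(itertools.product(*FLAGS.values()))
sample = random.sample(space, k=5)  # evaluate only a fraction of the space
best = min(sample, key=lambda cfg: measure(dict(zip(FLAGS, cfg))))
print("best sampled configuration:", dict(zip(FLAGS, best)))
```

A predictive model would replace the blind sampling step, learning from already measured configurations to suggest promising unmeasured ones.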
Contacts and related links:

- LinkedIn group on reproducible research
- Main mailing list (open research powered by Collective Knowledge framework)
- cTuning foundation mailing list (collaborative computer systems research)
- Grigori Fursin's Twitter
- cTuning foundation Twitter
- cTuning foundation Facebook page
- dividiti blog
- Collective Knowledge repository
- Outdated cTuning wiki page related to reproducible research and open publication model
- Outdated cTuning repository for program/processor performance/power/size optimization (2008-2010): database and web service for online prediction of optimizations (the first ML web service with a JSON API)
- Outdated Collective Mind repository for crowd-tuning (2012-2014): Link
We would like to thank our colleagues from the cTuning foundation, dividiti, artifact-eval.org and the OCCAM project for their help, feedback, participation and support.
If you would like to discuss our open science initiative, do not hesitate to get in touch via the CK mailing list!
CK development is coordinated by the non-profit cTuning foundation and dividiti.