Statistics 159/259, Spring 2022 Course Summary

Overview

This course teaches “the why and how” of reproducible and collaborative research, combining questions of good computational practice in science, open science, and statistical data analysis in the context of today’s research environment. We will interleave practical topics in software engineering and statistical computing with broader discussions of the philosophy of science and the foundations of statistics.

More details can be found in the syllabus.

Key Resources

  • Communication: class Piazza.

  • Lectures will be recorded and posted in the Kaltura system (visible via bCourses), but attendance is mandatory: much of the pedagogical value of the class is in participating in discussions and code reviews.

  • Course readings that are not easy to find free on the web or through the UC Berkeley Library will be posted to bCourses.

  • Computing resources

    • We will use Jupyter notebooks. We will start with hosted notebooks on our Stat 159 JupyterHub. Later in the term, we will discuss installing Jupyter on your own device. The JupyterHub server will have all the packages you need pre-installed.
    • The sources for class notes and most other materials are available on github, with a rendered version here.
    • Assignments should be submitted by pull request to your private repositories using GitHub Classroom.
    • Whenever you need to work with GitHub, remember to activate GitHub authentication from the JupyterHub by running the command github-app-user-auth at a terminal and following the instructions. If, once authenticated, you can't push to a given repo, you may have forgotten to add that repo/org to your setup of the authentication app; go here to configure the app's permissions.
  • A note on the Berkeley Library EZProxy: Some of the resources listed here are scientific articles available only behind journal paywalls. If you haven't already, you should configure your web browser to use the campus library EZProxy so you can access them even if you are working from an off-campus network.

Textbook and supporting materials

While it is not strictly a textbook for this course, we will rely heavily on the excellent, openly licensed Research Software Engineering in Python. We will complement it with these other scientific Python resources:

Other bibliography

Above is a list of books and websites focusing mostly on computational skills, and this is a list of the full bibliography we'll refer to in the course. Some of these will become assigned readings, while others are available for your reference.

PLOS Ten Simple Rules

The PLOS Ten Simple Rules collection has many short, valuable papers full of relevant, practical advice in this space. A few that stand out, though many (if not most) are worth your time, are "Ten simple rules for ...":

Computational research

Open Source Software and Open Science

Data Management

The art of research

National Academies Reports

These are key reports produced by the National Academies of Sciences, Engineering, and Medicine. They were created by teams of world experts in the field, and inform policy in multiple areas:

Other general references on reproducibility and open science

Reproducibility and earth/climate science

Concepts

Terms related to reproducibility

  • reproducibility

  • replicability

  • repeatability

  • computational reproducibility

  • "preproducibility"

Reproducibility and the Philosophy of Science

  • the role of replication in science

  • "virtual witnessing" and the role(s) of scientific publishing

Obstacles to reproducibility

  • data availability

    • data
    • data format
    • data dictionary
    • data cleaning and munging
    • data pre-processing
  • reliance on proprietary software

  • analysis

    • breadcrumbs / description
    • actual code
    • description and what was done are often different
    • scripting analyses is key, but not enough
    • software versions, libraries, compilers, environments, hardware can matter
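
Because versions, libraries, and environments can silently change results, one common mitigation is to save a record of the computing environment alongside each analysis. A minimal Python sketch (the function name `environment_record` is illustrative, not course code):

```python
import json
import platform
import sys


def environment_record():
    """Return a dictionary describing the current Python environment.

    In practice you would also record package versions (e.g. via
    importlib.metadata or a pinned requirements/environment file).
    """
    return {
        "python_version": sys.version,
        "platform": platform.platform(),
        "machine": platform.machine(),
    }


if __name__ == "__main__":
    # Save this next to the analysis outputs so the run can be compared
    # or re-created later.
    print(json.dumps(environment_record(), indent=2))
```

A JSON file like this, committed with the results, is a lightweight first step; fuller solutions pin exact dependency versions in an environment specification.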

Obstacles to replicability

  • lack of preproducibility: what was done?

  • "researcher degrees of freedom"

    • what was considered but not tried, or tried and discarded?
    • choice of hypotheses, P-hacking
    • choice of data subsets
    • choice of transformations
    • choice of models
    • choice of estimators
      • if Bayesian, choice of prior
      • if frequentist, what method and why?
      • constraints?
    • choice of measures of uncertainty
      • nonparametric / model-based / parametric / asymptotic
      • local / global
      • selective inference, P-hacking, cherry-picking, "garden of forking paths"
      • hypothesis tests: what is the full null? What does it have to do with reality?
  • "file-drawer effect"

    • small $n$ studies
  • ignoring multiplicity & multiple testing (including selective inference)

  • intrinsic variability

  • sensitivity to "influential" observations

  • appropriate level of abstraction

Obstacles to good science and applied Statistics

  • confirmation bias

  • Foundational issues; misinterpretations of probability and uncertainty

    • Interpretation of probability
      • prior probabilities
    • Types of uncertainty
      • Epistemic and aleatory uncertainty
      • constraints versus priors
    • Bayesian and frequentist measures of uncertainty
    • Duality between minimax and Bayes estimation
    • models versus response schedules
  • model mania

    • correlation (even really strong correlation) is not causation
    • fit does not imply correctness
    • familiarity does not imply appropriateness (Fallacies do not cease to be fallacies because they become fashions. —G.K. Chesterton)
    • Statistical practice as superstition
  • ritualization of Statistics, cargo-cult science

  • bad incentive structure in academia

Weaponizing reproducible/open science

Key ideas/tools from software engineering that can help improve science

  • revision/version control

  • documentation, documentation, documentation

  • modularity and abstraction

  • scripted analyses and automation

  • unit tests, regression tests, coverage tests, continuous integration

  • code review

  • pair programming

  • consistency: APIs, calling signatures, object-oriented code

  • separating data, computation, presentation
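
To make the testing items above concrete, here is a minimal, hypothetical example: a small analysis function paired with a unit test in the style run by tools such as pytest. The function `standardize` is invented for illustration, not course code.

```python
import math


def standardize(values):
    """Rescale values to mean 0 and (population) standard deviation 1."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        # Fail loudly rather than silently dividing by zero.
        raise ValueError("constant input cannot be standardized")
    return [(v - mean) / std for v in values]


def test_standardize_mean_and_scale():
    # A unit test checks one small, well-defined property of the code.
    z = standardize([1.0, 2.0, 3.0])
    assert abs(sum(z)) < 1e-12            # result has mean ~ 0
    assert abs(z[2] - (-z[0])) < 1e-12    # symmetric input stays symmetric
```

Kept in a `test_*.py` file under version control, such tests can be run automatically on every push (continuous integration), turning the ideas above from habits into enforced guarantees.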