LazyPPL is a Haskell library for Bayesian probabilistic programming, drawing on the fact that lazy data structures are a natural idiom for programming with infinite-dimensional Bayesian methods such as Poisson/Gaussian/Dirichlet processes, etc. The crucial semantic idea, inspired by developments in synthetic probability theory, is to work with two separate monads: an affine monad of probability, which supports laziness, and a commutative, non-affine monad of measures, which does not.
It provides a Metropolis-Hastings implementation in Haskell that works with lazy programs, together with some examples.
Various examples are given in Literate Haskell at https://lazyppl-team.github.io.
The main LazyPPL modules are available on Hackage and Stackage.
To try the different examples, download the repository and use `stack run wiener-exe` and so on. The source is in `src/`.
A Docker image is available. To start a container, run the following command:
```shell
docker pull youkad/lazyppl:0.2
docker run -it --rm -v ~/images:/opt/lazyppl/images youkad/lazyppl:0.2
```
Laziness appears to be a useful method for specifying non-parametric models: we often use stochastic processes or infinite-dimensional functions, and this is fine because only finitely many parts are ever explored in practice.
The aim of this implementation is to demonstrate that it is possible to have first-class lazy structures in probabilistic programming. For example, we have first-class

- stochastic memoization: `memoize :: (a -> Prob b) -> Prob (a -> b)`
- Gaussian processes: `wiener :: Prob (Double -> Double)`
- point processes: `poissonPP :: Prob [Double]`
- Dirichlet processes: `dp :: Double -> Prob a -> Prob (Prob a)`
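To see why laziness matters for a type like `poissonPP :: Prob [Double]`, here is a minimal self-contained sketch, independent of LazyPPL: `uniforms` below is a toy linear congruential generator standing in for the library's sampler (an assumption of this sketch, not LazyPPL's actual random source), and the point process is an infinite list of which only the inspected prefix is ever computed.

```haskell
module Main where

-- Toy stand-in for a source of randomness: an infinite stream of
-- pseudo-uniform numbers in (0,1) from a linear congruential generator.
-- (This is NOT LazyPPL's sampler; it only makes the sketch runnable.)
uniforms :: Integer -> [Double]
uniforms seed = map toUnit (tail (iterate step seed))
  where
    step x   = (1103515245 * x + 12345) `mod` 2147483648
    toUnit x = (fromIntegral x + 1) / 2147483649

-- A lazy Poisson point process with rate r: cumulative sums of
-- exponentially distributed gaps.  The list of points is conceptually
-- infinite, but only the prefix we inspect is ever sampled.
poissonPP :: Double -> [Double] -> [Double]
poissonPP r us = scanl1 (+) [ negate (log u) / r | u <- us ]

main :: IO ()
main = print (takeWhile (< 10) (poissonPP 1.0 (uniforms 42)))
```

Only finitely many points fall below 10, so `takeWhile` forces just that prefix; the rest of the infinite list is never evaluated.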
- WienerDemo (lhs) contains a simple implementation of regression using a Wiener process. By maintaining a hidden table of previous calls, it behaves as a bona fide random function $R \to R$ that is constructed lazily. Because the functions are built lazily, some values of the functions are sampled during the simulation, and others only during the plotting.
- RegressionDemo (lhs) demonstrates piecewise linear regression. Key idea: the change points are drawn from a lazy Poisson process.
- Clustering (lhs) contains some simple clustering examples, where the number of clusters is unknown. Key uses of laziness: stick-breaking is lazy, and we also use stochastic memoization.
- ProgramInduction (lhs) contains a simple example of program induction over a small arithmetic language. Key use of laziness: random expressions are represented as an infinite forest together with a finite path through it.
- We also have examples of feature extraction (Additive Clustering via the Indian Buffet Process) and relation inference (via the Mondrian process, and a simple Chinese-Restaurant-Process-based Infinite Relational Model).
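The key trick in the clustering example can be seen in isolation. Below is a hedged, self-contained sketch of lazy stick-breaking, not the library's actual code: `uniforms` is a toy generator, and the Beta(1, α) draws are obtained by inversion. It produces the infinite sequence of cluster weights of a Dirichlet process, of which a program only ever forces finitely many.

```haskell
module Main where

-- Toy pseudo-uniform stream (a stand-in for real randomness,
-- not LazyPPL's sampler).
uniforms :: Integer -> [Double]
uniforms seed = map toUnit (tail (iterate step seed))
  where
    step x   = (1103515245 * x + 12345) `mod` 2147483648
    toUnit x = (fromIntegral x + 1) / 2147483649

-- Beta(1, alpha) samples by inverse CDF: if U ~ Uniform(0,1),
-- then 1 - U ** (1 / alpha) ~ Beta(1, alpha).
betas :: Double -> [Double] -> [Double]
betas alpha = map (\u -> 1 - u ** (1 / alpha))

-- Lazy stick-breaking: weight_k = v_k * product_{j<k} (1 - v_j).
-- The full list of weights is infinite; consumers force only a prefix.
sticks :: Double -> [Double] -> [Double]
sticks alpha us = go 1 (betas alpha us)
  where go rest (v:vs) = rest * v : go (rest * (1 - v)) vs
        go _    []     = []

main :: IO ()
main = print (take 8 (sticks 1.0 (uniforms 7)))
```

Because the remaining stick length shrinks geometrically, a consumer that stops once the weights become negligible touches only a finite prefix of the infinite list.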
The system uses Haskell Stack, so you need to install Stack first if you want to use the system in the standard way.
To build, type `stack build`.
This may take some time (>1 hour) if it is your first time ever using stack.
For plotting, install the Python packages required for the `matplotlib` Haskell wrapper: `python3 -m pip install -U matplotlib numpy tk scipy`.
To run, type `stack run wiener-exe`, `stack run regression-exe`, or `stack run clustering-exe`, or explore with `stack ghci`.
For the Gaussian process example, the `hmatrix` library requires the GSL, BLAS, and LAPACK development packages. On Ubuntu, you can install them with `sudo apt-get install libgsl0-dev liblapack-dev`.
The core of the library can be found in `src/LazyPPL.hs`, and some useful, common, and interesting distributions can be found in `src/LazyPPL/Distributions.hs` and `src/LazyPPL/Distributions/`.
`src/LazyPPL.hs` contains a likelihood-weighted importance sampling algorithm (`lwis`) and a Metropolis-Hastings algorithm, together with the basic monads. There are two monads, `Prob` and `Meas`.

- `Prob` is to be thought of as an affine monad of probability measures. It comes with various predefined probability distributions such as `uniform` (in `src/LazyPPL.hs`), `normal`, `exponential`, `beta`, `poisson`, `dirichlet`, `iid`, etc. (in `src/LazyPPL/Distributions.hs`).
- `Meas` is to be thought of as a non-affine monad of unnormalized measures. It provides an interface using `sample` (which draws from a `Prob`) and `score` (which multiplies the current trace by a given likelihood weight).
- Our first Metropolis-Hastings algorithm, `mh`, takes as an argument a probability `p` of changing any given site of the rose tree of random seeds. This is different from single-site MH (which picks exactly one site to change each step) and multi-site MH (which changes all sites at each step). The reason is that, in the lazy setting, we cannot explore how many active sites there are without triggering more computation (which would require a lot of Haskell hacking). We can approximate single-site MH by putting `p = 1/num_sites` and recover multi-site MH by putting `p = 1`.
- A second Metropolis-Hastings algorithm, `mh1`, implements a single-site proposal kernel by inspecting the Haskell heap.
- A key implementation idea is that the underlying sample space is `Tree`, which comprises a lazy tree of `Double`s that is infinitely wide and infinitely deep. Informally, at any moment, if you need some unknown or lazy amount of randomness, you can target just one branch of the tree without worrying about affecting the other branches. That branch will itself be a `Tree`, so this can all happen recursively without any problems.
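To make the sample-space idea concrete, here is a self-contained sketch of such a lazy rose tree of `Double`s. The reseeding arithmetic below is a toy hash, an assumption of this sketch rather than LazyPPL's actual generator: each node carries one random number and infinitely many subtrees, and forcing one branch leaves its siblings as unevaluated thunks.

```haskell
module Main where

-- A lazy rose tree of Doubles: infinitely wide and infinitely deep.
data Tree = Tree Double [Tree]

-- Build a tree from an integer seed, deriving child seeds with a toy
-- hash (an assumption of this sketch, not LazyPPL's RNG).
tree :: Integer -> Tree
tree seed = Tree (toUnit seed) [ tree (child seed i) | i <- [0..] ]
  where
    toUnit s  = fromIntegral (s `mod` 1000003) / 1000003
    child s i = (s * 6364136223846793005 + 1442695040888963407 + i)
                  `mod` 4294967291

main :: IO ()
main = do
  let Tree x branches = tree 2024
  print x                        -- the root's random number
  let Tree y _ = branches !! 5   -- force only branch 5 of the tree ...
  print y                        -- ... its siblings remain unevaluated
```

Because each branch is itself a `Tree` built from its own seed, a program can recursively carve off fresh, independent sources of randomness on demand, which is exactly what a lazily explored model needs.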
`src/LazyPPL/Distributions.hs` contains common distributions such as the normal, exponential, gamma, and Dirichlet distributions, among others. We find that Haskell types are quite illuminating for constructing sophisticated distributions. For example, abstract types capture some essence of exchangeability:

- in the Chinese Restaurant Process, `newCustomer :: Restaurant -> Prob Table` (`src/LazyPPL/Distributions/DirichletP.hs`);
- in the Indian Buffet Process, `newCustomer :: Restaurant -> Prob [Dish]` (`src/LazyPPL/Distributions/IBP.hs`).
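For a flavour of what a `Restaurant` abstracts over, here is a hedged sketch of Chinese Restaurant Process seating written as a pure function. This is not the library's interface, which deliberately hides the state behind the abstract type; the explicit table counts and the uniform-draw argument are assumptions of this sketch.

```haskell
module Main where

import Data.List (foldl')

-- Seat one customer given concentration alpha, a uniform draw u, and
-- the current table counts: join table i with probability
-- count_i / (n + alpha), or open a fresh table with probability
-- alpha / (n + alpha).  Returns the new counts and the chosen table.
newCustomer :: Double -> Double -> [Int] -> ([Int], Int)
newCustomer alpha u counts = pick 0 (u * total) counts
  where
    total = alpha + fromIntegral (sum counts)
    pick i r (c:cs)
      | r < fromIntegral c = (bump i counts, i)
      | otherwise          = pick (i + 1) (r - fromIntegral c) cs
    pick i _ [] = (counts ++ [1], i)   -- open a new table
    bump _ []     = []
    bump 0 (c:cs) = c + 1 : cs
    bump n (c:cs) = c : bump (n - 1) cs

main :: IO ()
main = do
  -- seat five customers with fixed "random" draws, for reproducibility
  let us = [0.3, 0.9, 0.1, 0.7, 0.5]
      seat counts u = fst (newCustomer 1.0 u counts)
  print (foldl' seat [] us)
```

The "rich get richer" dynamic is visible in the probabilities: well-populated tables attract new customers, and the resulting distribution over seatings does not depend on the order in which exchangeable customers arrive.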
The rest of the files in the `src/` directory consist of various examples of probabilistic models that we can write using our library.
If you write your own Haskell module, add it to `package.yaml` first (remembering to change the `main:` and `-main-is` parts accordingly), and then execute `stack run` or `stack ghci` as needed.
For example, if the module is named `ReviewerTest` (saved as `src/ReviewerTest.hs`), you will need to add the following to `package.yaml`:
```yaml
reviewertest-exe:
  main: ReviewerTest.hs
  source-dirs: src
  ghc-options:
    - -threaded
    - -rtsopts
    - -with-rtsopts=-N
    - -main-is ReviewerTest
  dependencies:
    - lazyppl
```
Then, run it with `stack run reviewertest-exe`, or open it up in the REPL using `stack ghci src/ReviewerTest.hs`.