Lazyppl #175

Merged 194 commits on Aug 30, 2022

Commits
3263e4b
analytic
Nov 23, 2021
9b79de6
docs
Dec 18, 2021
c4ae117
docs
Dec 19, 2021
9f3e49b
docs
Dec 26, 2021
e750b1e
docs
Jan 16, 2022
99d4c04
inference
reubenharry Mar 15, 2022
6478c0a
executable docs
reubenharry Mar 18, 2022
97517a2
more comments, added BayesianModel
reubenharry Mar 21, 2022
4577135
Remove MaybeT and use ExceptT
reubenharry Mar 23, 2022
e9b7158
remove vscode and update gitignore
reubenharry Mar 23, 2022
6c1214a
delete additional changes: Failure
reubenharry Mar 23, 2022
1bbdb2d
delete broken files
reubenharry Mar 23, 2022
003aca4
update test for enumerator
reubenharry Mar 23, 2022
31d3dd1
update models
reubenharry Mar 27, 2022
a48d806
tips on inference
reubenharry Mar 27, 2022
cb100df
density
reubenharry Mar 27, 2022
530f4d9
analytic
reubenharry Mar 29, 2022
4d2a447
analytic
reubenharry Mar 29, 2022
a057100
more pipes
reubenharry Mar 30, 2022
344fb10
update docs
reubenharry Mar 31, 2022
91f968b
linear outliers as in Gen tutorial
reubenharry Apr 1, 2022
0909ea9
update docs
reubenharry Apr 7, 2022
4ef4ccd
cleanup SMC.hs
reubenharry Apr 7, 2022
aa18cfc
more cleanup, and explicit imports
reubenharry Apr 7, 2022
10e464e
remove Rejection
reubenharry Apr 7, 2022
42a8a40
cleanup
reubenharry Apr 8, 2022
753e72a
enumerate
reubenharry Apr 8, 2022
e87ab45
binning
reubenharry Apr 8, 2022
7787b2a
comment
reubenharry Apr 11, 2022
66ec33a
explore
reubenharry Apr 13, 2022
ae590fc
nested inference example
reubenharry Apr 18, 2022
105a63f
found bug in Gamma
reubenharry Apr 25, 2022
a8d9a17
debugging
reubenharry Apr 26, 2022
93b1775
port lazyppl monad into monad-bayes
reubenharry Apr 26, 2022
f9b537e
mh
reubenharry Apr 26, 2022
f63cd79
add conjugacy tests
reubenharry Apr 28, 2022
5ea0f16
working branch
reubenharry Apr 28, 2022
b446c47
tried symbolic
reubenharry Apr 29, 2022
00f087a
no symbolic
reubenharry May 2, 2022
dc065ea
Merge branch 'analytic' into models
reubenharry May 2, 2022
84633be
remove reflection
reubenharry May 2, 2022
8e899cd
Merge branch 'analytic' into models
reubenharry May 2, 2022
24f6343
midway: need to finish hmm and enumerator:
reubenharry May 2, 2022
5bd1837
update docs
reubenharry May 4, 2022
b08e9dd
DP mixture: remove
reubenharry May 9, 2022
5875b43
build docs manually
reubenharry May 9, 2022
656ff50
insegel theme
reubenharry May 9, 2022
e49115e
stanford theme
reubenharry May 9, 2022
e8236f0
lint
reubenharry May 9, 2022
aab90b0
lint
reubenharry May 10, 2022
84dd4f0
ormulo
May 10, 2022
072137a
nix ormolu fixes:
May 10, 2022
3bce561
merge main
reubenharry May 11, 2022
803327b
docs
reubenharry May 12, 2022
01512ec
trying to fix mathjax
reubenharry May 15, 2022
54814c1
Merge branch 'master' into docs
reubenharry May 15, 2022
69173bb
merge docs
reubenharry May 15, 2022
f3f53c6
remove empirical
reubenharry May 15, 2022
ec3fe94
merge docs
reubenharry May 15, 2022
af06b06
notebooks
reubenharry May 15, 2022
810aad9
Merge branch 'notebook' into histogram
reubenharry May 15, 2022
aab8095
notebooks: sampling
reubenharry May 15, 2022
8d143d2
merge cleanup
reubenharry May 15, 2022
0b94b79
merge notebook
reubenharry May 15, 2022
5287df4
cleaning up notebooks
reubenharry May 16, 2022
5dddb4f
remove helpers
reubenharry May 16, 2022
b829b03
Merge branch 'cleanup' into notebook
reubenharry May 16, 2022
a6d29da
when
reubenharry May 16, 2022
e7652d7
no helpers
reubenharry May 16, 2022
62319ee
models: lts
reubenharry May 16, 2022
9cff121
models: lts
reubenharry May 16, 2022
1cf551e
SMC notebook
reubenharry May 16, 2022
c2988d6
merged
reubenharry May 16, 2022
fd97ec3
weighted sampling
reubenharry May 17, 2022
2e6dd30
formatting
reubenharry May 18, 2022
70426f6
Merge branch 'notebook' into cleanup
reubenharry May 18, 2022
9f887b3
working on analytic
reubenharry May 18, 2022
6d36c1c
test enumerator
reubenharry May 18, 2022
6d4839c
discovered a bug in Integrator
reubenharry May 18, 2022
13cc36f
first attempt to fix normalization bug
reubenharry May 18, 2022
e19b2db
bug seems to be fixed
reubenharry May 18, 2022
bf7bdbf
update tests
reubenharry May 18, 2022
96ea8df
working tests: normalNormal goes awry if observations are huge
reubenharry May 19, 2022
328a16f
cleanup
reubenharry May 19, 2022
f378401
Bayesian in class, and remove LinearWithOutliers
reubenharry May 19, 2022
4b3e4e5
model updates
reubenharry May 23, 2022
3e21cff
yaml
Jun 4, 2022
77df67d
prepare for PR
Jun 6, 2022
93cf699
address warnings
Jun 6, 2022
9782c8c
review
Jun 6, 2022
4d8e98c
more warnings fixed
Jun 8, 2022
ea6cb50
merged
reubenharry Jun 9, 2022
51268c7
less warnings
Jun 9, 2022
9978861
update ghc version stated in cabal
reubenharry Jun 9, 2022
1aef717
Merge branch 'cleanup' of github.com:tweag/monad-bayes into cleanup
reubenharry Jun 9, 2022
8e32fbf
remove warnings
Jun 9, 2022
16f5e11
merge main
Jun 9, 2022
6c78b06
fix build
Jun 9, 2022
7c69878
version bump
Jun 9, 2022
8174b55
merged but tests fail
Jun 9, 2022
0fc99be
remove safe
Jun 9, 2022
d7011b5
Merge branch 'cleanup' into histogram
Jun 9, 2022
b293eca
fix merge
Jun 9, 2022
54b0165
fix rmsmc bug you introduced
reubenharry Jun 9, 2022
cb6cc06
integrator as MonadInfer
reubenharry Jun 10, 2022
3777955
updates
reubenharry Jun 10, 2022
165bd24
updates
reubenharry Jun 10, 2022
9fd5c00
updates
reubenharry Jun 10, 2022
e2ee3a9
fix the integrator
reubenharry Jun 10, 2022
b418558
fix the normalization of the integrator
reubenharry Jun 10, 2022
59ef114
merged
reubenharry Jun 11, 2022
fe29850
inference
reubenharry Jun 11, 2022
ae07cda
rope in analyticT
reubenharry Jun 13, 2022
7a1f775
remove integratorT
Jun 13, 2022
3e388f4
nix notebooks
Jun 14, 2022
1dad9f5
notebooks
Jun 14, 2022
fdb0177
rmsmc
Jun 14, 2022
0e074a8
lazy ppl
Jun 14, 2022
0a2dd21
hoistFirst bug again
Jun 15, 2022
44d3363
hoistFirst bug again
Jun 15, 2022
935ad8f
diagrams
Jun 15, 2022
277cb63
fp notebook
Jun 15, 2022
021aa60
remove integratorT
Jun 15, 2022
757d8cc
Merge branch 'cleanup' into histogram
Jun 15, 2022
ea01eaa
Merge branch 'histogram' into analytic
Jun 15, 2022
463b12f
changelog and version bump
Jun 15, 2022
20d7c21
Merge branch 'models' into lazyppl
Jun 15, 2022
2929ffa
presentation
Jun 16, 2022
c791cc9
notebooks
Jun 17, 2022
96d6523
notebooks
Jun 17, 2022
d1c6a19
cleanup
Jun 17, 2022
3faeb6f
docs
Jun 18, 2022
20275a5
docs
Jun 18, 2022
8ad0eac
presentation
Jun 20, 2022
5b55ea4
update talk
Jun 21, 2022
0b56e10
notebook
Jun 23, 2022
2d7ff29
works, but readd basic
Jun 27, 2022
99575ec
first working example of marginalization
Jun 27, 2022
c0cae57
renaming
Jun 28, 2022
3b3eb3d
continue cleanup
Jun 28, 2022
a71dabe
tests: not yet integrated in
Jun 28, 2022
343acde
tests: integrated in
Jun 28, 2022
49b72a7
bayesian pmmh
Jun 28, 2022
b4baf7b
switch over to new smc
Jun 28, 2022
bb4304c
switch over to new smc
Jun 28, 2022
d6beee0
improve resampleGeneric sum
Jun 28, 2022
78a0d52
independent in Class
Jun 28, 2022
20cf17b
parser
reubenharry Jul 5, 2022
74d1858
nbs
reubenharry Jul 8, 2022
3bdba31
no numSteps
reubenharry Jul 15, 2022
b9fa06c
make clean
reubenharry Jul 15, 2022
8c5e533
merge
reubenharry Jul 15, 2022
9f414a3
fix tests
reubenharry Jul 15, 2022
f096bf0
fix tests
reubenharry Jul 15, 2022
22c20b8
fix tests
reubenharry Jul 15, 2022
4849cb7
fix tests
reubenharry Jul 15, 2022
bb82c1b
pmmh
reubenharry Jul 15, 2022
5453626
withParticles
reubenharry Jul 15, 2022
4890596
withParticles
reubenharry Jul 15, 2022
4f7e666
Sequential.Free
reubenharry Jul 18, 2022
18c3fc1
Merge branch 'api' into sequentialCata
reubenharry Jul 18, 2022
c1c83af
Sequential.Free
reubenharry Jul 18, 2022
1d51b15
Density.Free + docs
reubenharry Jul 18, 2022
ce61682
docs
reubenharry Jul 18, 2022
be6331b
docs
reubenharry Jul 18, 2022
26d3d9e
Sampler.Strict
reubenharry Jul 18, 2022
0f8bd34
update nix
reubenharry Jul 25, 2022
72c5a79
cabal format
reubenharry Jul 25, 2022
d64962c
update nix
reubenharry Jul 25, 2022
7245f09
fix build
reubenharry Jul 25, 2022
86a72ef
merge sampler changes, fix changelog
reubenharry Jul 26, 2022
409b14d
bump changelog and version. bump ghc version to 9.2.3
reubenharry Jul 26, 2022
f99ed20
update docs and change mcmc so unweighted is not needed
reubenharry Jul 26, 2022
bc44e0f
remove sequentialCata
reubenharry Jul 28, 2022
331171c
Merge branch 'simplifyFreeSampler' into lazyppl
reubenharry Jul 28, 2022
73bee27
nbs
reubenharry Jul 28, 2022
b7e9bdc
merge
reubenharry Jul 28, 2022
eacf068
fix cabal
reubenharry Jul 28, 2022
45e9bf0
merge
reubenharry Jul 28, 2022
fee39c5
updating notebooks
reubenharry Jul 28, 2022
89d029c
mhTrans'
reubenharry Aug 1, 2022
0b23adb
fix mh
reubenharry Aug 1, 2022
897addb
Merge branch 'simplifyFreeSampler' into lazyppl
reubenharry Aug 1, 2022
c8f2b1d
infinite populations
reubenharry Aug 3, 2022
116b8d8
simple density
reubenharry Aug 15, 2022
5409e2f
simple density
reubenharry Aug 15, 2022
622a753
merge
reubenharry Aug 15, 2022
08beb3c
merge
reubenharry Aug 16, 2022
174bfff
lint
reubenharry Aug 26, 2022
35ed104
lint
reubenharry Aug 26, 2022
9df3dae
lazy sampler
reubenharry Aug 26, 2022
4db1caf
lazy sampler
reubenharry Aug 26, 2022
94f7f49
clean up
reubenharry Aug 29, 2022
dcf520d
Resolve merge conflict
idontgetoutmuch Aug 30, 2022
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
venv
_cache
docs/build
*.csv
3 changes: 3 additions & 0 deletions .netlify/state.json
@@ -0,0 +1,3 @@
{
"siteId": "12c0eb5b-a921-45cf-af85-b903b65b801c"
}
4 changes: 2 additions & 2 deletions benchmark/SSM.hs
@@ -6,8 +6,8 @@ import Control.Monad.Bayes.Inference.RMSMC (rmsmcDynamic)
import Control.Monad.Bayes.Inference.SMC
import Control.Monad.Bayes.Inference.SMC2 as SMC2 (smc2)
import Control.Monad.Bayes.Population
import Control.Monad.Bayes.Population (population, resampleMultinomial, runPopulation)
import Control.Monad.Bayes.Sampler (sampleIO, sampleIOfixed, sampleWith)
import Control.Monad.Bayes.Population (population, resampleMultinomial)
import Control.Monad.Bayes.Sampler.Strict (sampleIO, sampleIOfixed, sampleWith)
import Control.Monad.Bayes.Weighted (unweighted)
import Control.Monad.IO.Class (MonadIO (liftIO))
import NonlinearSSM (generateData, model, param)
28 changes: 21 additions & 7 deletions benchmark/Single.hs
@@ -1,21 +1,36 @@
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE ImportQualifiedPost #-}

import Control.Monad.Bayes.Class
import Control.Monad.Bayes.Class (MonadInfer)
import Control.Monad.Bayes.Inference.MCMC (MCMCConfig (..), Proposal (SingleSiteMH))
import Control.Monad.Bayes.Inference.RMSMC
import Control.Monad.Bayes.Inference.RMSMC (rmsmcBasic)
import Control.Monad.Bayes.Inference.SMC
  ( SMCConfig (SMCConfig, numParticles, numSteps, resampler),
    smc,
  )
import Control.Monad.Bayes.Population
import Control.Monad.Bayes.Population (population)
import Control.Monad.Bayes.Sampler
import Control.Monad.Bayes.Sampler.Strict
import Control.Monad.Bayes.Traced
import Control.Monad.Bayes.Weighted
import Control.Monad.ST (runST)
import Data.Time
import Data.Time (diffUTCTime, getCurrentTime)
import HMM qualified
import LDA qualified
import LogReg qualified
import Options.Applicative
  ( Applicative (liftA2),
    ParserInfo,
    auto,
    execParser,
    fullDesc,
    help,
    info,
    long,
    maybeReader,
    option,
    short,
  )
import System.Random.MWC (GenIO, createSystemRandom)

data Model = LR Int | HMM Int | LDA (Int, Int)
@@ -42,7 +57,7 @@ getModel model = (size model, program model)
data Alg = SMC | MH | RMSMC
  deriving stock (Read, Show)

runAlg :: Model -> Alg -> Sampler GenIO IO String
runAlg :: Model -> Alg -> SamplerIO String
runAlg model alg =
  case alg of
    SMC ->
@@ -61,8 +76,7 @@ runAlg model alg =

infer :: Model -> Alg -> IO ()
infer model alg = do
  g <- createSystemRandom
  x <- sampleWith (runAlg model alg) g
  x <- sampleIOfixed (runAlg model alg)
  print x

opts :: ParserInfo (Model, Alg)
4 changes: 2 additions & 2 deletions benchmark/Speed.hs
@@ -8,8 +8,8 @@ import Control.Monad.Bayes.Class (MonadInfer, MonadSample)
import Control.Monad.Bayes.Inference.MCMC (MCMCConfig (MCMCConfig, numBurnIn, numMCMCSteps, proposal), Proposal (SingleSiteMH))
import Control.Monad.Bayes.Inference.RMSMC (rmsmcDynamic)
import Control.Monad.Bayes.Inference.SMC (SMCConfig (SMCConfig, numParticles, numSteps, resampler), smc)
import Control.Monad.Bayes.Population (population, resampleSystematic, runPopulation)
import Control.Monad.Bayes.Sampler (SamplerIO, sampleIOfixed, sampleWith)
import Control.Monad.Bayes.Population (population, resampleSystematic)
import Control.Monad.Bayes.Sampler.Strict (SamplerIO, sampleIOfixed)
import Control.Monad.Bayes.Traced (mh)
import Control.Monad.Bayes.Weighted (unweighted)
import Criterion.Main
24 changes: 22 additions & 2 deletions docs/source/probprog.md
@@ -261,7 +261,7 @@ which gives
[([1,2,3,4],0.5),([2,3,4,5],0.5)]
```

### Near exact inference for continuous distributions
## Near exact inference for continuous distributions

Monad-Bayes does not currently support exact inference (via symbolic solving) for continuous distributions. However, it *does* support numerical integration. For example, for the distribution defined by

@@ -289,7 +289,7 @@ model = do

we must first `normalize` the model, as in `probability (0, 0.1) (normalize model)`.
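
As a hedged, self-contained sketch of this workflow (the module path `Control.Monad.Bayes.Integrator` and the exact types of `normalize` and `probability` are assumptions, not stated here, and the model is a stand-in):

```haskell
import Control.Monad.Bayes.Class (MonadInfer, condition, normal)
import Control.Monad.Bayes.Integrator (normalize, probability) -- assumed module path

-- A hypothetical conditioned model over a continuous variable.
exampleModel :: MonadInfer m => m Double
exampleModel = do
  x <- normal 0 1
  condition (x > 0)
  return x

-- Numerically integrate the normalized posterior over the interval (0, 0.1).
mass :: Double
mass = probability (0, 0.1) (normalize exampleModel)
```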

### Independent forward sampling
## Independent forward sampling

For any probabilistic program `p` without any `condition` or `factor` statements, we may do `sampler p` or `sampleIOfixed p` (to run with a fixed seed) to obtain a sample in an ancestral fashion. For example, consider:

@@ -335,6 +335,26 @@ run = (sampler . weighted) example
is an IO operation which, when run, will display either `(False, 0.0)` or `(True, 1.0)`.
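
For illustration, here is a hedged, hypothetical stand-in for `example` with exactly this behaviour (it need not be the program the docs actually use):

```haskell
import Control.Monad.Bayes.Class (MonadInfer, bernoulli, condition)

-- Flip a fair coin and condition on heads. Running `(sampler . weighted) example`
-- then yields (False, 0.0) when the flip fails the condition and (True, 1.0)
-- when it succeeds.
example :: MonadInfer m => m Bool
example = do
  b <- bernoulli 0.5
  condition b
  return b
```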


## Lazy sampling

If you want to forward sample from an infinite program, such as a distribution over infinite lists, you can use monad-bayes's lazy sampler, which is based on LazyPPL. For example:

```haskell
import Control.Monad.Bayes.Class (MonadSample, random)
import qualified Control.Monad.Bayes.Sampler.Lazy as Lazy

-- A distribution over infinite lists of uniform draws.
example :: MonadSample m => m [Double]
example = do
  x <- random
  fmap (x :) example

-- Lazily sample one infinite list and inspect its first four elements.
firstFour :: IO [Double]
firstFour = take 4 <$> Lazy.sampler example
```

To perform weighted sampling, use `lwis` from `Control.Monad.Bayes.Inference.Lazy.WIS` as in `lwis 10 example`. This takes 10 weighted samples, and produces an infinite stream of samples, regarding those 10 as an empirical distribution.
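
A hedged sketch of that call shape; only `lwis 10 example` comes from the text above, while the surrounding types (an `IO` action yielding an infinite list of samples) are assumptions:

```haskell
import Control.Monad.Bayes.Inference.Lazy.WIS (lwis)

-- Take the first 100 samples from the infinite stream produced by `lwis`,
-- which here treats 10 weighted runs of `example` as an empirical distribution.
-- (In practice you would use a model with conditioning rather than `example`.)
firstHundred :: IO [[Double]]
firstHundred = take 100 <$> lwis 10 example
```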

LazyPPL's `mh` implementation is also available.

## Markov Chain Monte Carlo

There are several versions of Metropolis-Hastings MCMC defined in monad-bayes. The standard version is found in `Control.Monad.Bayes.Inference.MCMC`. You can use it as follows:
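
A hedged sketch of such a call; the `MCMCConfig` fields and `SingleSiteMH` appear in the imports in `benchmark/Speed.hs` earlier in this PR, but the exact signature of `mcmc` and the example model are assumptions:

```haskell
import Control.Monad.Bayes.Class (MonadInfer, bernoulli, condition)
import Control.Monad.Bayes.Inference.MCMC (MCMCConfig (..), Proposal (SingleSiteMH), mcmc)
import Control.Monad.Bayes.Sampler.Strict (sampleIOfixed)

-- A hypothetical model: a coin flip conditioned on coming up heads.
coin :: MonadInfer m => m Bool
coin = do
  b <- bernoulli 0.7
  condition b
  return b

-- Run 1000 single-site MH steps, discarding 200 as burn-in (assumed semantics).
posterior :: IO [Bool]
posterior =
  sampleIOfixed $
    mcmc MCMCConfig {numMCMCSteps = 1000, numBurnIn = 200, proposal = SingleSiteMH} coin
```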
65 changes: 31 additions & 34 deletions docs/source/usage.md
@@ -391,6 +391,7 @@ Summary of key info on `Sequential`:
- `instance MonadSample m => instance MonadSample (Sequential m)`
- `instance MonadCond m => instance MonadCond (Sequential m)`


```haskell
newtype Sequential m a =
  Sequential {runSequential :: Coroutine (Await ()) m a}
@@ -474,56 +475,53 @@ hoistFirst :: (forall x. m x -> m x) -> Sequential m a -> Sequential m a
hoistFirst f = Sequential . Coroutine . f . resume . runSequential
```

<!-- As an example, consider:

TODO: Enumerator example with `trace` -->

When `m` is `Population n` for some other `n`, then `resampleGeneric` gives us one example of the natural transformation we want. In other words, operating in `Sequential (Population n)` works, and not only works but does something statistically interesting: particle filtering (aka SMC).
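
As a minimal sketch of that natural transformation in code (assuming `Sequential` and `hoistFirst` are exported from `Control.Monad.Bayes.Sequential.Coroutine`, the module name used later on this page, and using `resampleSystematic` as the concrete resampler):

```haskell
import Control.Monad.Bayes.Class (MonadSample)
import Control.Monad.Bayes.Population (Population, resampleSystematic)
import Control.Monad.Bayes.Sequential.Coroutine (Sequential, hoistFirst)

-- Resample the population at the current suspension point, then let the
-- program continue: the basic step of a particle filter.
resampleHere :: MonadSample m => Sequential (Population m) a -> Sequential (Population m) a
resampleHere = hoistFirst resampleSystematic
```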




### FreeSampler
### Density

Summary of key info on `FreeSampler`:
Summary of key info on `Density`:

- `FreeSampler :: (Type -> Type) -> (Type -> Type)`
- `instance MonadSample (FreeSampler m)`
- `Density :: (Type -> Type) -> (Type -> Type)`
- `instance MonadSample (Density m)`
- **No** instance for `MonadCond`

`FreeSampler m` is not often going to be used on its own, but instead as part of the `Traced` type, defined below. A `FreeSampler m a` represents a reified execution of the program.
A *trace* of a program of type `MonadSample m => m a` is an execution of the program, so a choice for each of the random values. Recall that `random` underlies all of the random values in a program, so a trace for a program is fully specified by a list of `Double`s, giving the value of each call to `random`.

With this in mind, a `Density m a` is an interpretation of a probabilistic program as a function from a trace to the *density* of that execution of the program.

Monad-bayes offers two implementations, in `Control.Monad.Bayes.Density.State` and `Control.Monad.Bayes.Density.Free`. The first is slow but easy to understand; the second is more sophisticated, but faster.

`FreeSampler m` is best understood if you're familiar with the standard use of a free monad to construct a domain specific language. For probability in particular, see this [blog post](https://jtobin.io/simple-probabilistic-programming). Here's the definition:
The former is relatively straightforward: the `MonadSample` instance implements `random` as `get`ting the trace (using `get` from `MonadState`), using (and removing) the first element (`put` from `MonadState`), and writing that element to the output (using `tell` from `MonadWriter`). If the trace is empty, the `random` from the underlying monad is used, but the result is still written with `tell`.
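
A minimal sketch of that State/Writer formulation, with illustrative names rather than the library's actual definitions:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Bayes.Class (MonadSample (random))
import Control.Monad.State (StateT, get, put)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Writer (WriterT, tell)

-- Illustrative stand-in for the State-based density interpretation.
newtype DensityState m a = DensityState (WriterT [Double] (StateT [Double] m) a)
  deriving (Functor, Applicative, Monad)

instance MonadSample m => MonadSample (DensityState m) where
  random = DensityState $ do
    trace <- get -- the remaining portion of the supplied trace
    x <- case trace of
      [] -> lift (lift random) -- trace exhausted: sample in the underlying monad
      (r : rs) -> put rs >> pure r -- consume the next recorded double
    tell [x] -- record the value that was actually used
    pure x
```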

The latter is best understood if you're familiar with the standard use of a free monad to construct a domain specific language. For probability in particular, see this [blog post](https://jtobin.io/simple-probabilistic-programming). Here's the definition:

```haskell
newtype SamF a = Random (Double -> a)
newtype FreeSampler m a =
  FreeSampler {runFreeSampler :: FT SamF m a}
newtype Density m a =
  Density {density :: FT SamF m a}

instance Monad m => MonadSample (FreeSampler m) where
  random = FreeSampler $ liftF (Random id)
instance Monad m => MonadSample (Density m) where
  random = Density $ liftF (Random id)
```

The monad-bayes implementation uses a more efficient implementation of `FreeT`, namely `FT` from the `free` package, known as the *Church transformed Free monad*. This is a technique explained in https://begriffs.com/posts/2016-02-04-difference-lists-and-codennsity.html. But that only changes the operational semantics - performance aside, it works just the same as the standard `FreeT` datatype.

If you unpack the definition, you get:

```haskell
FreeSampler m a ~ m (Either a (Double -> (FreeSampler m a)))
Density m a ~ m (Either a (Double -> (Density m a)))
```

As you can see, this is rather like `Coroutine`, except to "resume", you must provide a new `Double`, corresponding to the value of some particular random choice.

Since `FreeT` is a transformer, we can use `lift` to get a `MonadSample` instance.


A *trace* of a program of type `MonadSample m => m a` is an execution of the program, so a choice for each of the random values. Recall that `random` underlies all of the random values in a program, so a trace for a program is fully specified by a list of `Double`s, giving the value of each call to `random`.

Given a probabilistic program interpreted in `FreeSampler m`, we can "run" it to produce a program in the underlying monad `m`. For simplicity, consider the case of a program `bernoulli 0.5 :: FreeSampler SamplerIO Bool`. We can then use the following function:
`density` is then defined using the folding pattern `iterTM`, which interprets `SamF` in the appropriate way:

```haskell
withPartialRandomness :: MonadSample m => [Double] -> FreeSampler m a -> m (a, [Double])
withPartialRandomness randomness (FreeSampler m) =
density :: MonadSample m => [Double] -> Density m a -> m (a, [Double])
density randomness (Density m) =
  runWriterT $ evalStateT (iterTM f $ hoistFT lift m) randomness
  where
    f (Random k) = do
@@ -538,7 +536,7 @@ withPartialRandomness randomness (FreeSampler m) =
      k x
```

This takes a list of `Double`s (a representation of a trace), and a probabilistic program like `example`, and gives back a `SamplerIO (Bool, [Double])`. At each call to `random` in `example`, the next double in the list is used. If the list of doubles runs out, calls are made to `random` using the underlying monad, which in our example is `SamplerIO`. Hence "with*Partial*Randomness".
This takes a list of `Double`s (a representation of a trace), and a probabilistic program like `example`, and gives back a `SamplerIO (Bool, [Double])`. At each call to `random` in `example`, the next double in the list is used. If the list of doubles runs out, calls are made to `random` using the underlying monad.
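
For example, a hedged usage sketch against the listing above (the program here is a stand-in playing the role of `example`):

```haskell
-- Two coin flips; with a partial trace of length one, the first flip is
-- determined by the recorded 0.9 and the second falls back to SamplerIO's `random`.
twoCoins :: MonadSample m => m (Bool, Bool)
twoCoins = (,) <$> bernoulli 0.5 <*> bernoulli 0.5

demo :: SamplerIO ((Bool, Bool), [Double])
demo = density [0.9] twoCoins
```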

<!-- The intuition here is that given a list of doubles in $[0,1]$, you can evaluate any probabilistic program. If your list of numbers is shorter than the number of calls to `random` in the program, the remaining calls are made in the underlying `MonadSample` instance `m`. -->

@@ -554,7 +552,7 @@ Summary of key info on `Traced`:
- `instance MonadSample m => MonadSample (Traced m)`
- `instance MonadCond m => MonadCond (Traced m)`

`Traced m` is actually several related interpretations, each built on top of `FreeSampler`. These range in complexity.
`Traced m` is actually several related interpretations, each built on top of `Density`. These range in complexity.



@@ -576,12 +574,12 @@ data Trace a = Trace
}
```
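
For orientation, here is a sketch of the `Trace` record, reconstructed from how its fields are used later on this page (`variables`, `density`, and the constructor applied as `Trace vs b q`); the middle field's name is a guess:

```haskell
data Trace a = Trace
  { variables :: [Double], -- the value returned by each call to `random`
    output :: a, -- the program's final return value (name assumed)
    density :: Log Double -- the density of this particular execution
  }
```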

We also need a specification of the probabilistic program in question, free of any particular interpretation. That is precisely what `FreeSampler` is for.
We also need a specification of the probabilistic program in question, free of any particular interpretation. That is precisely what `Density` is for.

The simplest version of `Traced` is in `Control.Monad.Bayes.Traced.Basic`

```haskell
Traced m a ~ (FreeSampler Identity a, Log Double), m (Trace a))
Traced m a ~ ((Density Identity a, Log Double), m (Trace a))
```

A `Traced` interpretation of a model is a particular run of the model with its corresponding probability, alongside a distribution over `Trace` info, which records: the value of each call to `random`, the value of the final output, and the density of this program trace.
@@ -707,7 +705,7 @@ A single step in this chain (in Metropolis Hasting MCMC) looks like this:

```haskell
mhTrans :: MonadSample m =>
  Weighted (FreeSampler m) a -> Trace a -> m (Trace a)
  Weighted (Density m) a -> Trace a -> m (Trace a)
mhTrans m t@Trace {variables = us, density = p} = do
  let n = length us
  us' <- do
@@ -717,15 +715,14 @@ mhTrans m t@Trace {variables = us, density = p} = do
      (xs, _ : ys) -> return $ xs ++ (u' : ys)
      _ -> error "impossible"
  ((b, q), vs) <-
    runWriterT $ weighted
      $ Weighted.hoist (WriterT . withPartialRandomness us') m
    runWriterT $ weighted $ Weighted.hoist (WriterT . density us') m
  let ratio = (exp . ln) $ min 1
        (q * fromIntegral n / (p * fromIntegral (length vs)))
  accept <- bernoulli ratio
  return $ if accept then Trace vs b q else t
```

Our probabilistic program is interpreted in the type `Weighted (FreeSampler m) a`, which is an instance of `MonadInfer`. We use this to define our kernel on traces. We begin by perturbing the list of doubles contained in the trace by selecting a random position in the list and resampling there. We could do this *proposal* in a variety of ways, but here, we do so by choosing a double from the list at random and resampling it (hence, *single site* trace MCMC). We then run the program on this new list of doubles; `((b,q), vs)` is the outcome, probability, and result of all calls to `random`, respectively (recalling that the list of doubles may be shorter than the number of calls to `random`). The value of these is probabilistic in the underlying monad `m`. We then use the MH criterion to decide whether to accept the new list of doubles as our trace.
Our probabilistic program is interpreted in the type `Weighted (Density m) a`, which is an instance of `MonadInfer`. We use this to define our kernel on traces. We begin by perturbing the list of doubles contained in the trace by selecting a random position in the list and resampling there. We could do this *proposal* in a variety of ways, but here, we do so by choosing a double from the list at random and resampling it (hence, *single site* trace MCMC). We then run the program on this new list of doubles; `((b,q), vs)` is the outcome, probability, and result of all calls to `random`, respectively (recalling that the list of doubles may be shorter than the number of calls to `random`). The value of these is probabilistic in the underlying monad `m`. We then use the MH criterion to decide whether to accept the new list of doubles as our trace.

MH is then easily defined as taking steps with this kernel, in the usual fashion. Note that it works for any probabilistic program whatsoever.
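
A hedged sketch of that loop, iterating the kernel a fixed number of times and collecting the visited traces (a simplification, not the library's actual `mh`):

```haskell
mhSteps :: MonadSample m => Int -> Weighted (Density m) a -> Trace a -> m [Trace a]
mhSteps 0 _ t = pure [t]
mhSteps n m t = do
  t' <- mhTrans m t -- one step of the kernel defined above
  (t :) <$> mhSteps (n - 1) m t'
```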

@@ -736,18 +733,18 @@ MH is then easily defined as taking steps with this kernel, in the usual fashion
This is provided by

```haskell
sis ::
sequentially ::
  Monad m =>
  -- | transformation
  (forall x. m x -> m x) ->
  -- | number of time steps
  Int ->
  Sequential m a ->
  m a
sis f k = finish . composeCopies k (advance . hoistFirst f)
sequentially f k = finish . composeCopies k (advance . hoistFirst f)
```

in Control.Monad.Bayes.Sequential. You provide a natural transformation in the underlying monad `m`, and `sis` applies that natural transformation at each point of conditioning in your program. The main use case is in defining `smc`, below, but here is a nice alternative use case:
in `Control.Monad.Bayes.Sequential.Coroutine`. You provide a natural transformation in the underlying monad `m`, and `sequentially` applies that natural transformation at each point of conditioning in your program. The main use case is in defining `smc`, below, but here is a nice didactic use case:

Consider the program:
