88 changes: 49 additions & 39 deletions lectures/ar1_bayes.md
@@ -11,15 +11,15 @@ kernelspec:
name: python3
---

# Posterior Distributions for AR(1) Parameters
# Posterior Distributions for AR(1) Parameters

```{include} _admonition/gpu.md
```

```{code-cell} ipython3
:tags: [hide-output]

!pip install numpyro jax
!pip install numpyro
```

In addition to what's included in base Anaconda, we need to install the following packages
@@ -68,7 +68,8 @@
y_{t+1} = \rho y_t + \sigma_x \epsilon_{t+1}, \quad t \geq 0
$$ (eq:themodel)

where the scalars $\rho$ and $\sigma_x$ satisfy $|\rho| < 1$ and $\sigma_x > 0$;
where the scalars $\rho$ and $\sigma_x$ satisfy $|\rho| < 1$ and $\sigma_x > 0$.

$\{\epsilon_{t+1}\}$ is a sequence of i.i.d. normal random variables with mean $0$ and variance $1$.

The second component of the statistical model is
@@ -115,16 +116,23 @@ Unknown parameters are $\rho, \sigma_x$.

We have independent **prior probability distributions** for $\rho, \sigma_x$ and want to compute a posterior probability distribution after observing a sample $\{y_{t}\}_{t=0}^T$.
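In symbols, by Bayes' Law the posterior combines the likelihood of the sample with these independent priors:

$$
p\left(\rho, \sigma_x \mid \{y_t\}_{t=0}^T \right) \propto p\left(\{y_t\}_{t=0}^T \mid \rho, \sigma_x \right) \, p(\rho) \, p(\sigma_x)
$$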

The notebook uses `pymc4` and `numpyro` to compute a posterior distribution of $\rho, \sigma_x$. We will use NUTS samplers to generate samples from the posterior in a chain. Both of these libraries support NUTS samplers.
The notebook uses `pymc4` and `numpyro` to compute a posterior distribution of $\rho, \sigma_x$.

We will use NUTS samplers to generate samples from the posterior in a chain.

Both of these libraries support NUTS samplers.

NUTS is a form of Markov chain Monte Carlo (MCMC) algorithm that bypasses random walk behaviour and allows for convergence to a target distribution more quickly.

NUTS is a form of Monte Carlo Markov Chain (MCMC) algorithm that bypasses random walk behaviour and allows for convergence to a target distribution more quickly. This not only has the advantage of speed, but allows for complex models to be fitted without having to employ specialised knowledge regarding the theory underlying those fitting methods.
This not only has the advantage of speed, but also allows complex models to be fitted without specialised knowledge of the theory underlying those fitting methods.

Thus, we explore consequences of making these alternative assumptions about the distribution of $y_0$:

- A first procedure is to condition on whatever value of $y_0$ is observed. This amounts to assuming that the probability distribution of the random variable $y_0$ is a Dirac delta function that puts probability one on the observed value of $y_0$.
- A first procedure is to condition on whatever value of $y_0$ is observed.

- A second procedure assumes that $y_0$ is drawn from the stationary distribution of a process described by {eq}`eq:themodel`
so that $y_0 \sim {\cal N} \left(0, {\sigma_x^2\over (1-\rho)^2} \right) $
This amounts to assuming that the probability distribution of the random variable $y_0$ is a Dirac delta function that puts probability one on the observed value of $y_0$.

- A second procedure assumes that $y_0$ is drawn from the stationary distribution of the process described by {eq}`eq:themodel`, so that $y_0 \sim {\cal N} \left(0, {\sigma_x^2 \over 1-\rho^2} \right)$.

When the initial value $y_0$ is far out in a tail of the stationary distribution, conditioning on an initial value gives a posterior that is **more accurate** in a sense that we'll explain.
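In terms of the likelihood function that Bayes' Law works with, the two procedures differ only in whether the density of $y_0$ appears as a factor (a sketch, with $f(y_t \mid y_{t-1}; \rho, \sigma_x)$ denoting the ${\cal N}(\rho y_{t-1}, \sigma_x^2)$ density implied by {eq}`eq:themodel`):

$$
\underbrace{\prod_{t=1}^{T} f(y_t \mid y_{t-1}; \rho, \sigma_x)}_{\text{first procedure: condition on } y_0}
\quad \text{versus} \quad
\underbrace{f(y_0; \rho, \sigma_x) \prod_{t=1}^{T} f(y_t \mid y_{t-1}; \rho, \sigma_x)}_{\text{second procedure: } y_0 \sim {\cal N}\left(0, \frac{\sigma_x^2}{1-\rho^2}\right)}
$$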

@@ -137,7 +145,11 @@ We begin by solving a **direct problem** that simulates an AR(1) process.

How we select the initial value $y_0$ matters.

* If we think $y_0$ is drawn from the stationary distribution ${\mathcal N}(0, \frac{\sigma_x^{2}}{1-\rho^2})$, then it is a good idea to use this distribution as $f(y_0)$. Why? Because $y_0$ contains information about $\rho, \sigma_x$.
* If we think $y_0$ is drawn from the stationary distribution ${\mathcal N}(0, \frac{\sigma_x^{2}}{1-\rho^2})$, then it is a good idea to use this distribution as $f(y_0)$.

Why?

Because $y_0$ contains information about $\rho, \sigma_x$.
Comment on lines +148 to +152
not useful -- part of a list element.


* If we suspect that $y_0$ is far in the tails of the stationary distribution -- so that variation in early observations in the sample has a significant **transient component** -- it is better to condition on $y_0$ by setting $f(y_0) = 1$.

@@ -146,25 +158,25 @@ To illustrate the issue, we'll begin by choosing an initial $y_0$ that is far ou

```{code-cell} ipython3

def ar1_simulate(rho, sigma, y0, T):
def ar1_simulate(ρ, σ, y0, T):

# Allocate space and draw epsilons
y = np.empty(T)
eps = np.random.normal(0.,sigma,T)
eps = np.random.normal(0., σ, T)

# Initial condition and step forward
y[0] = y0
for t in range(1, T):
y[t] = rho*y[t-1] + eps[t]
y[t] = ρ * y[t-1] + eps[t]

return y

sigma = 1.
rho = 0.5
σ = 1.
ρ = 0.5
T = 50

np.random.seed(145353452)
y = ar1_simulate(rho, sigma, 10, T)
y = ar1_simulate(ρ, σ, 10, T)
```

```{code-cell} ipython3
@@ -178,7 +190,7 @@ Now we shall use Bayes' law to construct a posterior distribution, conditioning

First we'll use **pymc4**.

## PyMC Implementation
## PyMC implementation

For a normal distribution in `pymc`,
$var = 1/\tau = \sigma^{2}$.
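For instance, the same prior can be written with either parameterization (a minimal sketch; the variable names here are only for illustration):

```{code-cell} ipython3
import pymc as pmc

with pmc.Model():
    # sigma = 2  <=>  variance = 4  <=>  tau = 1/4
    x_by_sd = pmc.Normal('x_by_sd', mu=0., sigma=2.)    # parameterized by standard deviation
    x_by_tau = pmc.Normal('x_by_tau', mu=0., tau=0.25)  # parameterized by precision
```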
@@ -190,14 +202,14 @@ AR1_model = pmc.Model()
with AR1_model:

# Start with priors
rho = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
sigma = pmc.HalfNormal('sigma', sigma = np.sqrt(10))
ρ = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
σ = pmc.HalfNormal('sigma', sigma=np.sqrt(10))

# Expected value of y at the next period (rho * y)
yhat = rho * y[:-1]
yhat = ρ * y[:-1]

# Likelihood of the actual realization
y_like = pmc.Normal('y_obs', mu=yhat, sigma=sigma, observed=y[1:])
y_like = pmc.Normal('y_obs', mu=yhat, sigma=σ, observed=y[1:])
```

[pmc.sample](https://www.pymc.io/projects/docs/en/v5.10.0/api/generated/pymc.sample.html#pymc-sample) by default uses the NUTS sampler to generate samples, as shown in the cell below:
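That cell is collapsed in this diff; schematically, the call looks something like the sketch below (the draw counts and the name `trace` are illustrative, not the lecture's actual settings):

```{code-cell} ipython3
with AR1_model:
    # NUTS is selected automatically for these continuous parameters
    trace = pmc.sample(draws=2000, tune=1000, random_seed=1)
```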
@@ -246,15 +258,15 @@ AR1_model_y0 = pmc.Model()
with AR1_model_y0:

# Start with priors
rho = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
sigma = pmc.HalfNormal('sigma', sigma=np.sqrt(10))
ρ = pmc.Uniform('rho', lower=-1., upper=1.) # Assume stable rho
σ = pmc.HalfNormal('sigma', sigma=np.sqrt(10))

# Standard deviation of ergodic y
y_sd = sigma / np.sqrt(1 - rho**2)
y_sd = σ / np.sqrt(1 - ρ**2)

# yhat
yhat = rho * y[:-1]
y_data = pmc.Normal('y_obs', mu=yhat, sigma=sigma, observed=y[1:])
yhat = ρ * y[:-1]
y_data = pmc.Normal('y_obs', mu=yhat, sigma=σ, observed=y[1:])
y0_data = pmc.Normal('y0_obs', mu=0., sigma=y_sd, observed=y[0])
```

@@ -284,15 +296,14 @@ Please note how the posterior for $\rho$ has shifted to the right relative to wh
Think about why this happens.

```{hint}
It is connected to how Bayes Law (conditional probability) solves an **inverse problem** by putting high probability on parameter values
that make observations more likely.
It is connected to how Bayes' Law (conditional probability) solves an **inverse problem** by putting high probability on parameter values that make observations more likely.
```

We'll return to this issue after we use `numpyro` to compute posteriors under our two alternative assumptions about the distribution of $y_0$.

We'll now repeat the calculations using `numpyro`.

## Numpyro Implementation
## Numpyro implementation

```{code-cell} ipython3

@@ -326,14 +337,14 @@ def plot_posterior(sample):
```{code-cell} ipython3
def AR1_model(data):
# set prior
rho = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
sigma = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))
ρ = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
σ = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))

# Expected value of y at the next period (rho * y)
yhat = rho * data[:-1]
yhat = ρ * data[:-1]

# Likelihood of the actual realization.
y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=sigma), obs=data[1:])
y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=σ), obs=data[1:])

```
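The cells that run the sampler are collapsed in this diff; roughly, drawing from this model with NUTS follows the standard `numpyro` pattern sketched below (warmup and sample counts are illustrative):

```{code-cell} ipython3
import jax

# A sketch of the standard numpyro sampling pattern; the lecture's own
# (collapsed) cells play this role, and the draw counts here are illustrative.
kernel = numpyro.infer.NUTS(AR1_model)
mcmc = numpyro.infer.MCMC(kernel, num_warmup=1000, num_samples=2000, progress_bar=False)
mcmc.run(jax.random.PRNGKey(0), data=jnp.asarray(y))
mcmc.print_summary()
```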

@@ -370,17 +381,17 @@ Here's the new code to achieve this.
```{code-cell} ipython3
def AR1_model_y0(data):
# Set prior
rho = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
sigma = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))
ρ = numpyro.sample('rho', dist.Uniform(low=-1., high=1.))
σ = numpyro.sample('sigma', dist.HalfNormal(scale=np.sqrt(10)))

# Standard deviation of ergodic y
y_sd = sigma / jnp.sqrt(1 - rho**2)
y_sd = σ / jnp.sqrt(1 - ρ**2)

# Expected value of y at the next period (rho * y)
yhat = rho * data[:-1]
yhat = ρ * data[:-1]

# Likelihood of the actual realization.
y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=sigma), obs=data[1:])
y_data = numpyro.sample('y_obs', dist.Normal(loc=yhat, scale=σ), obs=data[1:])
y0_data = numpyro.sample('y0_obs', dist.Normal(loc=0., scale=y_sd), obs=data[0])
```

@@ -408,8 +419,7 @@ mcmc2.print_summary()

Look what happened to the posterior!

It has moved far from the true values of the parameters used to generate the data because of how Bayes' Law (i.e., conditional probability)
is telling `numpyro` to explain what it interprets as "explosive" observations early in the sample.
It has moved far from the true values of the parameters used to generate the data because of how Bayes' Law (i.e., conditional probability) is telling `numpyro` to explain what it interprets as "explosive" observations early in the sample.

Bayes' Law is able to generate a plausible likelihood for the first observation by driving $\rho \rightarrow 1$ and $\sigma \uparrow$ in order to raise the variance of the stationary distribution.
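A quick back-of-the-envelope calculation (with $\sigma_x = 1$, purely for illustration) shows how fast the stationary standard deviation $\sigma_x / \sqrt{1 - \rho^2}$ grows as $\rho \rightarrow 1$:

```{code-cell} ipython3
# Stationary sd explodes as ρ -> 1, letting the model rationalize
# a large initial observation such as y_0 = 10.
for ρ_val in [0.5, 0.9, 0.99, 0.999]:
    print(f"ρ = {ρ_val}:  stationary sd = {1 / np.sqrt(1 - ρ_val**2):.2f}")
```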

Expand Down