
Commit 4076830

style: fix 82 issues in additive_functionals
- code: 9 fixes
- format: 1 fixes
- jax: 5 fixes
- math: 5 fixes
- ref: 5 fixes
- title: 2 fixes
- writing: 55 fixes

Rules addressed:

- qe-writing-002: One-Sentence Paragraphs (×3)
- qe-code-004: Unicode Greek Letters
- qe-code-005: Package Installation
- qe-format-001: Definitions
- qe-writing-002: One-Sentence Paragraphs (×4)
- ... and 72 more
1 parent 0332363 commit 4076830

File tree

1 file changed: +45 −53 lines changed


lectures/additive_functionals.md

Lines changed: 45 additions & 53 deletions
@@ -20,7 +20,7 @@ kernelspec:
 </div>
 ```
 
-# Additive and Multiplicative Functionals
+# Additive and multiplicative functionals
 
 ```{index} single: Models; Additive functionals
 ```
@@ -30,7 +30,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 ```{code-cell} ipython3
 :tags: [hide-output]
 
-!pip install --upgrade quantecon
+!pip install --upgrade quantecon --quiet
 ```
 
 ## Overview
@@ -41,9 +41,9 @@ For example, outputs, prices, and dividends typically display irregular but per
 
 Asymptotic stationarity and ergodicity are key assumptions needed to make it possible to learn by applying statistical methods.
 
-But there are good ways to model time series that have persistent growth that still enable statistical learning based on a law of large numbers for an asymptotically stationary and ergodic process.
+But we can model time series with persistent growth while still enabling statistical learning based on a law of large numbers for an asymptotically stationary and ergodic process.
 
-Thus, {cite}`Hansen_2012_Eca` described two classes of time series models that accommodate growth.
+Thus, {cite}`Hansen_2012_Eca` described two classes of time series models that accommodate growth.
 
 They are
 
@@ -65,13 +65,15 @@ We also describe and compute decompositions of additive and multiplicative proce
 
 We describe how to construct, simulate, and interpret these components.
 
-More details about these concepts and algorithms can be found in Hansen {cite}`Hansen_2012_Eca` and Hansen and Sargent {cite}`Hans_Sarg_book`.
+More details about these concepts and algorithms can be found in Hansen {cite}`Hansen_2012_Eca` and Hansen and Sargent {cite}`Hans_Sarg_book`.
 
 Let's start with some imports:
 
 ```{code-cell} ipython3
-import numpy as np
-import scipy.linalg as la
+import jax.numpy as jnp
+import jax.numpy as jnp
+import jax.scipy.linalg as jla
+from jax import jit, vmap
 import quantecon as qe
 import matplotlib.pyplot as plt
 from scipy.stats import norm, lognorm
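Context for the import switch in the hunk above (this sketch is not part of the commit): `jax.numpy` mirrors most of the NumPy array API, so VAR-style array code usually ports directly, and `jit` compiles it. A minimal illustration, where the function name and all values are hypothetical:

```python
import jax.numpy as jnp
from jax import jit

@jit
def mean_increment(ν, D, x):
    # Conditional mean of y_{t+1} - y_t given x_t: ν + D x_t,
    # written exactly as it would be with numpy
    return ν + D @ x

# Illustrative values only, not the lecture's calibration
ν = jnp.array([0.01])
D = jnp.array([[1.0, 0.5]])
x = jnp.array([0.2, -0.1])
increment = float(mean_increment(ν, D, x)[0])
```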
@@ -83,8 +85,7 @@ from scipy.stats import norm, lognorm
 
 This lecture focuses on a subclass of these: a scalar process $\{y_t\}_{t=0}^\infty$ whose increments are driven by a Gaussian vector autoregression.
 
-Our special additive functional displays interesting time series behavior while also being easy to construct, simulate, and analyze
-by using linear state-space tools.
+Our additive functional displays interesting time series behavior and is easy to construct, simulate, and analyze using linear state-space tools.
 
 We construct our additive functional from two pieces, the first of which is a **first-order vector autoregression** (VAR)
 
@@ -114,7 +115,7 @@ In particular,
 ```{math}
 :label: old2_additive_functionals
 
-y_{t+1} - y_{t} = \nu + D x_{t} + F z_{t+1}
+y_{t+1} - y_t = \nu + D x_t + F z_{t+1}
 ```
 
 Here $y_0 \sim {\cal N}(\mu_{y0}, \Sigma_{y0})$ is a random
@@ -125,7 +126,7 @@ systematic but random *arithmetic growth*.
 
 ### Linear state-space representation
 
-A convenient way to represent our additive functional is to use a [linear state space system](https://python-intro.quantecon.org/linear_models.html).
+We represent our additive functional as a [linear state space system](https://python-intro.quantecon.org/linear_models.html).
 
 To do this, we set up state and observation vectors
 
@@ -184,16 +185,14 @@ $$
 
 which is a standard linear state space system.
 
-To study it, we could map it into an instance of [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from [QuantEcon.py](http://quantecon.org/quantecon-py).
-
-But here we will use a different set of code for simulation, for reasons described below.
+We could use [LinearStateSpace](https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/lss.py) from [QuantEcon.py](http://quantecon.org/quantecon-py), but we will use different code for simulation, for reasons described below.
 
 ## Dynamics
 
 Let's run some simulations to build intuition.
 
 (addfunc_eg1)=
-In doing so we'll assume that $z_{t+1}$ is scalar and that $\tilde x_t$ follows a 4th-order scalar autoregression.
+We assume that $z_{t+1}$ is scalar. We also assume that $\tilde x_t$ follows a 4th-order scalar autoregression.
 
 ```{math}
 :label: ftaf
@@ -211,7 +210,7 @@ $$
 
 are strictly greater than unity in absolute value.
 
-(Being a zero of $\phi(z)$ means that $\phi(z) = 0$)
+A zero of $\phi(z)$ satisfies $\phi(z) = 0$.
 
 Let the increment in $\{y_t\}$ obey
 
@@ -221,9 +220,9 @@ $$
 
 with an initial condition for $y_0$.
 
-While {eq}`ftaf` is not a first order system like {eq}`old1_additive_functionals`, we know that it can be mapped into a first order system.
+While {eq}`ftaf` is not a first-order system like {eq}`old1_additive_functionals`, it can be mapped into one.
 
-* For an example of such a mapping, see [this example](https://python.quantecon.org/linear_models.html#second-order-difference-equation).
+* For an example of such a mapping, see {doc}`this example <intro:linear_models>`.
 
 In fact, this whole model can be mapped into the additive functional system definition in {eq}`old1_additive_functionals` -- {eq}`old2_additive_functionals` by appropriate selection of the matrices $A, B, D, F$.
 
@@ -233,7 +232,7 @@ You can try writing these matrices down now as an exercise --- correct expressio
 
 When simulating we embed our variables into a bigger system.
 
-This system also constructs the components of the decompositions of $y_t$ and of $\exp(y_t)$ proposed by Hansen {cite}`Hansen_2012_Eca`.
+This system also constructs the decomposition components of $y_t$ and $\exp(y_t)$ proposed by Hansen (2012).
 
 All of these objects are computed using the code below
 
@@ -302,24 +301,22 @@ class AMF_LSS_VAR:
         ν, H, g = self.additive_decomp()
 
         # Auxiliary blocks with 0's and 1's to fill out the lss matrices
-        nx0c = np.zeros((nx, 1))
-        nx0r = np.zeros(nx)
-        nx1 = np.ones(nx)
-        nk0 = np.zeros(nk)
+        nx0c = jnp.zeros((nx, 1))
+        nx0r = jnp.zeros(nx)
+        nx1 = jnp.ones(nx)
+        nk0 = jnp.zeros(nk)
         ny0c = np.zeros((nm, 1))
         ny0r = np.zeros(nm)
         ny1m = np.eye(nm)
         ny0m = np.zeros((nm, nm))
         nyx0m = np.zeros_like(D)
 
         # Build A matrix for LSS
-        # Order of states is: [1, t, xt, yt, mt]
-        A1 = np.hstack([1, 0, nx0r, ny0r, ny0r])    # Transition for 1
-        A2 = np.hstack([1, 1, nx0r, ny0r, ny0r])    # Transition for t
-        # Transition for x_{t+1}
-        A3 = np.hstack([nx0c, nx0c, A, nyx0m.T, nyx0m.T])
-        # Transition for y_{t+1}
-        A4 = np.hstack([ν, ny0c, D, ny1m, ny0m])
+        # Order of states is: [1, t, x_t, y_t, m_t]
+        A1 = np.hstack([1, 0, nx0r, ny0r, ny0r])    # Transition for 1
+        A2 = np.hstack([1, 1, nx0r, ny0r, ny0r])    # Transition for t
+        A3 = np.hstack([nx0c, nx0c, A, nyx0m.T, nyx0m.T])    # Transition for x_{t+1}
+        A4 = np.hstack([ν, ny0c, D, ny1m, ny0m])    # Transition for y_{t+1}
         # Transition for m_{t+1}
         A5 = np.hstack([ny0c, ny0c, nyx0m, ny0m, ny1m])
         Abar = np.vstack([A1, A2, A3, A4, A5])
@@ -328,12 +325,10 @@ class AMF_LSS_VAR:
         Bbar = np.vstack([nk0, nk0, B, F, H])
 
         # Build G matrix for LSS
-        # Order of observation is: [xt, yt, mt, st, tt]
-        # Selector for x_{t}
-        G1 = np.hstack([nx0c, nx0c, np.eye(nx), nyx0m.T, nyx0m.T])
-        G2 = np.hstack([ny0c, ny0c, nyx0m, ny1m, ny0m])    # Selector for y_{t}
-        # Selector for martingale
-        G3 = np.hstack([ny0c, ny0c, nyx0m, ny0m, ny1m])
+        # Order of observation is: [x_t, y_t, m_t, s_t, t_t]
+        G1 = np.hstack([nx0c, nx0c, np.eye(nx), nyx0m.T, nyx0m.T])    # Selector for x_t
+        G2 = np.hstack([ny0c, ny0c, nyx0m, ny1m, ny0m])    # Selector for y_t
+        G3 = np.hstack([ny0c, ny0c, nyx0m, ny0m, ny1m])    # Selector for martingale
         G4 = np.hstack([ny0c, ny0c, -g, ny0m, ny0m])    # Selector for stationary
        G5 = np.hstack([ny0c, ν, nyx0m, ny0m, ny0m])    # Selector for trend
         Gbar = np.vstack([G1, G2, G3, G4, G5])
@@ -370,7 +365,7 @@ class AMF_LSS_VAR:
         - H : vector for the Jensen term
         """
         ν, H, g = self.additive_decomp()
-        ν_tilde = ν + (.5)*np.expand_dims(np.diag(H @ H.T), 1)
+        ν_tilde = ν + 0.5 * np.expand_dims(np.diag(H @ H.T), 1)
 
         return ν_tilde, H, g
 
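The `0.5` coefficient in the hunk above is the usual Jensen adjustment: for $z \sim {\cal N}(0, I)$, $E[\exp(Hz)] = \exp(\tfrac{1}{2}\mathrm{diag}(HH'))$ componentwise. A small numerical sketch of that line, with made-up ν and H (not values from the lecture):

```python
import numpy as np

# Illustrative drift ν (2x1) and shock loading H (2x2); values are made up
ν = np.array([[0.01], [0.02]])
H = np.array([[0.1, 0.0], [0.05, 0.2]])

# Jensen adjustment, mirroring the ν_tilde line in the diff:
# diag(H H') holds the shock variances of each component of y
ν_tilde = ν + 0.5 * np.expand_dims(np.diag(H @ H.T), 1)
```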
@@ -546,15 +541,15 @@ def plot_multiplicative(amf, T, npaths=25, show_trend=True):
         # Lower and upper bounds - for each multiplicative functional
         for ii in range(nm):
             li, ui = ii*2, (ii+1)*2
-            Mdist = lognorm(np.sqrt(yvar[nx+nm+ii, nx+nm+ii]).item(),
-                            scale=np.exp(ymeans[nx+nm+ii] \
-                                         - t * (.5)
-                                         * np.expand_dims(
-                                             np.diag(H @ H.T),
-                                             1
-                                         )[ii]
-                            ).item()
-                   )
+            scale_val = np.exp(
+                ymeans[nx+nm+ii] - t * 0.5 * np.expand_dims(
+                    np.diag(H @ H.T), 1
+                )[ii]
+            ).item()
+            Mdist = lognorm(
+                np.sqrt(yvar[nx+nm+ii, nx+nm+ii]).item(),
+                scale=scale_val
+            )
             Sdist = lognorm(np.sqrt(yvar[nx+2*nm+ii, nx+2*nm+ii]).item(),
                             scale = np.exp(-ymeans[nx+2*nm+ii]).item())
             mbounds_mult[li:ui, t] = Mdist.ppf([.01, .99])
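One reason the `scale_val` refactor above deserves care: SciPy's `lognorm` is parameterized by a shape `s` (the standard deviation of $\log X$) and `scale = exp(mean of log X)`, which is exactly how the diff builds `Mdist` and `Sdist`. A quick self-contained check of that convention, using illustrative numbers rather than the lecture's:

```python
import numpy as np
from scipy.stats import lognorm

mu, s = 0.3, 0.8                      # mean and std of log X (made up)
dist = lognorm(s, scale=np.exp(mu))   # X = exp(mu + s Z), Z ~ N(0, 1)

# The median of a lognormal is exp(mu), so the scale round-trips
median_matches = bool(np.isclose(dist.median(), np.exp(mu)))
```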
@@ -844,12 +839,9 @@ interest.
 
 The class `AMF_LSS_VAR` mentioned {ref}`above <amf_lss>` does all that we want to study our additive functional.
 
-In fact, `AMF_LSS_VAR` does more
-because it allows us to study an associated multiplicative functional as well.
+In fact, `AMF_LSS_VAR` does more because it allows us to study an associated multiplicative functional as well.
 
-(A hint that it does more is the name of the class -- here AMF stands for
-"additive and multiplicative functional" -- the code computes and displays objects associated with
-multiplicative functionals too.)
+(A hint that it does more is the name of the class -- here AMF stands for "additive and multiplicative functional" -- the code computes and displays objects associated with multiplicative functionals too.)
 
 Let's use this code (embedded above) to explore the {ref}`example process described above <addfunc_eg1>`.
 
@@ -1100,9 +1092,9 @@ The heavy lifting is done inside the `AMF_LSS_VAR` class.
 The following code adds some simple functions that make it straightforward to generate sample paths from an instance of `AMF_LSS_VAR`.
 
 ```{code-cell} ipython3
-def simulate_xy(amf, T):
+def simulate_xy(amf, T, key):
     "Simulate individual paths."
-    foo, bar = amf.lss.simulate(T)
+    foo, bar = amf.lss.simulate(T, key=key)
     x = bar[0, :]
     y = bar[1, :]
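The new `key` argument above reflects JAX's functional approach to randomness: instead of hidden global RNG state, an explicit PRNG key is passed in and split for each draw. The `simulate(T, key=key)` signature is the commit's; everything in this sketch (function name, parameters, values) is illustrative only:

```python
import jax
import jax.numpy as jnp

def simulate_ar1(key, ρ, σ, T):
    # x_{t+1} = ρ x_t + σ w_{t+1}, drawing one fresh subkey per period
    x = [0.0]
    for _ in range(T):
        key, subkey = jax.random.split(key)
        x.append(ρ * x[-1] + σ * float(jax.random.normal(subkey)))
    return jnp.array(x)

key = jax.random.PRNGKey(0)
path_a = simulate_ar1(key, 0.9, 0.1, 5)
path_b = simulate_ar1(key, 0.9, 0.1, 5)
# Reusing the same key reproduces the same path exactly
```

Passing the same key twice yields identical paths, which is what makes simulations reproducible without global seeding.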