
Commit

Change \xi_k to \xi
cjb873 authored Jul 2, 2024
1 parent 427052d commit 21c214e
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions examples/ODEs/Part_9_SINDy.ipynb
@@ -10,7 +10,7 @@
"This tutorial demonstrates the use of [sparse identification of nonlinear dynamics (SINDy)](https://arxiv.org/abs/1509.03580) in Neuromancer. \n",
"\n",
"## SINDy for System Identification\n",
-"SINDy is a machine learning model that uses sparse regression techniques to estimate the dynamics that guide time derivatives $\\dot{x}_k$ from state variables $x$ at time $k$. SINDy does this by creating a library of candidate functions $\\theta$, at which each state variable in $x_k$ is evaluated, and coefficients $\\xi_k$. SINDy then uses linear regression to fit the coefficients in the equation $\\dot{x}_k = \\theta(x_k)\\xi_k$. SINDy aims to find as few nonzero coefficients as possible while still accurately describing the relationship between $x_k$ and $\\dot{x}_k$. With this, it is possible to use the values of $\\dot{x}_k$ to predict the trajectory of $x_k$ given some initial condition.\n",
+"SINDy is a machine learning model that uses sparse regression techniques to estimate the dynamics that guide time derivatives $\\dot{x}_k$ from state variables $x$ at time $k$. SINDy does this by creating a library of candidate functions $\\theta$, at which each state variable in $x_k$ is evaluated, and coefficients $\\xi$. SINDy then uses linear regression to fit the coefficients in the equation $\\dot{x}_k = \\theta(x_k)\\xi$. SINDy aims to find as few nonzero coefficients as possible while still accurately describing the relationship between $x_k$ and $\\dot{x}_k$. With this, it is possible to use the values of $\\dot{x}_k$ to predict the trajectory of $x_k$ given some initial condition.\n",
"\n",
"<img src=\"figs/sindy.jpeg\" width=600/>\n",
"\n",
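The library-plus-regression workflow described in the cell above can be sketched in plain NumPy. The polynomial library terms and function names below are illustrative assumptions for this sketch, not Neuromancer's API:

```python
import numpy as np

def candidate_library(x):
    """Evaluate a small polynomial candidate library theta(x) column-wise.

    x has shape (n_samples, n_states); the chosen terms (constant, linear,
    quadratic, pairwise products) are an illustrative choice.
    """
    n_samples, n_states = x.shape
    cols = [np.ones((n_samples, 1)), x, x**2]
    # pairwise products x_i * x_j for i < j
    cols += [x[:, [i]] * x[:, [j]]
             for i in range(n_states) for j in range(i + 1, n_states)]
    return np.hstack(cols)

def fit_coefficients(x, x_dot):
    """Fit xi in x_dot ~= theta(x) @ xi by ordinary least squares
    (sparsity promotion is omitted in this sketch)."""
    theta = candidate_library(x)
    xi, *_ = np.linalg.lstsq(theta, x_dot, rcond=None)
    return xi
```

With noiseless data generated by a linear system, the fitted library model reproduces the derivatives exactly, since the true dynamics lie inside the candidate library.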
@@ -414,7 +414,7 @@
"source": [
"## SINDy system model\n",
"\n",
-"Here we construct a SINDy model based on our library: $$\\dot{x}_k = \\theta(x_k^T)\\xi_k,$$ where $x_k$ is a row within the data matrix and $\\xi_k$ is the corresponding column within the matrix of coefficients. "
+"Here we construct a SINDy model based on our library: $$\\dot{x}_k = \\theta(x_k^T)\\xi,$$ where $x_k$ is a row within the data matrix and $\\xi$ is the corresponding column within the matrix of coefficients. "
]
},
{
@@ -435,7 +435,7 @@
"source": [
"## Model Integration\n",
"\n",
-"Then, we combine this SINDy model with a built-in Neuromancer integrator. This means that when we provide our model with a single data point $x_k$, we can receive an estimation of the next data point $x_{k+1}$, instead of an estimation of the derivative $\\dot{x_k}$ at the current time step. This saves us the step of having to actually find the true values of $\\dot{x}_k$. Now our model can be represented by the expression: $$x_{k+1} = \\text{ODESolve}(\\theta(x_k^T)\\xi_k)$$"
+"Then, we combine this SINDy model with a built-in Neuromancer integrator. This means that when we provide our model with a single data point $x_k$, we can receive an estimation of the next data point $x_{k+1}$, instead of an estimation of the derivative $\\dot{x_k}$ at the current time step. This saves us the step of having to actually find the true values of $\\dot{x}_k$. Now our model can be represented by the expression: $$x_{k+1} = \\text{ODESolve}(\\theta(x_k^T)\\xi)$$"
]
},
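The integration step in the cell above can be illustrated with a hand-rolled forward-Euler solver standing in for Neuromancer's built-in integrators; `euler_step` and `rollout` are names invented for this sketch:

```python
import numpy as np

def euler_step(f, x_k, h=0.01):
    """One forward-Euler step of ODESolve: x_{k+1} = x_k + h * f(x_k)."""
    return x_k + h * f(x_k)

def rollout(f, x0, n_steps, h=0.01):
    """Roll the model forward from an initial condition, so the fitted
    dynamics f predict a trajectory rather than a single derivative."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(euler_step(f, xs[-1], h))
    return np.stack(xs)
```

For example, with the decay dynamics `f(x) = -x` and step size `h = 0.1`, each Euler step multiplies the state by `0.9`.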
{
@@ -521,8 +521,8 @@
" \n",
"$$\n",
"\\begin{align}\n",
-"&\\underset{\\xi_k}{\\text{minimize}} && \\sum_{i=1}^m \\Big(Q_1||x^i_1 - \\hat{x}^i_1||_2^2 + \\sum_{k=1}^{N} Q_N||x^i_k - \\hat{x}^i_k||_2^2 \\Big) \\\\\n",
-"&\\text{subject to} && x_{k+1} = \\text{ODESolve}(\\theta(x_k^T)\\xi_k) \\\\\n",
+"&\\underset{\\xi}{\\text{minimize}} && \\sum_{i=1}^m \\Big(Q_1||x^i_1 - \\hat{x}^i_1||_2^2 + \\sum_{k=1}^{N} Q_N||x^i_k - \\hat{x}^i_k||_2^2 \\Big) \\\\\n",
+"&\\text{subject to} && x_{k+1} = \\text{ODESolve}(\\theta(x_k^T)\\xi) \\\\\n",
"\\end{align}\n",
"$$ "
]
@@ -550,7 +550,7 @@
"source": [
"## Solve the problem\n",
"\n",
-"We fit each of the unknown SINDy parameters $\\xi_k$ using stochastic gradient descent. In the [original SINDy paper](https://arxiv.org/abs/1509.03580), the authors fit $\\xi_k$ using the sequentially thresholded least squares regression algorithm. This algorithm consists of solving the normal equations for [least squares](https://en.wikipedia.org/wiki/Least_squares) regression to find coefficient values in $\\xi_k$. Then for some threshold $\\lambda$, set all values of $\\xi_k$ that are less than $\\lambda$ to $0$. This process is repeated until convergence. We use standard stochastic gradient descent based methods, which given proper training, will converge to the same values of $\\xi_k$ as sequentially thresholded least squares."
+"We fit each of the unknown SINDy parameters $\\xi$ using stochastic gradient descent. In the [original SINDy paper](https://arxiv.org/abs/1509.03580), the authors fit $\\xi$ using the sequentially thresholded least squares regression algorithm. This algorithm consists of solving the normal equations for [least squares](https://en.wikipedia.org/wiki/Least_squares) regression to find coefficient values in $\\xi$. Then for some threshold $\\lambda$, set all values of $\\xi$ that are less than $\\lambda$ to $0$. This process is repeated until convergence. We use standard stochastic gradient descent based methods, which given proper training, will converge to the same values of $\\xi$ as sequentially thresholded least squares."
]
},
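The sequentially thresholded least squares procedure described in the cell above can be sketched as follows; `stlsq` and its parameter names are illustrative for this sketch, not the SINDy reference implementation:

```python
import numpy as np

def stlsq(theta, x_dot, lam=0.1, n_iters=10):
    """Fit xi in x_dot = theta @ xi by sequentially thresholded least squares:
    solve least squares, zero coefficients with magnitude below lam, then
    refit the surviving terms; repeat."""
    xi = np.linalg.lstsq(theta, x_dot, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(xi) < lam       # coefficients to prune this round
        xi[small] = 0.0
        for j in range(x_dot.shape[1]):  # refit remaining terms per state
            big = ~small[:, j]
            if big.any():
                xi[big, j] = np.linalg.lstsq(
                    theta[:, big], x_dot[:, j], rcond=None)[0]
    return xi
```

On noiseless data generated from a sparse coefficient vector, the pruning step removes exactly the zero entries and the refit recovers the true values, which is the behavior the gradient-descent fit here is expected to approximate.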
{
