References: LeVeque 2007, Finite Difference Methods for Ordinary and Partial Differential Equations. Particularly Chapters 1, 2, and 9.
Others: Morton and Mayers; Süli and Mayers. Many specialized textbooks.
Very general technique for differential equations, based on approximating functions on a grid and replacing derivatives with differences.
Example: the definition of the derivative,
$$u'(x) = \lim_{h \to 0} \frac{u(x+h) - u(x)}{h}.$$
Drop the limit, take $h$ small:
$$u'(x) \approx \frac{u(x+h) - u(x)}{h}.$$
Somehow, everything we want to talk about is already present in this example:
- continuous --> discrete
- differential --> algebraic
- accuracy (Taylor series analysis, effect of terms dropped?)
- division by small $h$
- computational issues: want more accuracy? use smaller $h$, more points, more work (see the sketch below).
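Everything above can be watched happen in a few lines. A minimal sketch (Python; the test function $u = \sin$ and the point $x = 1$ are my illustrative choices, not from the references):

```python
import numpy as np

# Minimal sketch of the opening example: drop the limit, take h small.
# Test function u = sin and point x = 1 are illustrative choices.
u, du = np.sin, np.cos
x = 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs((u(x + h) - u(x)) / h - du(x))
    print(f"h = {h:.4f}   error = {err:.2e}")   # error ~ C*h: first order
```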
Notation: replace the function $u(x)$ by a vector of grid values $u_i \approx u(x_i)$ at grid points $x_i = a + ih$ with spacing $h$.
I reserve the right to also use both $u_i$ and $U_i$ for these grid values.
For a multidimensional (e.g., time-dependent) problem: $u_i^n \approx u(x_i, t_n)$.
How to derive finite difference rules like the example? One approach:
fit a polynomial through the values at nearby grid points, differentiate the polynomial, and evaluate at the point of interest.
Fitting such a polynomial is interpolation. The monomial basis is often a poor choice [sketch motivation]; instead look at the "cardinal polynomials":
$$\ell_j(x) = \prod_{k \ne j} \frac{x - x_k}{x_j - x_k}.$$
These are like the "$[0; 1; 0; 0]$" vectors: zero at all grid points except $x_j$, where $\ell_j(x_j) = 1$.
The polynomial interpolant of degree $n$ through $n+1$ points is then
$$p(x) = \sum_j u_j \, \ell_j(x).$$
Very nice mathematical theory here (we've seen a bit of it): existence and uniqueness ($\exists!$), etc. Also useful in practice.
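One way to make this concrete: impose that the rule $\sum_j w_j u(x_j)$ is exact on a polynomial basis and solve the small linear system for the weights $w_j$; this is equivalent to differentiating the interpolant. A sketch, assuming only NumPy; the helper name `fd_weights` is mine, not from any reference:

```python
import math
import numpy as np

def fd_weights(x, x0, m):
    """Weights w so that sum(w * u(x)) approximates the m-th derivative
    of u at x0, via polynomial interpolation through the nodes x."""
    n = len(x)
    # Moment conditions: sum_j w_j (x_j - x0)^k = k! * delta_{k,m},
    # i.e., the rule is exact for 1, (x - x0), ..., (x - x0)^{n-1}.
    A = np.vander(np.asarray(x, float) - x0, n, increasing=True).T
    b = np.zeros(n)
    b[m] = math.factorial(m)
    return np.linalg.solve(A, b)

h = 0.1
print(fd_weights([-h, 0.0, h], 0.0, 1))  # centered u':  [-1, 0, 1]/(2h)
print(fd_weights([-h, 0.0, h], 0.0, 2))  # the "1 -2 1" rule: [1, -2, 1]/h^2
```

The same system reproduces one-sided and higher-order rules just by changing the nodes.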
When doing quadrature, we could do a (rough) error analysis based on the error formula for the interpolant. Can we do so here?
[Example: no, at least not in the most obvious way...]
The second derivative, i.e., the derivative of the derivative: use a backward difference of the forward difference:
$$u''(x) \approx \frac{1}{h}\left( \frac{u(x+h) - u(x)}{h} - \frac{u(x) - u(x-h)}{h} \right) = \frac{u(x-h) - 2u(x) + u(x+h)}{h^2},$$
or we can say more with a few more symbols:
$$\frac{u(x-h) - 2u(x) + u(x+h)}{h^2} = u''(x) + \frac{h^2}{12}\, u''''(x) + O(h^4).$$
This will be our "work-horse" method/example.
Note: second-order accurate: replace $h$ by $h/2$ and the error drops by a factor of about 4.
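A quick check of this claim (illustrative test function again):

```python
import numpy as np

# Check second-order accuracy of the "1 -2 1" rule: the error should
# drop ~4x when h is halved, and error/h^2 should approach the
# u''''(x)/12 coefficient from the expansion above (sin(1)/12 ~ 0.0701).
u = np.sin
d2u = lambda t: -np.sin(t)
x = 1.0
prev = None
for h in [0.2, 0.1, 0.05, 0.025]:
    err = abs((u(x - h) - 2 * u(x) + u(x + h)) / h**2 - d2u(x))
    ratio = "" if prev is None else f"   ratio = {prev / err:.2f}"
    print(f"h = {h:.3f}   error/h^2 = {err / h**2:.5f}{ratio}")
    prev = err
```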
Suppose the boundary values $u(a) = \alpha$ and $u(b) = \beta$ are known (Dirichlet conditions). Applying the rule at each interior point $x_1, \ldots, x_m$:
$$u_{xx} \approx \frac{1}{h^2}
\begin{bmatrix}
-2 & 1 & & & \\
1 & -2 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & 1 & -2 & 1 \\
& & & 1 & -2
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_{m-1} \\ u_m \end{bmatrix}
+
\begin{bmatrix} \alpha/h^2 \\ 0 \\ \vdots \\ 0 \\ \beta/h^2 \end{bmatrix}.$$
(other ways to view this, e.g., as a rectangular matrix).
Evaluating a differential operator on a scalar function is approximated by a matrix multiplying a vector.
(All with initial conditions and boundary conditions as appropriate.)
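A sketch of this viewpoint with SciPy's sparse matrices (grid on $(0,1)$, test function $\sin(\pi x)$, and all names are illustrative; here $\alpha = \beta = 0$, so the boundary vector is zero):

```python
import numpy as np
import scipy.sparse as sp

# Sketch of "differential operator = matrix": apply the tridiagonal
# operator above to grid values of u(x) = sin(pi*x) on (0, 1).
m = 50
h = 1.0 / (m + 1)
x = np.linspace(h, 1 - h, m)                # interior grid points
A = sp.diags([1, -2, 1], [-1, 0, 1], shape=(m, m)) / h**2

u = np.sin(np.pi * x)                       # u(0) = u(1) = 0 here
bc = np.zeros(m)                            # alpha/h^2, ..., beta/h^2 in general
uxx_approx = A @ u + bc
print(np.max(np.abs(uxx_approx - (-np.pi**2) * u)))   # small: O(h^2)
```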
Our primary example problem: the heat equation
$$u_t = u_{xx}.$$
Or with a forcing:
$$u_t = u_{xx} + f(x, t).$$
Or the steady problem:
$$u_{xx} = f(x), \qquad u(a) = \alpha, \quad u(b) = \beta.$$
Example:
$$\frac{1}{h^2}
\begin{bmatrix}
-2 & 1 & & & \\
1 & -2 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & 1 & -2 & 1 \\
& & & 1 & -2
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_{m-1} \\ u_m \end{bmatrix}
=
\begin{bmatrix} f_1 - \alpha/h^2 \\ f_2 \\ \vdots \\ f_{m-1} \\ f_m - \beta/h^2 \end{bmatrix}.$$
[discuss boundary conditions here]. That is, $A u = \hat f$
for vectors $u = (u_1, \ldots, u_m)^T$ and $\hat f$, where $\hat f$ is $f$ on the grid with the known boundary data folded into the first and last entries.
Our "1 -2 1" rule was applied at every interior point; at $x_1$ and $x_m$ the missing neighbours are the known values $u_0 = \alpha$ and $u_{m+1} = \beta$, which move to the right-hand side.
Need three concepts: consistency, stability and convergence.
Substitute the true solution of the PDE into the discrete problem. Discrepancy is the local truncation error (LTE).
Symbol: $\tau$ (or $\tau_i$; $\tau_i^n$ for time-dependent problems).
Express the LTE in "big Oh" notation in $h$ (and the time step $k$, when present).
Example: steady state.
The true solution $u(x)$ satisfies $u'' = f$; substitute it into the discrete problem:
$$\tau_i = \frac{u(x_{i-1}) - 2u(x_i) + u(x_{i+1})}{h^2} - f(x_i).$$
Now Taylor expand each term, and use the PDE:
$$\tau_i = u''(x_i) + \frac{h^2}{12}\, u''''(x_i) + O(h^4) - f(x_i) = \frac{h^2}{12}\, u''''(x_i) + O(h^4).$$
Thus $\tau_i = O(h^2)$.
Consistency: LTE goes to zero as $h \to 0$.
Define $\hat u$ to be the true solution evaluated on the grid, $\hat u_i = u(x_i)$.
Global error: the difference $e = u - \hat u$ between the numerical solution and the true solution on the grid.
Convergence: global error goes to zero as $h \to 0$.
Example: continue the steady-state one. Note:
$$A \hat u = f + \tau, \qquad A u = f.$$
So $A e = -\tau$, i.e., $e = -A^{-1}\tau$, and hence $\|e\| \le \|A^{-1}\| \, \|\tau\|$.
In this case, the relationship between global error and LTE involves properties of the matrix $A$.
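Both quantities are easy to compute numerically. A sketch (same kind of manufactured problem; homogeneous boundary data keeps it short):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch: compute the LTE tau by substituting the true solution into the
# discrete problem, then verify ||e||_2 <= ||A^{-1}||_2 ||tau||_2.
# u(x) = sin(pi*x), so alpha = beta = 0 and no boundary terms appear.
m = 100
h = 1.0 / (m + 1)
x = np.linspace(h, 1 - h, m)
A = sp.diags([1, -2, 1], [-1, 0, 1], shape=(m, m), format="csc") / h**2
f = -np.pi**2 * np.sin(np.pi * x)
u_true = np.sin(np.pi * x)

tau = A @ u_true - f                      # LTE: A u_hat = f + tau
u = spla.spsolve(A, f)
e = u - u_true                            # global error, e = -A^{-1} tau

Ainv_norm = 1 / np.min(np.abs(np.linalg.eigvalsh(A.toarray())))  # A symmetric
print(np.max(np.abs(tau)))                # ~ (h^2/12) * pi^4, small
print(np.linalg.norm(e), Ainv_norm * np.linalg.norm(tau))  # left <= right
```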
Steady state problem:
Defn 2.1 from LeVeque: a finite difference method applied to a linear BVP gives a family of linear systems $A^h u^h = f^h$, one for each grid spacing $h$. The method is stable if $(A^h)^{-1}$ exists for all sufficiently small $h$ and there is a constant $C$, independent of $h$, such that $\|(A^h)^{-1}\| \le C$.
Time dependent: small errors (LTE) introduced at one time step should not grow too much as they propagate. (More later.)
The fundamental theorem of finite difference methods (maybe even "... of Numerical Analysis"):
Consistency + Stability $\implies$ Convergence.
Stability is the hard one; often this "theorem" needs many restrictions for particular classes of problems and methods. Also, we often adjust what we mean by stability to fit the problem/method at hand.
If matrix is symmetric (e.g., the one above) then 2-norm is spectral radius (maximum magnitude eigenvalue).
Inverse: the eigenvalues of $A^{-1}$ are $1/\lambda_p$, so $\|A^{-1}\|_2 = 1/\min_p |\lambda_p|$.
So we want the smallest-magnitude eigenvalue of $A$ to stay bounded away from zero as $h \to 0$.
E.g., [LeVeque pg 21], can show for our tridiagonal $A$:
$$\lambda_p = \frac{2}{h^2}\left( \cos(p \pi h) - 1 \right), \qquad p = 1, \ldots, m.$$
In particular, the smallest in magnitude is $\lambda_1 = \frac{2}{h^2}(\cos(\pi h) - 1) = -\pi^2 + O(h^2)$, so $\|A^{-1}\|_2 \to 1/\pi^2$: bounded independently of $h$. The method is stable, hence convergent, with global error $\|e\| \le \|A^{-1}\| \, \|\tau\| = O(h^2)$.
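A sketch verifying both the formula and the bound (dense eigensolver for simplicity; sizes are illustrative):

```python
import numpy as np

# Check the eigenvalue formula [LeVeque pg 21] and the stability bound:
# ||A^{-1}||_2 = 1/min|lambda_p| should approach 1/pi^2 ~ 0.1013.
for m in [25, 50, 100]:
    h = 1.0 / (m + 1)
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    lam = np.linalg.eigvalsh(A)
    p = np.arange(1, m + 1)
    formula = 2 / h**2 * (np.cos(p * np.pi * h) - 1)
    print(f"m = {m:4d}"
          f"   formula check: {np.max(np.abs(np.sort(lam) - np.sort(formula))):.1e}"
          f"   ||A^-1||_2 = {1 / np.min(np.abs(lam)):.4f}")
```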