
Linear Algebra

Implementations of some matrix operations in Scala, plus a collection of notes on linear algebra.

Matrix decompositions (implemented here)

QR decomposition (A = QR)

QR decomposition takes a matrix A of linearly independent columns and makes them orthonormal. A set of vectors is orthonormal if (a) each vector is a unit vector, and (b) the dot product of any two distinct vectors is 0 (the dot product of a vector with itself is 1). The result is a matrix Q composed of the orthonormal vectors and an upper triangular matrix R. This implementation uses the modified Gram-Schmidt algorithm, which is more numerically stable than the classical variant. In the classical Gram-Schmidt process, at each step we subtract from the newly considered vector its projections onto the directions fixed at previous steps. In the modified process, as soon as the current vector has been made orthogonal and normalized, it is immediately used to subtract its parallel projection from all remaining vectors. The Householder reflections method loses even less orthogonality in the Q factor, but it is not implemented here.
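Below is a minimal sketch of the modified Gram-Schmidt step, not the repository's actual code; the array-of-columns representation and the MGS object name are assumptions for illustration.

```scala
object MGS {
  type Mat = Array[Array[Double]] // each inner array is one column vector

  // QR via modified Gram-Schmidt: returns (Q, R) with Q's columns
  // orthonormal and R upper triangular, assuming independent columns.
  def qr(a: Mat): (Mat, Mat) = {
    val n = a.length
    val q = a.map(_.clone)            // columns, orthonormalized in place
    val r = Array.ofDim[Double](n, n) // coefficients, upper triangular

    for (k <- 0 until n) {
      // normalize column k (its norm is nonzero by independence)
      r(k)(k) = math.sqrt(q(k).map(x => x * x).sum)
      for (i <- q(k).indices) q(k)(i) /= r(k)(k)
      // the "modified" step: as soon as q(k) is orthonormal, subtract
      // its parallel projection from all remaining columns
      for (j <- k + 1 until n) {
        r(k)(j) = q(k).zip(q(j)).map { case (x, y) => x * y }.sum
        for (i <- q(j).indices) q(j)(i) -= r(k)(j) * q(k)(i)
      }
    }
    (q, r)
  }
}
```

Multiplying Q by R column by column reconstructs A, which is a useful sanity check for the factorization.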

LU decomposition (A = LU)

Gaussian elimination

Returns the matrix in row echelon form. Partial pivoting is used, which makes the elimination numerically stable in most cases: before elimination continues, the row whose pivot-column entry has the largest absolute value is interchanged with the row holding the current (smaller) pivot. This is generally sufficient to reduce round-off error; a sketch follows.
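A minimal sketch of elimination with partial pivoting; the row-major Array[Array[Double]] representation and the rowEchelon name are assumptions, not necessarily the repository's API.

```scala
// Gaussian elimination with partial pivoting: reduces a row-major
// matrix to row echelon form in place (a sketch).
def rowEchelon(a: Array[Array[Double]]): Array[Array[Double]] = {
  val m = a.length    // number of rows
  val n = a(0).length // number of columns
  var pivotRow = 0
  for (col <- 0 until n if pivotRow < m) {
    // partial pivoting: pick the row with the largest |entry| in this column
    val best = (pivotRow until m).maxBy(r => math.abs(a(r)(col)))
    if (math.abs(a(best)(col)) > 1e-12) {
      val tmp = a(pivotRow); a(pivotRow) = a(best); a(best) = tmp // swap rows
      // eliminate the entries below the pivot
      for (r <- pivotRow + 1 until m) {
        val factor = a(r)(col) / a(pivotRow)(col)
        for (c <- col until n) a(r)(c) -= factor * a(pivotRow)(c)
      }
      pivotRow += 1
    }
  }
  a
}
```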

Gauss Jordan elimination

Returns the matrix in reduced row echelon form. The elimination starts the same way as Gaussian elimination, but after a row echelon form is obtained it continues upward, scaling each pivot to 1 and zeroing out the entries above it.
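The upward sweep could look like the sketch below, continuing from the rowEchelon sketch above (same assumed representation, same hypothetical names).

```scala
// Gauss-Jordan: continue from row echelon form, scaling each pivot
// to 1 and clearing the entries above it (a sketch).
def reducedRowEchelon(a: Array[Array[Double]]): Array[Array[Double]] = {
  rowEchelon(a)
  for (r <- a.indices.reverse) {
    val col = a(r).indexWhere(x => math.abs(x) > 1e-12) // pivot column
    if (col >= 0) {
      val p = a(r)(col)
      for (c <- a(r).indices) a(r)(c) /= p // scale the pivot row
      for (above <- 0 until r) {           // clear entries above the pivot
        val factor = a(above)(col)
        for (c <- a(r).indices) a(above)(c) -= factor * a(r)(c)
      }
    }
  }
  a
}
```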

Cayley-Hamilton theorem

Every square matrix is a zero of its characteristic polynomial: if p(t) = det(t*In - A), then p(A) = 0.
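A concrete 2x2 check: for A = [[1, 2], [3, 4]] the characteristic polynomial is p(t) = t^2 - 5t - 2, and indeed A^2 - 5A - 2I = [[7, 10], [15, 22]] - [[5, 10], [15, 20]] - [[2, 0], [0, 2]] = 0.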

Determinants

A matrix and its transpose have the same determinant. A matrix and its transpose also have the same characteristic polynomial.

Normal, unitary, and orthogonal matrix

A matrix A is normal if it commutes with its conjugate transpose (A*):

A A* = A* A

Since for a real matrix the transpose is equal to the conjugate transpose, a real matrix A is normal if:

A A^T = A^T A

Among real matrices, all orthogonal, symmetric, and skew-symmetric matrices are normal.

A complex square matrix U is unitary if its conjugate transpose U* is also its inverse:

U* U = U U* = I

Over the complex field, all unitary matrices are normal.

An orthogonal matrix is the real analogue of a unitary matrix. It satisfies:

Q^T Q = Q Q^T = I, i.e. Q^T = Q^-1
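A quick numeric check of the orthogonality condition; the 2x2 rotation matrix here is just an illustrative choice.

```scala
// Rotation matrices are orthogonal (hence normal): verify Q^T Q = I
// up to floating-point error (a sketch).
val theta = 0.7
val q = Array(Array(math.cos(theta), -math.sin(theta)),
              Array(math.sin(theta),  math.cos(theta)))
val qt  = Array.tabulate(2, 2)((i, j) => q(j)(i)) // transpose
val qtq = Array.tabulate(2, 2)((i, j) =>
  (0 until 2).map(k => qt(i)(k) * q(k)(j)).sum)
assert((0 until 2).forall(i => (0 until 2).forall(j =>
  math.abs(qtq(i)(j) - (if (i == j) 1.0 else 0.0)) < 1e-12)))
```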

Matrix similarity

A is similar to B (A ~ B) if A and B are nxn matrices and there exists an invertible (change of basis) matrix P such that:

B = P^-1 A P

Similar matrices:

  • represent the same linear operator under (potentially) different bases
  • have the same characteristic polynomial
  • have the same trace. Trace can be thought of as a property of a linear operator rather than a particular matrix.

Characteristic polynomial of a matrix

The characteristic polynomial of A is the determinant of the characteristic matrix of A:

p(t) = det(t*In - A)

Two useful facts (a worked 2x2 expansion follows the list):

  • Eigenvalues of A are roots of characteristic polynomial of A.
  • Matrix A and its transpose have the same characteristic polynomial (a matrix is similar to its transpose).
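For a 2x2 matrix A = [[a, b], [c, d]], expanding the determinant gives det(t*I2 - A) = (t - a)(t - d) - bc = t^2 - (a + d)t + (ad - bc), i.e. t^2 - tr(A)*t + det(A).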

Matrix diagonalization

Two problems can appear when we try to put a matrix into diagonal form: (i) the eigenvalues might not lie in the field of interest, and (ii) the eigenvectors belonging to the eigenvalues might not form a basis (an nxn matrix must have n linearly independent eigenvectors to be diagonalizable). In such cases we may instead put the matrix into Jordan canonical form.

A linear operator T:V->V has a diagonal matrix representation iff its minimal polynomial m(t) is a product of distinct linear polynomials.

Linear functional

A linear functional on a vector space V is a linear mapping T from V into its field K of scalars (T:V->K). An example is the trace mapping, which assigns to each n-square matrix its trace (the sum of its diagonal elements).
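The trace functional is easy to state in code; a sketch (the trace name is just illustrative):

```scala
// The trace: a linear functional from n x n real matrices to scalars.
def trace(a: Array[Array[Double]]): Double =
  a.indices.map(i => a(i)(i)).sum
// Linearity: trace(A + B) = trace(A) + trace(B), and scaling a matrix
// by k scales its trace by k.
```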

Homomorphism, first dual space, second dual space

Let V and U be vector spaces over a field K. The collection of all linear mappings from V into U, with the usual operations of addition and scalar multiplication, forms a vector space over K, denoted Hom(V, U).

The dual vector space V* is the collection of all linear functionals on V (together with addition and scalar multiplication). The dual space is itself a vector space over K, and for finite-dimensional V the dimensions of V and V* are equal. V* in turn has its own dual space, called the second dual space V**.

Bilinear, quadratic, sesquilinear, and Hermitian forms

Bilinear form

A bilinear form on a finite-dimensional vector space V over a field K is a mapping f:VxV->K which is linear in each of the arguments separately. An example is the dot product on Rn. Matrix representation of f relative to a basis {ei} [1]:

Let f be a bilinear form on V, and let {e1, ..., en} be a basis of V. Suppose that u, v belong to V with

u = a1*e1 + ... + an*en and v = b1*e1 + ... + bn*en

Then

f(u, v) = sum over all i, j of ai*bj*f(ei, ej)

Thus, f is completely determined by the n^2 values f(ei, ej).

The matrix A = (aij) where aij = f(ei, ej) is called the matrix representation of f relative to the basis {ei}. For all u, v belonging to V:

f(u, v) = [u]^T A [v]

where [u] and [v] denote the coordinate (column) vectors of u and v relative to {ei}.

Congruent matrices represent the same bilinear form in different bases. A matrix B is congruent to a matrix A if there exists an invertible matrix P such that:

B = P^T A P
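Evaluating a bilinear form from its matrix representation is a couple of lines; a sketch with an assumed coordinate-vector representation:

```scala
// f(u, v) = [u]^T A [v], where u and v are coordinate vectors
// relative to the basis {e_i} (a sketch).
def bilinear(a: Array[Array[Double]])(u: Array[Double], v: Array[Double]): Double =
  (for (i <- u.indices; j <- v.indices) yield u(i) * a(i)(j) * v(j)).sum
// With A = I this reduces to the dot product on R^n.
```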

Quadratic form

A quadratic form is a function q:Rn->R of the form

q(x) = x^T A x

where A = A^T (A is symmetric).

More generally, a mapping q:V->K is a quadratic form if q(v) = f(v, v) for some symmetric bilinear form f on V.
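As a concrete expansion: for A = [[a, b], [b, c]] and x = (x1, x2), q(x) = x^T A x = a*x1^2 + 2b*x1*x2 + c*x2^2, the familiar general quadratic in two variables.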

Sesquilinear form

A sesquilinear form is a generalization of a bilinear form: where a bilinear form must be linear in both arguments, a sesquilinear form allows one of the arguments to be "twisted" in a semilinear (conjugate-linear) manner.

Hermitian form

Also called a symmetric sesquilinear form, it is a sesquilinear form h:VxV->C such that

h(u, v) = conj(h(v, u))

The function h is linear in the first variable and conjugate-linear in the second. The matrix representation of a complex Hermitian form is a Hermitian matrix, i.e. a matrix that is equal to its conjugate transpose. A real matrix is Hermitian iff it is symmetric.

Sylvester's law of inertia

Two real symmetric matrices of the same size have the same number of positive, negative, and zero eigenvalues iff they are congruent.

Rank, kernel and nullity

Let T:V->U be a linear transformation. The image of T (ImT) is the set of image points in U:

ImT = {u in U : T(v) = u for some v in V}

The kernel of T (KerT, the null space) is the set of elements in V which map to 0 in U:

KerT = {v in V : T(v) = 0}

  • The image is a subspace of U and the kernel is a subspace of V.
  • Nullity = dim KerT
  • Rank = dim ImT (for a matrix, computable by elimination; see the sketch below)
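For a matrix viewed as a linear map, the rank is the number of nonzero rows left after Gaussian elimination; a sketch reusing the hypothetical rowEchelon from above:

```scala
// Rank = number of nonzero rows in row echelon form (a sketch).
// Nullity then follows from the rank-nullity theorem:
// dim V = rank + nullity.
def rank(a: Array[Array[Double]]): Int =
  rowEchelon(a.map(_.clone)) // work on a copy, then count surviving rows
    .count(row => row.exists(x => math.abs(x) > 1e-12))
```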

[1] S. Lipschutz, Theory and Problems of Linear Algebra, Schaum's Outline Series, 1968.
