Eigenvalues, Eigenvectors, decomposition and operations

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors

Which vectors remain unchanged in direction after a transformation?

That is, for a matrix \(A\), which vectors \(v\) are simply scaled by some factor \(\lambda\) when the matrix is applied?

\(Av=\lambda v\)

Spectrum

The spectrum of a matrix is the set of its eigenvalues.

Eigenvectors as a basis

If the eigenvectors span the space, we can write any vector as a linear combination of them:

\(v=\sum_i \alpha_i | \lambda_i\rangle\)

Under what circumstances do the eigenvectors span the entire space?

Calculating eigenvalues and eigenvectors using the characteristic polynomial

The characteristic polynomial of a matrix is a polynomial whose roots are the eigenvalues of the matrix.

We know from the definition of eigenvalues and eigenvectors that:

\(Av=\lambda v\)

Note that

\(Av-\lambda v=0\)

\(Av-\lambda Iv=0\)

\((A-\lambda I)v=0\)

Trivially we see that \(v=0\) is a solution.

For a non-zero solution \(v\), the matrix \(A-\lambda I\) must be non-invertible. That is:

\(\det(A-\lambda I)=0\)

Calculating eigenvalues

For example

\(A=\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\)

\(A-\lambda I=\begin{bmatrix}2-\lambda &1\\1 & 2-\lambda \end{bmatrix}\)

\(\det(A-\lambda I)=(2-\lambda )(2-\lambda )-1\)

Setting this to \(0\):

\((2-\lambda )(2-\lambda )-1=0\)

\(\lambda =1,3\)

Calculating eigenvectors

You can plug this into the original problem.

For example

\(Av=3v\)

\(\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=3\begin{bmatrix}x_1\\x_2\end{bmatrix}\)

As any non-zero scalar multiple of an eigenvector is also an eigenvector, we are free to normalise by setting \(x_1=1\).

\(\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\begin{bmatrix}1\\x_2\end{bmatrix}=\begin{bmatrix}3\\3x_2\end{bmatrix}\)

Here \(x_2=1\) and so the eigenvector corresponding to eigenvalue \(3\) is:

\(\begin{bmatrix}1\\1\end{bmatrix}\)
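As a quick numerical check of this example (a minimal sketch assuming numpy is available; note that `np.linalg.eig` returns unit-norm eigenvectors rather than the \(x_1=1\) normalisation used above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigenvalues 1 and 3 (in no guaranteed order), eigenvectors as columns
eigenvalues, eigenvectors = np.linalg.eig(A)

# check A v = lambda v for each eigenpair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```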

Traces

The trace of a matrix is the sum of its diagonal components.

\(Tr(M)=\sum_i^nm_{ii}\)

The trace of a matrix is equal to the sum of its eigenvalues.

The trace can also be written as a sum of inner products with the standard basis vectors \(e_i\).

\(Tr(M)=\sum_{i=1}^n e_i^TMe_i\)

Properties of traces

Traces commute

\(Tr(AB)=Tr(BA)\)

Traces of \(1\times 1\) matrices are equal to their component.

\(Tr(M)=m_{11}\)

Trace trick

If we want to manipulate the scalar:

\(v^TMv\)

We can use properties of the trace.

\(v^TMv=Tr(v^TMv)\)

\(v^TMv=Tr([v^T][Mv])\)

\(v^TMv=Tr([Mv][v^T])\)

\(v^TMv=Tr(Mvv^T)\)
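A minimal numerical sketch of the trace trick, assuming numpy is available (the sizes and random values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
v = rng.normal(size=(3, 1))

lhs = (v.T @ M @ v).item()       # the scalar v^T M v
rhs = np.trace(M @ v @ v.T)      # Tr(M v v^T)
assert np.isclose(lhs, rhs)
```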

Matrix operations

Matrix powers

For a square matrix \(M\) we can calculate \(MMMM...\), or \(M^n\) where \(n\in \mathbb{N}\).

Powers of diagonal matrices

Generally, calculating a matrix to an integer power can be complicated. For diagonal matrices it is trivial.

For a diagonal matrix \(D\), the power \(M=D^n\) is also diagonal, with \(m_{ii}=d_{ii}^n\).

Matrix exponentials

The exponential of a complex number is defined as:

\(e^x=\sum_{j=0}^\infty \dfrac{1}{j!}x^j\)

We can extend this definition to matrices.

\(e^X:=\sum_{j=0}^\infty \dfrac{1}{j!}X^j\)

The dimension of a matrix and its exponential are the same.
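A minimal sketch comparing a truncated power series against scipy's matrix exponential (assuming scipy is available; the matrix and the 20-term cutoff are arbitrary choices):

```python
import math
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# truncate the series  e^X = sum_j X^j / j!
series = sum(np.linalg.matrix_power(X, j) / math.factorial(j) for j in range(20))

assert np.allclose(expm(X), series)
```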

Matrix logarithms

If we have \(e^A=B\) where \(A\) and \(B\) are matrices, then we can say that \(A\) is a matrix logarithm of \(B\).

That is:

\(\log B=A\)

The dimensions of a matrix and its logarithm are the same.

Matrix square roots

For a matrix \(M\), the square root \(M^{\dfrac{1}{2}}\) is \(A\) where \(AA=M\).

This does not necessarily exist.

Square roots may not be unique.

Real matrices may have no real square root.
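A minimal sketch using scipy's `sqrtm` (an assumption that scipy is available); the symmetric matrix below happens to have a real square root, but as noted above this is not guaranteed in general:

```python
import numpy as np
from scipy.linalg import sqrtm

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

A = sqrtm(M)                 # may be complex for other matrices
assert np.allclose(A @ A, M)
```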

Matrix decomposition

Similar matrices

All real symmetric matrices are Hermitian.

For a diagonal matrix, the eigenvalues are the diagonal entries.

Two matrices \(M\) and \(A\) are similar if there is an invertible matrix \(P\) such that:

\(M=P^{-1}AP\)

\(M\) and \(A\) then have the same eigenvalues. In particular, if \(A\) is diagonal, its diagonal entries are the eigenvalues of \(M\).

Defective and diagonalisable matrices

Diagonalisable matrices and eigendecomposition

A matrix \(M\) is diagonalisable if there exists an invertible matrix \(P\) and a diagonal matrix \(A\) such that:

\(M=P^{-1}AP\)

Diagonalisable matrices and powers

If these exist then we can more easily work out matrix powers.

\(M^n=(P^{-1}AP)^n=P^{-1}A^nP\)

\(A^n\) is easy to calculate: each diagonal entry is simply raised to the power of \(n\).
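A minimal sketch of this shortcut, assuming numpy is available. `np.linalg.eig` returns the decomposition in the form \(M=Q\Lambda Q^{-1}\) (the eigen-decomposition described below), which is the same idea with \(P=Q^{-1}\):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = 5

eigenvalues, Q = np.linalg.eig(M)        # M = Q diag(eigenvalues) Q^{-1}
Lambda_n = np.diag(eigenvalues ** n)     # raise each eigenvalue to the power n
M_n = Q @ Lambda_n @ np.linalg.inv(Q)

assert np.allclose(M_n, np.linalg.matrix_power(M, n))
```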

Defective matrices

Defective matrices are those which cannot be diagonalised.

Non-singular matrices can be defective or non-defective; the identity matrix, for example, is non-singular and not defective.

Singular matrices can also be defective or non-defective; the empty matrix, for example, is singular but not defective.

Eigen-decomposition

Consider an eigenvector \(v\) and eigenvalue \(\lambda\) of matrix \(M\).

We know that \(Mv=\lambda v\).

Collecting all of the eigenvectors and eigenvalues together, we can generalise this to:

\(MQ=Q\Lambda\)

Where \(Q\) is the matrix with the eigenvectors as columns, and \(\Lambda\) is a diagonal matrix with the corresponding eigenvalues. We can then show that:

\(M=Q\Lambda Q^{-1}\)

This is only possible to calculate if the matrix of eigenvectors is non-singular. Otherwise the matrix is defective.

If there are linearly dependent eigenvectors then we cannot use eigen-decomposition.

Using the eigen-decomposition to invert a matrix

This can be used to invert \(M\).

We know that:

\(M^{-1}=(Q\Lambda Q^{-1})^{-1}\)

\(M^{-1}=Q\Lambda^{-1}Q^{-1}\)

We know \(\Lambda\) can be easily inverted by taking the reciprocal of each diagonal element. We already know both \(Q\) and its inverse from the decomposition.

If any eigenvalues are \(0\) then \(\Lambda\) cannot be inverted. These are singular matrices.
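A minimal numerical sketch of inversion through the eigen-decomposition, assuming numpy is available and using a non-singular example matrix:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(M)        # M = Q Lambda Q^{-1}
Lambda_inv = np.diag(1.0 / eigenvalues)  # reciprocal of each eigenvalue
M_inv = Q @ Lambda_inv @ np.linalg.inv(Q)

assert np.allclose(M_inv, np.linalg.inv(M))
```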

Spectral theorem for finite-dimensional vector spaces

Other

Commutation

We define a function, the commutator, between two objects \(a\) and \(b\) as:

\([a,b]=ab-ba\)

For numbers, \(ab-ba=0\), however for matrices this is not generally true.

Commutators and eigenvectors

Consider two matrices which share an eigenvector \(v\).

\(Av=\lambda_A v\)

\(Bv=\lambda_B v\)

Now consider:

\(ABv=A\lambda_B v\)

\(ABv=\lambda_A\lambda_B v\)

\(BAv=\lambda_A\lambda_B v\)

If two matrices share a full set of eigenvectors (that is, they are simultaneously diagonalisable), then the matrices commute and \(AB=BA\).
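A minimal sketch, assuming numpy is available: two matrices are built from the same eigenvectors but different eigenvalues, and their commutator comes out as the zero matrix:

```python
import numpy as np

Q = np.array([[1.0, 1.0],
              [1.0, -1.0]])              # shared eigenvectors as columns
Q_inv = np.linalg.inv(Q)

A = Q @ np.diag([2.0, 5.0]) @ Q_inv      # same eigenvectors,
B = Q @ np.diag([3.0, -1.0]) @ Q_inv     # different eigenvalues

commutator = A @ B - B @ A               # [A, B] = AB - BA
assert np.allclose(commutator, np.zeros((2, 2)))
```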

Identity matrix and the Kronecker delta

Matrix addition and multiplication

Matrix multiplication

\(A=A^{mn}\)

\(B=B^{no}\)

\(C=C^{mo}=AB\)

\(c_{ij}=\sum_{r=1}^na_{ir}b_{rj}\)

Matrix multiplication depends on the order of the factors. Unlike for real numbers, in general

\(AB\ne BA\)

Matrix multiplication is not defined unless the condition above on dimensions is met.

A matrix multiplied by the identity matrix returns the original matrix.

For matrix \(M=M^{mn}\)

\(M=MI^n=I^mM\)
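A minimal sketch of the element-wise definition, assuming numpy is available; the explicit loops follow \(c_{ij}=\sum_r a_{ir}b_{rj}\) and are compared against numpy's built-in product:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # 3 x 2

# c_ij = sum_r a_ir * b_rj
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        for r in range(3):
            C[i, j] += A[i, r] * B[r, j]

assert np.allclose(C, A @ B)
```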

Matrix addition

\(2\) matrices of the same size, that is with identical dimensions, can be added together.

If we have \(2\) matrices \(A^{mn}\) and \(B^{mn}\)

\(C=A+B\)

\(c_{ij}=a_{ij}+b_{ij}\)

The empty matrix (all \(0\)s) of the same size is the identity element for matrix addition.

Scalar multiplication

A matrix can be multiplied by a scalar. Every element in the matrix is multiplied by the scalar.

\(B=cA\)

\(b_{ij}=ca_{ij}\)

The scalar \(1\) is the identity scalar.

Transposition and conjugation

Transposition

A matrix of dimensions \(m\times n\) can be transformed into a matrix of dimensions \(n\times m\) by transposition.

\(B=A^T\)

\(b_{ij}=a_{ji}\)

Transpose rules

\((M^T)^T=M\)

\((AB)^T=B^TA^T\)

\((A+B)^T=A^T+B^T\)

\((zM)^T=zM^T\)

Conjugation

With conjugation we take the complex conjugate of each element.

\(B=\overline A\)

\(b_{ij}=\overline a_{ij}\)

Conjugation rules

\(\overline {(\overline A)}=A\)

\(\overline {(AB)}=(\overline A)( \overline B)\)

\(\overline {(A+B)}=\overline A+\overline B\)

\(\overline {(zM)}=\overline z \overline M\)

Conjugate transposition

Like transposition, but we also take the complex conjugate of each element.

\(B=A^*\)

\(b_{ij}=\bar{a_{ji}}\)

Alternatively, and particularly in physics, the following symbol is often used instead.

\((A^*)^T=A^\dagger\)

Matrix rank

Rank function

The rank of a matrix is the dimension of the span of its component columns.

\(rank(M)=\dim (span(m_1,m_2,...,m_n))\)

Column and row span

The dimension of the span of the rows (the row rank) is the same as the dimension of the span of the columns.
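A minimal sketch, assuming numpy is available; the third column is deliberately the sum of the first two, so the column span (and the row span) is \(2\)-dimensional:

```python
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

print(np.linalg.matrix_rank(M))      # 2
print(np.linalg.matrix_rank(M.T))    # 2: row rank equals column rank
```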

Types of matrices

Empty matrix

A matrix where every element is \(0\). There is one for each possible matrix size.

\(A=\begin{bmatrix}0& 0&...&0\\0 & 0&...&0\\...&...&...&...\\0&0&...&0\end{bmatrix}\)

Triangular matrix

A matrix where \(a_{ij}=0\) whenever \(i > j\) is upper triangular.

A matrix where \(a_{ij}=0\) whenever \(i < j\) is lower triangular.

A matrix which is either upper or lower triangular is a triangular matrix.

Symmetric matrices

All symmetric matrices are square.

The identity matrix is an example.

A matrix where \(a_{ij}=a_{ji}\) is symmetric.

Diagonal matrix

A matrix where \(a_{ij}=0\) where \(i\ne j\) is diagonal.

All diagonal matrices are symmetric.

The identity matrix is an example.

Multilinear forms and determinants

Multilinear forms

Determinants

A matrix can only be inverted if it can be created from a combination of elementary row operations.

How can we identify if a matrix is invertible? We want to create a scalar from the matrix which tells us if this is possible. We call this scalar the determinant.

For a matrix \(A\) we label the determinant \(|A|\), or \(\det A\)

We propose \(|A|=0\) when the matrix is not invertible.

So how can we identify the function we need to apply to the matrix?

Linear dependence and the determinant

We know that linear dependence results in determinants of \(0\).

We can model this as a function on the columns of the matrix.

\(\det M = \det ([M_1, ...,M_n])\)

If there is linear dependence, for example if two columns are the same, then:

\(\det ([M_1,...,M_i,...,M_i,...,M_n])=0\)

Similarly, if there is a column of \(0\) then the determinant is \(0\).

\(\det ([M_1,...,0,...,M_n])=0\)

Multilinearity

The determinant is linear in each column separately: scaling a column scales the determinant, and adding vectors in one column splits the determinant into a sum. This multilinearity is how we identify the determinant of less simple matrices.

Recall that the columns are linearly dependent when there is some \(\mathbf c \ne \mathbf 0\) such that:

\(\sum_i c_i\mathbf M_i=\mathbf 0\)

Or:

\(M\mathbf c=\mathbf 0\)

Rule 1: Columns of matrices can be the input to a multilinear form

A matrix can be shown in terms of its columns. \(A=[v_1,...,v_n]\)

\(\det A=\det [v_1,...,v_n]\)

\(\det A=\sum_{k_1=1}^n...\sum_{k_n=1}^n\prod_{i=1}^na_{ik_i}\det ([e_{k_1},...,e_{k_n}])\)

Multiplying a column by a constant multiplies the determinant by the same amount

If we multiply a single column \(v_i\) by a constant \(c\):

\(\det A=\det [v_1,...,v_i,...,v_n]\)

\(\det A'=\det [v_1,...,cv_i,...,v_n]\)

By linearity in that column:

\(\det A'=c\det [v_1,...,v_i,...,v_n]\)

\(\det A'=c\det A\)

As a result, multiplying a column by \(0\) makes the determinant \(0\).

A matrix with a column of \(0\) therefore has determinant \(0\)

Rule 2: A matrix with equal columns has a determinant of \(0\).

\(A=[a_1,...,a_i,...,a_i,...,a_n]\)

\(\det A=\det ([a_1,...,a_i,...,a_i,...,a_n])\)

We know from the column-swapping rule below that swapping two columns reverses the sign of the determinant. Swapping the two equal columns returns the same matrix, so the determinant must also be unchanged.

\(\det A=-\det A\)

\(\det A=0\)

Linear dependence

If a column is a linear combination of other columns, then the matrix cannot be inverted.

\(A=[v_1,...,\sum_{j\ne i}^{n}c_jv_j,...,v_n]\)

\(\det A=\det ([v_1,...,\sum_{j\ne i}^{n}c_jv_j,...,v_n])\)

\(\det A=\sum_{j\ne i}^{n}c_j\det ([v_1,...,v_j,...,v_n])\)

\(\det A=\sum_{j\ne i}^{n}c_j\det ([v_1,...,v_j,...,v_j,...,v_n])\)

As there is a repeating vector:

\(\det A=0\)

Swapping columns multiplies the determinant by \(-1\)

\(A=[a_1,...,a_i+a_j,...,a_i+a_j,...,a_n]\)

As this matrix has two equal columns, we know:

\(\det A=0\)

Expanding the determinant by linearity in each of the two repeated columns:

\(\det A=\det ([a_1,...,a_i,...,a_i,...,a_n])+\det([a_1,...,a_i,...,a_j,...,a_n])+\det([a_1,...,a_j,...,a_i,...,a_n])+\det([a_1,...,a_j,...,a_j,...,a_n])\)

So:

\(\det ([a_1,...,a_i,...,a_i,...,a_n])+\det ([a_1,...,a_i,...,a_j,...,a_n])+\det([a_1,...,a_j,...,a_i,...,a_n])+\det([a_1,...,a_j,...,a_j,...,a_n])=0\)

As \(2\) of these determinants have equal columns, they are equal to \(0\).

\(\det ([a_1,...,a_i,...,a_j,...,a_n])+\det ([a_1,...,a_j,...,a_i,...,a_n])=0\)

\(\det ([a_1,...,a_i,...,a_j,...,a_n])=-\det ([a_1,...,a_j,...,a_i,...,a_n])\)

Calculating the determinant

We have

\(\det A=\sum_{k_1=1}^n...\sum_{k_n=1}^n\prod_{i=1}^na_{ik_i}\det ([e_{k_1},...,e_{k_n}])\)

So what is the value of the determinant here?

We know that the determinant of the identity matrix is \(1\).

We know that the determinant of a matrix with identical columns is \(0\).

We know that swapping columns multiplies the determinant by \(-1\).

Therefore the determinants where the values of \(k\) are not all unique are \(0\).

The determinants of the others are either \(-1\) or \(1\), depending on whether an even or odd number of swaps is required to restore the identity matrix.

This is also expressed as the Leibniz formula.

\(\det A = \sum_{\sigma \in S_n}sgn (\sigma )\prod_{i=1}^na_{i,\sigma_i}\)
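A minimal sketch of the Leibniz formula, assuming numpy is available; the permutation sign is obtained by counting inversions, and the cost is exponential in \(n\), so this is for illustration only:

```python
import itertools
import numpy as np

def leibniz_det(A):
    """Determinant via the Leibniz formula (illustration only)."""
    n = A.shape[0]
    total = 0.0
    for sigma in itertools.permutations(range(n)):
        # sign of the permutation: +1 for an even number of inversions, -1 for odd
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= A[i, sigma[i]]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(leibniz_det(A), np.linalg.det(A))
```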

Properties of determinants

Identity

\(\det I = 1\)

Multiplication

\(\det (AB)=\det A \det B\)

Inverse

\(\det (M^{-1})=\dfrac{1}{\det M}\)

We know this because:

\(\det (MM^{-1})=\det I = 1\)

\(\det M \det M^{-1}=1\)

\(\det (M^{-1})=\dfrac{1}{\det M}\)

Conjugate transpose

\(\det (M^*)=\overline {\det M}\)

Transpose

\(\det (M^T)=\det M\)

Addition

The determinant is not additive in general:

\(\det (A+B)\ne\det A + \det B\)

Scalar multiplication

For an \(n\times n\) matrix \(M\):

\(\det (cM) = c^n\det M\)

Determinants and eigenvalues

The determinant is equal to the product of the eigenvalues.
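A minimal numerical check, assuming numpy is available (a real matrix may have complex eigenvalues, but they come in conjugate pairs, so the product matches the real determinant):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))

eigenvalues = np.linalg.eigvals(M)
assert np.isclose(np.prod(eigenvalues), np.linalg.det(M))
```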

Determinants of 2x2 and 3x3 matrices

The determinant of a 2x2 matrix

\(M=\begin{bmatrix}a & b\\c & d\end{bmatrix}\)

\(|M|=ad-bc\)

The determinant of a 3x3 matrix

\(M=\begin{bmatrix}a & b & c\\d & e & f\\g & h & i\end{bmatrix}\)

\(|M|=aei+bfg+cdh-ceg-dbi-afh\)
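A minimal check of the \(3\times 3\) formula against numpy's determinant, assuming numpy is available; the example matrix is arbitrary:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
a, b, c = M[0]
d, e, f = M[1]
g, h, i = M[2]

formula = a*e*i + b*f*g + c*d*h - c*e*g - d*b*i - a*f*h
assert np.isclose(formula, np.linalg.det(M))
```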