Eigenvalues, Eigenvectors, decomposition and operations

Eigenvalues and eigenvectors

Which vectors remain unchanged in direction after a transformation?

That is, for a matrix \(A\), which vectors \(v\) are simply scaled by some scalar \(\lambda \) when \(A\) is applied to them?

\(Av=\lambda v\)


The spectrum of a matrix is the set of its eigenvalues.

Eigenvectors as a basis

If the eigenvectors span the space, we can write any vector as a linear combination of them:

\(v=\sum_i \alpha_i | \lambda_i\rangle \)

Under what circumstances do the eigenvectors span the entire space?

Calculating eigenvalues and eigenvectors using the characteristic polynomial

The characteristic polynomial of a matrix is a polynomial whose roots are the eigenvalues of the matrix.

We know from the definition of eigenvalues and eigenvectors that:

\(Av=\lambda v\)

Note that

\(Av-\lambda v=0\)

\(Av-\lambda Iv=0\)

\((A-\lambda I)v=0\)

Trivially we see that \(v=0\) is a solution.

Otherwise matrix \(A-\lambda I\) must be non-invertible. That is:

\(\det (A-\lambda I)=0\)

Calculating eigenvalues

For example

\(A=\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\)

\(A-\lambda I=\begin{bmatrix}2-\lambda &1\\1 & 2-\lambda \end{bmatrix}\)

\(\det (A-\lambda I)=(2-\lambda )(2-\lambda )-1\)

Setting this to \(0\):

\((2-\lambda )(2-\lambda )-1=0\)

\(\lambda ^2-4\lambda +3=0\)

\(\lambda =1,3\)
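This route can be reproduced numerically. A minimal sketch, assuming NumPy is available: `np.poly` returns the coefficients of a square matrix's characteristic polynomial, and `np.roots` finds its roots.

```python
import numpy as np

# The example matrix from above.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial, highest power first:
# lambda^2 - 4*lambda + 3  ->  [1, -4, 3]
coeffs = np.poly(A)

# The roots of the characteristic polynomial are the eigenvalues.
eigenvalues = np.sort(np.roots(coeffs))
assert np.allclose(eigenvalues, [1.0, 3.0])
```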

Calculating eigenvectors

You can plug this into the original problem.

For example


\(\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=3\begin{bmatrix}x_1\\x_2\end{bmatrix}\)

As eigenvectors are only defined up to scale, any point on the line will do, so we set \(x_1=1\).

\(\begin{bmatrix}2&1\\1 & 2\end{bmatrix}\begin{bmatrix}1\\x_2\end{bmatrix}=\begin{bmatrix}3\\3x_2\end{bmatrix}\)

The first row gives \(2+x_2=3\), so \(x_2=1\), and the eigenvector corresponding to eigenvalue \(3\) is:

\(\begin{bmatrix}1\\1\end{bmatrix}\)
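The same answer comes straight from a library eigensolver. A small NumPy sketch (the solver returns unit-norm eigenvectors, so the result is \((1,1)\) rescaled):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors (eigenvectors are the columns of `vecs`).
vals, vecs = np.linalg.eig(A)

# Pick the eigenvector paired with the largest eigenvalue, 3.
v = vecs[:, np.argmax(vals)]

# It satisfies Av = 3v and is proportional to (1, 1).
assert np.allclose(A @ v, 3 * v)
assert np.isclose(v[0], v[1])
```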

Trace

The trace of a matrix is the sum of its diagonal components.


The trace of a matrix is equal to the sum of its eigenvalues.

Traces can be shown as a sum of inner products with the standard basis vectors: \(\operatorname{tr}(A)=\sum_i e_i^\top Ae_i\).


Properties of traces

The trace is invariant under cyclic permutation of a product: \(\operatorname{tr}(AB)=\operatorname{tr}(BA)\).


Traces of \(1\times 1\) matrices are equal to their component.


Trace trick

Any scalar is equal to the trace of the corresponding \(1\times 1\) matrix. If we want to manipulate a scalar such as \(x^\top Ax\), we can therefore use properties of the trace, for example the cyclic property:

\(x^\top Ax=\operatorname{tr}(x^\top Ax)=\operatorname{tr}(Axx^\top )\)



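The trace properties above can be verified numerically; a NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# The trace equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)

# Cyclic property: tr(AB) = tr(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Trace trick: the scalar x^T A x equals tr(A x x^T).
assert np.isclose(x @ A @ x, np.trace(A @ np.outer(x, x)))
```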


Matrix operations

Matrix powers

For a square matrix \(M\) we can calculate \(MMMM...\), or \(M^n\) where \(n\in \mathbb{N}\).

Powers of diagonal matrices

Generally, calculating a matrix to an integer power can be complicated. For diagonal matrices it is trivial.

For a diagonal matrix \(D\), the power \(M=D^n\) is also diagonal, with \(m_{ii}=d_{ii}^n\).
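For example (a NumPy sketch), raising a diagonal matrix to a power just raises each diagonal entry:

```python
import numpy as np

d = np.array([2.0, 3.0, 5.0])
D = np.diag(d)

# Repeated matrix multiplication...
direct = np.linalg.matrix_power(D, 4)

# ...agrees with powering the diagonal entries elementwise.
shortcut = np.diag(d ** 4)
assert np.allclose(direct, shortcut)
```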

Matrix exponentials

The exponential of a complex number is defined as:

\(e^x=\sum _{j=0}^\infty \dfrac{1}{j!}x^j\)

We can extend this definition to matrices.

\(e^X:=\sum _{j=0}^\infty \dfrac{1}{j!}X^j\)

The dimension of a matrix and its exponential are the same.
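A direct, if numerically naive, way to compute this is to truncate the series. A NumPy sketch, checked against the diagonal case where the exponential is just elementwise `exp` of the diagonal:

```python
import numpy as np

def expm_series(X, terms=30):
    """Matrix exponential via the truncated series sum_j X^j / j!."""
    result = np.zeros_like(X)
    term = np.eye(X.shape[0])          # the j = 0 term: X^0 / 0! = I
    for j in range(terms):
        result = result + term
        term = term @ X / (j + 1)      # X^(j+1) / (j+1)!
    return result

# For a diagonal matrix, e^D is diagonal with e^(d_ii) on the diagonal.
D = np.diag([0.0, 1.0, 2.0])
assert np.allclose(expm_series(D), np.diag(np.exp([0.0, 1.0, 2.0])))
```

Production code would normally use `scipy.linalg.expm`, which is more numerically robust than naive truncation.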

Matrix logarithms

If we have \(e^A=B\) where \(A\) and \(B\) are matrices, then we say that \(A\) is a matrix logarithm of \(B\).

That is:

\(\log B=A\)

The dimensions of a matrix and its logarithm are the same.

Matrix square roots

For a matrix \(M\), the square root \(M^{1/2}\) is a matrix \(A\) where \(AA=M\).

This does not necessarily exist.

Square roots may not be unique.

Real matrices may have no real square root.
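One constructive route, sketched in NumPy under the assumption that \(M\) is diagonalisable with non-negative eigenvalues (eigendecomposition is covered later in these notes): take the square root of each eigenvalue.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# M = Q diag(vals) Q^{-1}, so a square root is Q diag(sqrt(vals)) Q^{-1}.
vals, Q = np.linalg.eig(M)
root = Q @ np.diag(np.sqrt(vals)) @ np.linalg.inv(Q)

assert np.allclose(root @ root, M)
```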

Matrix decomposition

Similar matrices


For a diagonal matrix, the eigenvalues are the diagonal entries.

Matrices \(M\) and \(A\) are similar if there exists an invertible matrix \(P\) such that:

\(M=PAP^{-1}\)

Similar matrices \(M\) and \(A\) have the same eigenvalues. So if \(A\) is diagonal, its diagonal entries are the eigenvalues of \(M\).

Defective and diagonalisable matrices

Diagonalisable matrices and eigendecomposition

Matrix \(M\) is diagonalisable if there exists an invertible matrix \(P\) and diagonal matrix \(A\) such that:

\(M=PAP^{-1}\)
Diagonalisable matrices and powers

If these exist then we can more easily work out matrix powers: the inner pairs \(P^{-1}P\) cancel, leaving

\(M^n=(PAP^{-1})^n=PA^nP^{-1}\)

\(A^n\) is easy to calculate, as each entry on the diagonal is simply taken to the power of \(n\).
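A NumPy sketch of this shortcut, comparing \(PA^nP^{-1}\) against repeated multiplication:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Decompose M = P A P^{-1}: eigenvectors as columns of P, eigenvalues in A.
vals, P = np.linalg.eig(M)

# M^5 = P A^5 P^{-1}: only the diagonal entries need powering.
M5 = P @ np.diag(vals ** 5) @ np.linalg.inv(P)
assert np.allclose(M5, np.linalg.matrix_power(M, 5))
```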

Defective matrices

Defective matrices are those which cannot be diagonalised.

Non-singular matrices can be defective or not defective; for example, the identity matrix is non-singular and not defective.

Singular matrices can also be defective or not defective; for example, the empty matrix is singular and not defective.


Consider an eigenvector \(v\) and eigenvalue \(\lambda \) of matrix \(M\).

We known that \(Mv=\lambda v\).

If \(M\) has \(n\) linearly independent eigenvectors then we can collect all of these equations together:

\(MQ=Q\Lambda \)

Where \(Q\) has the eigenvectors as columns, and \(\Lambda \) is a diagonal matrix with the corresponding eigenvalues. We can then show that:

\(M=Q\Lambda Q^{-1}\)

This is only possible to calculate if the matrix of eigenvectors is non-singular. Otherwise the matrix is defective.

If there are linearly dependent eigenvectors then we cannot use eigen-decomposition.

Using the eigen-decomposition to invert a matrix

This can be used to invert \(M\).

We know that:

\(M^{-1}=(Q\Lambda Q^{-1})^{-1}\)

\(M^{-1}=Q\Lambda ^{-1}Q^{-1}\)

We know \(\Lambda \) can be easily inverted by taking the reciprocal of each diagonal element. We already know both \(Q\) and its inverse from the decomposition.

If any eigenvalues are \(0\) then \(\Lambda \) cannot be inverted. These are singular matrices.
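A NumPy sketch of inverting via the decomposition, checked against the direct inverse:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, Q = np.linalg.eig(M)
Qinv = np.linalg.inv(Q)

# Lambda^{-1} is just the reciprocals of the eigenvalues on the diagonal.
Minv = Q @ np.diag(1.0 / vals) @ Qinv

assert np.allclose(Minv, np.linalg.inv(M))
assert np.allclose(M @ Minv, np.eye(2))
```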

Spectral theorem for finite-dimensional vector spaces

The spectral theorem states that any Hermitian matrix \(M\) can be diagonalised by a unitary matrix \(U\):

\(M=U\Lambda U^\dagger \)

where \(\Lambda \) is a diagonal matrix of the (real) eigenvalues of \(M\).
Commutators

We define a function, the commutator, between two objects \(a\) and \(b\) as:

\([a,b]=ab-ba\)

For numbers, \(ab-ba=0\); however, for matrices this is not generally true.

Commutators and eigenvectors

Consider two matrices which share an eigenvector \(v\).

\(Av=\lambda_A v\)

\(Bv=\lambda_B v\)

Now consider:

\(ABv=A\lambda_B v\)

\(ABv=\lambda_A\lambda_B v\)

\(BAv=\lambda_A\lambda_B v\)

If the matrices share a full basis of eigenvectors, then this holds for every vector, and the matrices commute: \(AB=BA\).
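A NumPy sketch: two matrices built from the same eigenvectors commute, while a generic pair does not.

```python
import numpy as np

# Shared eigenvectors (columns of Q), different eigenvalues.
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]])
Qinv = np.linalg.inv(Q)
A = Q @ np.diag([2.0, 5.0]) @ Qinv
B = Q @ np.diag([3.0, 7.0]) @ Qinv

# The commutator AB - BA vanishes.
assert np.allclose(A @ B, B @ A)

# A generic pair of matrices does not commute.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
D = np.array([[1.0, 0.0],
              [0.0, 2.0]])
assert not np.allclose(C @ D, D @ C)
```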

Identity matrix and the Kronecker delta

The entries of the identity matrix are given by the Kronecker delta: \(I_{ij}=\delta _{ij}\), where \(\delta _{ij}=1\) if \(i=j\) and \(0\) otherwise.

Matrix addition and multiplication

Matrix multiplication

If \(A\) has dimensions \(m\times n\) and \(B\) has dimensions \(n\times p\), then the product \(C=AB\) has dimensions \(m\times p\), with entries:

\(c_{ij}=\sum _k a_{ik}b_{kj}\)
Matrix multiplication depends on the order. Unlike for real numbers, in general

\(AB\ne BA\)

Matrix multiplication is not defined unless the condition above on dimensions is met.

A matrix multiplied by the identity matrix returns the original matrix.

For matrix \(M=M^{mn}\):

\(I_mM=MI_n=M\)
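These rules in a small NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # 3 x 2

# (2 x 3)(3 x 2) -> 2 x 2: the inner dimensions must match.
assert (A @ B).shape == (2, 2)

# Reversing the order gives a different result (here even a different shape).
assert (B @ A).shape == (3, 3)

# Multiplying by an identity matrix of the right size returns A.
assert np.allclose(np.eye(2) @ A, A)
assert np.allclose(A @ np.eye(3), A)
```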
Matrix addition

\(2\) matrices of the same size, that is with identical dimensions, can be added together.

If we have \(2\) matrices \(A^{mn}\) and \(B^{mn}\), then \(C=A+B\) is also \(m\times n\), with:

\(c_{ij}=a_{ij}+b_{ij}\)
An empty matrix (all \(0\)s) of the same size as the other matrix is the identity matrix for addition.

Scalar multiplication

A matrix can be multiplied by a scalar \(c\). Every element in the matrix is multiplied by it:

\(B=cA\)

\(b_{ij}=ca_{ij}\)
The scalar \(1\) is the identity scalar.

Transposition and conjugation


A matrix \(A\) of dimensions \(m\times n\) can be transformed into a matrix \(B=A^T\) of dimensions \(n\times m\) by transposition:

\(b_{ij}=a_{ji}\)

Transpose rules

\((A^T)^T=A\)

\((AB)^T=B^TA^T\)

\((A+B)^T=A^T+B^T\)

\((zM)^T=zM^T\)

With conjugation we take the complex conjugate of each element.

\(B=\overline A\)

\(b_{ij}=\overline a_{ij}\)

Conjugation rules

\(\overline {(\overline A)}=A\)

\(\overline {(AB)}=(\overline A)( \overline B)\)

\(\overline {(A+B)}=\overline A+\overline B\)

\(\overline {(zM)}=\overline z \overline M\)
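Both sets of rules check out numerically; a NumPy sketch with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Transposing a product reverses the order; transposing a sum does not.
assert np.allclose((A @ B).T, B.T @ A.T)
assert np.allclose((A + B).T, A.T + B.T)

# Conjugation distributes over products and sums without reversing order.
assert np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B))
assert np.allclose(np.conj(A + B), np.conj(A) + np.conj(B))
```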

Conjugate transposition

Like transposition, but we additionally take the complex conjugate of each element:

\(B=\overline {A^T}\)

\(b_{ij}=\overline {a_{ji}}\)

Alternatively, and particularly in physics, the dagger symbol is often used instead:

\(A^\dagger =\overline {A^T}\)
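In NumPy the conjugate transpose is spelled out as a transpose plus a conjugation; a small sketch:

```python
import numpy as np

A = np.array([[1.0 + 2.0j, 3.0 + 0.0j],
              [0.0 + 0.0j, 4.0 - 1.0j]])

# Conjugate transpose: transpose, then conjugate each element.
A_dagger = A.conj().T

assert np.isclose(A_dagger[0, 0], 1.0 - 2.0j)   # conjugated in place
assert np.isclose(A_dagger[1, 0], 3.0)          # moved from position (0, 1)
```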
Matrix rank

Rank function

The rank of a matrix is the dimension of the span of its component columns.

\(rank (M)=\dim span(m_1,m_2,...,m_n)\)

Column and row span

The dimension of the span of the rows is the same as the dimension of the span of the columns: row rank equals column rank.
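A NumPy sketch: `np.linalg.matrix_rank` computes the rank, and transposing (swapping rows for columns) leaves it unchanged.

```python
import numpy as np

# The third column is the sum of the first two, so the rank is 2, not 3.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

assert np.linalg.matrix_rank(M) == 2

# Row rank equals column rank.
assert np.linalg.matrix_rank(M.T) == np.linalg.matrix_rank(M)
```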

Types of matrices

Empty matrix

A matrix where every element is \(0\). There is one for each dimension of matrix.

\(A=\begin{bmatrix}0& 0&...&0\\0 & 0&...&0\\...&...&...&...\\0&0&...&0\end{bmatrix}\)

Triangular matrix

A matrix where \(a_{ij}=0\) where \(i > j\) is upper triangular.

A matrix where \(a_{ij}=0\) where \(i < j\) is lower triangular.

A matrix which is either upper or lower triangular is a triangular matrix.

Symmetric matrices

A matrix where \(a_{ij}=a_{ji}\) is symmetric.

All symmetric matrices are square.

The identity matrix is an example.

Diagonal matrix

A matrix where \(a_{ij}=0\) where \(i\ne j\) is diagonal.

All diagonal matrices are symmetric.

The identity matrix is an example.