Solving systems of linear equations

A system of \(m\) linear equations in \(n\) unknowns has the form:

\(a_{11}x_1+a_{12}x_2+...+a_{1n}x_n=v_1\)

\(a_{21}x_1+a_{22}x_2+...+a_{2n}x_n=v_2\)

\(...\)

\(a_{m1}x_1+a_{m2}x_2+...+a_{mn}x_n=v_m\)






Matrix and vector notation

We can write the above as:

\(\mathbf{M}\mathbf{x}=\mathbf{v}\)

where \(\mathbf{M}\) is the matrix of coefficients, \(\mathbf{x}\) is the vector of unknowns, and \(\mathbf{v}\) is the vector of constants.

What are the properties of \(\mathbf{M}\) and \(\mathbf{v}\)?

They are linear under addition and scalar multiplication: \(\mathbf{M}(a+b)=\mathbf{M}a+\mathbf{M}b\) and \(\mathbf{M}(ca)=c\mathbf{M}a\).


Matrix rank

Rank function

The rank of a matrix is the dimension of the span of its component columns.

\(rank(M)=\dim (span(m_1,m_2,...,m_n))\)

Column and row span

The dimension of the span of the rows equals the dimension of the span of the columns: row rank equals column rank.
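
We can check this numerically with NumPy (the matrix here is an illustrative example, not from the notes):

```python
import numpy as np

# A 3x3 matrix whose third column is the sum of the first two,
# so the columns span only a 2-dimensional space.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

col_rank = np.linalg.matrix_rank(M)    # dimension of the column span
row_rank = np.linalg.matrix_rank(M.T)  # dimension of the row span

print(col_rank, row_rank)  # → 2 2
```

The ranks agree even though the column span (a subspace of column vectors) and row span (a subspace of row vectors) are different spaces.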

Types of matrices

Zero matrix

A matrix where every element is \(0\) is the zero matrix. There is one for each dimension of matrix.

\(A=\begin{bmatrix}0& 0&...&0\\0 & 0&...&0\\...&...&...&...\\0&0&...&0\end{bmatrix}\)

Triangular matrix

A matrix where \(a_{ij}=0\) where \(i > j\) is upper triangular.

A matrix where \(a_{ij}=0\) where \(i < j\) is lower triangular.

A matrix which is either upper or lower triangular is a triangular matrix.

Symmetric matrices

A matrix where \(a_{ij}=a_{ji}\) is symmetric.

All symmetric matrices are square.

The identity matrix is an example.

Diagonal matrix

A matrix where \(a_{ij}=0\) where \(i\ne j\) is diagonal.

All diagonal matrices are symmetric.

The identity matrix is an example.


Inverse matrices

A matrix is invertible if there is another matrix which undoes its action, so that multiplying by the matrix and then by its inverse recovers what we started with.

That is, if we have matrix \(A\), there exists matrix \(A^{-1}\) such that \(AA^{-1}=A^{-1}A=I\).

Consider a linear map on a vector space.


If \(A\) is invertible we can write:

\(Ax=y\)

\(x=A^{-1}y\)

If we set \(y=\mathbf 0\) then:

\(x=A^{-1}\mathbf 0=\mathbf 0\)

So if there is a non-zero vector \(x\) such that:

\(Ax=\mathbf 0\) then \(A\) is not invertible.
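
We can see this with a small example in NumPy (the matrix is illustrative, not from the notes):

```python
import numpy as np

# Columns are linearly dependent (column 2 is twice column 1),
# so there is a non-zero x with Ax = 0 and A cannot be inverted.
A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
x = np.array([2.0, -1.0])  # non-zero, but A @ x = 0

print(A @ x)             # → [0. 0.]
print(np.linalg.det(A))  # zero (up to rounding), confirming A is singular
```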

Left and right inverses

That is, for a matrix \(B\), the left inverse \(B_L^{-1}\) and the right inverse \(B_R^{-1}\), where they exist, are defined such that:

\(B_L^{-1}B=I\)

\(BB_R^{-1}=I\)

Left and right inversions are equal

Note that if the left inverse exists then:

\(B_L^{-1}B=I\)

And if the right inverse exists:

\(BB_R^{-1}=I\)

Let’s take the first and multiply both sides on the right by \(B_R^{-1}\):

\(B_L^{-1}BB_R^{-1}=B_R^{-1}\)

\(B_L^{-1}(BB_R^{-1})=B_R^{-1}\)

\(B_L^{-1}=B_R^{-1}\)

So where both exist, the left and right inverses are equal.

Inversion of products

If \(A\) and \(B\) are both invertible, then:

\((AB)^{-1}=B^{-1}A^{-1}\)

This is because:

\((AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AA^{-1}=I\)
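
The product rule \((AB)^{-1}=B^{-1}A^{-1}\) can be checked numerically (the matrices here are arbitrary well-conditioned examples):

```python
import numpy as np

rng = np.random.default_rng(0)
# Adding 3I keeps the random matrices comfortably invertible.
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order

print(np.allclose(lhs, rhs))  # → True
```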

Inversion of a diagonal matrix

For a diagonal matrix \(D\) with diagonal entries \(d_1,d_2,...,d_n\), all non-zero, the inverse is the diagonal matrix of reciprocals:

\(D^{-1}=\begin{bmatrix}\dfrac{1}{d_1}& 0&...&0\\0 & \dfrac{1}{d_2}&...&0\\...&...&...&...\\0&0&...&\dfrac{1}{d_n}\end{bmatrix}\)
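
A quick numerical check that inverting a diagonal matrix just inverts each diagonal entry (example values are illustrative):

```python
import numpy as np

d = np.array([2.0, 4.0, 5.0])  # all non-zero, so D is invertible
D = np.diag(d)

D_inv = np.diag(1.0 / d)  # reciprocal of each diagonal entry

print(np.allclose(D @ D_inv, np.eye(3)))  # → True
```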

Degenerate (singular) matrices

A square matrix with no inverse is degenerate, or singular. From the above, a matrix \(A\) is singular exactly when there is a non-zero vector \(x\) with \(Ax=\mathbf 0\).

Elementary row operations

Some operations on a matrix can be reversed to arrive at the original matrix. Trivially, multiplying by the identity matrix is reversible.

Similarly, some operations are not reversible, such as multiplying by the zero matrix.

All reversible matrix operations are combinations of \(3\) elementary row operations. These are:

Swapping rows

\(T_{12}=\begin{bmatrix}0& 1&...&0\\1 & 0&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)

Multiplying a row by a non-zero scalar

\(D_2(m)=\begin{bmatrix}1& 0&...&0\\0 & m&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)

Adding a multiple of one row to another row

\(L_{12}(m)=\begin{bmatrix}1& 0&...&0\\m & 1&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)
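
Applying these elementary matrices on the left performs the corresponding row operation. A small \(2\times 2\) sketch (example values are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

T12 = np.array([[0.0, 1.0],
                [1.0, 0.0]])  # swap rows 1 and 2
D2 = np.array([[1.0, 0.0],
               [0.0, 5.0]])   # multiply row 2 by 5
L12 = np.array([[1.0, 0.0],
                [2.0, 1.0]])  # add 2 * row 1 to row 2

print(T12 @ A)  # → [[3. 4.] [1. 2.]]
print(D2 @ A)   # → [[ 1.  2.] [15. 20.]]
print(L12 @ A)  # → [[1. 2.] [5. 8.]]
```

Each of these matrices is itself invertible: swap again, multiply by \(\frac{1}{5}\), or subtract \(2\) times row 1, respectively.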

Gaussian elimination

Simultaneous equations

Matrices can be used to solve simultaneous equations. Consider the following set of equations.

  • \(2x+y-z=8\)

  • \(-3x-y+2z=-11\)

  • \(-2x+y+2z=-3\)

We can write this in matrix form as \(Ax=y\), where:
\(A=\begin{bmatrix}2 & 1&-1\\-3 & -1&2\\-2&1&2\end{bmatrix}\)

\(x=\begin{bmatrix}x \\y \\z\end{bmatrix}\)

\(y=\begin{bmatrix}8 \\-11 \\-3\end{bmatrix}\)
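
This system can be solved directly with NumPy, whose solver performs Gaussian elimination (LU factorisation) internally:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
y = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, y)

print(x)  # → [ 2.  3. -1.]
```

So \(x=2\), \(y=3\), \(z=-1\), which can be checked by substituting back into the three equations.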

Augmented matrix

Consider a form for summarising these equations. This is the augmented matrix.

\((A|y)=\begin{bmatrix}2 & 1&-1&|&8\\-3 & -1&2&|&-11\\-2&1&2&|&-3\end{bmatrix}\)

We can take this and recover our original \(A\) and \(y\).

However we can also do things to this augmented matrix which preserve solutions to the set of equations. These are:

  • Swapping rows

  • Multiplying a row by a non-zero scalar

  • Adding a multiple of one row to another row

Undertaking combinations of these can make it easier to solve the equation. In particular, if we can arrive at the form:

\((A|y)=\begin{bmatrix}1 & 0&0&|&a\\0 & 1&0&|&b\\0&0&1&|&c\end{bmatrix}\)

The solutions for \(x,y,z\) are \(a,b,c\).

Echelon / triangular form

We first aim for:

\((A|y)=\begin{bmatrix}a_{11} & a_{12}&a_{13}&|&a\\0 & a_{22}&a_{23}&|&b\\0&0&a_{33}&|&c\end{bmatrix}\)

If this cannot be reached there is no single solution. There may be infinite or no solutions.


Once we have the triangular form, we can easily solve.

\((A|y)=\begin{bmatrix}a_{11} & a_{12}&a_{13}&|&a\\0 & a_{22}&a_{23}&|&b\\0&0&a_{33}&|&c\end{bmatrix}\)

Starting from the bottom row, we solve for each variable in turn and substitute it into the rows above, arriving at:

\((A|y)=\begin{bmatrix}1 & 0&0&|&a\\0 & 1&0&|&b\\0&0&1&|&c\end{bmatrix}\)

This process is back substitution (or forward substitution if the matrix is triangular the other way).
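
Back substitution can be sketched in a few lines. The triangular matrix below is the result of eliminating the example system above (values assumed from that worked example):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for an upper-triangular U, working from the last row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i minus the already-known terms) / diagonal entry
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Triangular form of the earlier system 2x+y-z=8, -3x-y+2z=-11, -2x+y+2z=-3
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 0.5,  0.5],
              [0.0, 0.0, -1.0]])
b = np.array([8.0, 1.0, 1.0])

print(back_substitute(U, b))  # → [ 2.  3. -1.]
```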

Matrix inversion

We can think of the inverse of a matrix as one which takes a series of reversible operations and applies them to a matrix, arriving at the identity matrix.

That is, only the three elementary row operations, and combinations of them, can transform a matrix in a way which can be reversed. As such, all reversible matrices are combinations of the identity matrix and a series of elementary row operations. The inverse matrix is then that series of row operations, in reverse.

We can find an inverse by undertaking Gaussian elimination. Each step done on the matrix is also done to the identity matrix; once the original matrix has been reduced to the identity, the identity has become the inverse.

Instead of:

\((A|y)=\begin{bmatrix}2 & 1&-1&|&8\\-3 & -1&2&|&-11\\-2&1&2&|&-3\end{bmatrix}\)

We use:

\((A|I)=\begin{bmatrix}2 & 1&-1&|&1&0&0\\-3 & -1&2&|&0&1&0\\-2&1&2&|&0&0&1\end{bmatrix}\)

When we solve this we get:

\((I|A^{-1})=\begin{bmatrix}1 & 0&0&|&4&3&-1\\0& 1&0&|&-2&-2&1\\0&0&1&|&5&4&-1\end{bmatrix}\)
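
The result of the Gauss-Jordan process can be verified with NumPy's built-in inverse:

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])

A_inv = np.linalg.inv(A)

# Multiplying back should recover the identity.
print(np.allclose(A @ A_inv, np.eye(3)))  # → True
```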



Commutators

We define a function, the commutator, between two objects \(a\) and \(b\) as:

\([a,b]=ab-ba\)
For numbers, \(ab-ba=0\), however for matrices this is not generally true.

Commutators and eigenvectors

Consider two matrices which share an eigenvector \(v\).

\(Av=\lambda_A v\)

\(Bv=\lambda_B v\)

Now consider:

\(ABv=A\lambda_B v\)

\(ABv=\lambda_A\lambda_B v\)

\(BAv=\lambda_A Bv=\lambda_A\lambda_B v\)

If the matrices share a full basis of eigenvectors, then \(ABv=BAv\) for every vector \(v\), so the matrices commute and \(AB=BA\).
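
A small numerical illustration (example matrices chosen for the purpose): two diagonal matrices share the standard basis vectors as eigenvectors, so their commutator vanishes, while generic matrices do not commute.

```python
import numpy as np

# Diagonal matrices share the standard basis as eigenvectors.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 6.0])
print(np.allclose(A @ B - B @ A, 0))  # → True

# Generic matrices need not commute.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
D = np.array([[0.0, 0.0],
              [1.0, 0.0]])
print(C @ D - D @ C)  # → [[ 1.  0.] [ 0. -1.]]
```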

Identity matrix and the Kronecker delta

The elements of the identity matrix are given by the Kronecker delta:

\(I_{ij}=\delta_{ij}=\begin{cases}1 & i=j\\0 & i\ne j\end{cases}\)

Matrix addition and multiplication

Matrix multiplication

Two matrices \(A^{mn}\) and \(B^{np}\) can be multiplied when the number of columns of \(A\) equals the number of rows of \(B\). The product \(C=AB\) has dimensions \(m*p\), with elements:

\(c_{ij}=\sum_k a_{ik}b_{kj}\)

Matrix multiplication depends on the order. Unlike for real numbers, in general

\(AB\ne BA\)

Matrix multiplication is not defined unless the condition above on dimensions is met.
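
The dimension rule and the element-wise definition can be checked with NumPy (example shapes are illustrative):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # 2x3
B = np.arange(12.0).reshape(3, 4)  # 3x4

C = A @ B  # defined because inner dimensions match: (2x3)(3x4) -> 2x4
print(C.shape)  # → (2, 4)

# Entry (i, j) is the sum over k of A[i, k] * B[k, j].
i, j = 1, 2
print(C[i, j] == sum(A[i, k] * B[k, j] for k in range(3)))  # → True
```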

A matrix multiplied by the identity matrix returns the original matrix.

For matrix \(M=M^{mn}\):

\(I_mM=MI_n=M\)
Matrix addition

\(2\) matrices of the same size, that is with identical dimensions, can be added together.

If we have \(2\) matrices \(A^{mn}\) and \(B^{mn}\), the sum \(C=A+B\) has elements:

\(c_{ij}=a_{ij}+b_{ij}\)

The zero matrix of the same size as the other matrix is the identity matrix for addition.

Scalar multiplication

A matrix can be multiplied by a scalar. Every element in the matrix is multiplied by it. For \(B=cA\):

\(b_{ij}=ca_{ij}\)

The scalar \(1\) is the identity scalar.

Basis of an endomorphism

Changing the basis

For any two bases, there is a unique linear mapping taking the element vectors of one basis to those of the other.

Transposition and conjugation


A matrix of dimensions \(m*n\) can be transformed into a matrix of dimensions \(n*m\) by transposition. For \(B=A^T\):

\(b_{ij}=a_{ji}\)

Transpose rules

\((A^T)^T=A\)

\((AB)^T=B^TA^T\)

\((A+B)^T=A^T+B^T\)

\((zM)^T=zM^T\)
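
The transpose rules, in particular the reversal of order for products, can be checked numerically (example matrices are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(np.allclose((A @ B).T, B.T @ A.T))  # → True (order reverses)
print(np.allclose((A + B).T, A.T + B.T))  # → True
print(np.allclose(A.T.T, A))              # → True
```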


With conjugation we take the complex conjugate of each element.

\(B=\overline A\)

\(b_{ij}=\overline a_{ij}\)

Conjugation rules

\(\overline {(\overline A)}=A\)

\(\overline {(AB)}=(\overline A)( \overline B)\)

\(\overline {(A+B)}=\overline A+\overline B\)

\(\overline {(zM)}=\overline z \overline M\)

Conjugate transposition

Like transposition, but with conjugation. For \(B=\overline{(A^T)}\):

\(b_{ij}=\overline a_{ji}\)

Alternatively, and particularly in physics, the dagger symbol is often used instead:

\(A^\dagger=\overline{(A^T)}\)
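
In NumPy the conjugate transpose is the composition of `.conj()` and `.T` (the complex matrix below is an illustrative example):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

A_dagger = A.conj().T  # conjugate transpose (Hermitian adjoint)

# A @ A_dagger is always equal to its own conjugate transpose (Hermitian).
H = A @ A_dagger
print(np.allclose(H.conj().T, H))  # → True
```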