\(m_{11}x+m_{12}y+m_{13}z=v_1\)

\(m_{21}x+m_{22}y+m_{23}z=v_2\)

\(m_{31}x+m_{32}y+m_{33}z=v_3\)

We can write the above as:

\(\mathbf{M}\mathbf{x}=\mathbf{v}\)

What are the properties of \(\mathbf{M}\) and \(\mathbf{v}\)?

The map \(\mathbf{x}\mapsto\mathbf{M}\mathbf{x}\) is linear: it respects addition and scalar multiplication.

The rank of a matrix is the dimension of the span of its component columns.

\(\operatorname{rank}(M)=\dim\operatorname{span}(m_1,m_2,...,m_n)\), where \(m_1,...,m_n\) are the columns of \(M\).

The dimension of the span of the rows is the same as the dimension of the span of the columns: the row rank equals the column rank.
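As a quick numerical sketch (assuming numpy is available), we can check that the row rank and column rank of a rank-deficient matrix agree:

```python
import numpy as np

# A 3x3 matrix whose third row is the sum of the first two,
# so its rank is 2 rather than 3.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

column_rank = np.linalg.matrix_rank(M)
row_rank = np.linalg.matrix_rank(M.T)  # rank of the transpose = row rank

print(column_rank, row_rank)  # 2 2
```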

The zero matrix is a matrix where every element is \(0\). There is one for each pair of dimensions.

\(A=\begin{bmatrix}0& 0&...&0\\0 & 0&...&0\\...&...&...&...\\0&0&...&0\end{bmatrix}\)

A matrix where \(a_{ij}=0\) whenever \(i > j\) is upper triangular.

A matrix where \(a_{ij}=0\) whenever \(i < j\) is lower triangular.

A matrix which is either upper or lower triangular is a triangular matrix.

A matrix where \(a_{ij}=a_{ji}\) is symmetric.

All symmetric matrices are square.

The identity matrix is an example.

A matrix where \(a_{ij}=0\) where \(i\ne j\) is diagonal.

All diagonal matrices are symmetric.

The identity matrix is an example.

An invertible matrix is one whose effect can be undone: multiplying it by another matrix recovers the identity.

That is, if we have matrix \(A\), there exists matrix \(A^{-1}\) such that \(AA^{-1}=I\).

Consider a linear map on a vector space.

\(Ax=y\)

If \(A\) is invertible we can write:

\(A^{-1}Ax=A^{-1}y\)

\(x=A^{-1}y\)

If we set \(y=\mathbf 0\) then:

\(x=\mathbf 0\)

So if there is a non-zero vector \(x\) such that:

\(Ax=\mathbf 0\) then \(A\) is not invertible.
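A minimal numerical illustration of this (assuming numpy): a matrix that sends a non-zero vector to zero has determinant \(0\) and is singular.

```python
import numpy as np

# A maps the non-zero vector x = (1, -1) to the zero vector,
# so by the argument above A cannot be invertible.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
x = np.array([1.0, -1.0])

print(A @ x)             # the zero vector
print(np.linalg.det(A))  # 0.0: a zero determinant confirms A is singular
```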

That is, for a matrix \(B\), a left inverse \(B_L^{-1}\) and a right inverse \(B_R^{-1}\) are defined such that, for any matrix \(A\) of compatible dimensions:

\((AB)B_R^{-1}=A\)

\(B_L^{-1}(BA)=A\)

If both exist, the left and right inverses are equal.

Note that if the left inverse exists then:

\(B_L^{-1}B=I\)

And if the right inverse exists:

\(BB_R^{-1}=I\)

Let's take the first and multiply on the right by \(B_R^{-1}\):

\(B_L^{-1}BB_R^{-1}=B_R^{-1}\)

But grouping the other way:

\(B_L^{-1}(BB_R^{-1})=B_L^{-1}I=B_L^{-1}\)

So:

\(B_L^{-1}=B_R^{-1}\)

Now consider the inverse of a product \(AB\):

\((AB)(AB)^{-1}=I\)

\(A^{-1}AB(AB)^{-1}=A^{-1}\)

\(B(AB)^{-1}=A^{-1}\)

\(B^{-1}B(AB)^{-1}=B^{-1}A^{-1}\)

\((AB)^{-1}=B^{-1}A^{-1}\)
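We can sanity-check this identity numerically (a sketch assuming numpy; random matrices are invertible with probability \(1\)):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order

print(np.allclose(lhs, rhs))  # True
```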

For a diagonal matrix \(D\) with non-zero diagonal entries, the inverse is also diagonal:

\(DD^{-1}=I\)

\(D_{ii}D_{ii}^{-1}=1\)

\(D_{ii}^{-1}=\dfrac{1}{D_{ii}}\)
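So inverting a diagonal matrix amounts to taking element-wise reciprocals of the diagonal. A sketch with numpy:

```python
import numpy as np

d = np.array([2.0, 4.0, 5.0])  # non-zero diagonal entries
D = np.diag(d)
D_inv = np.diag(1.0 / d)       # invert by taking element-wise reciprocals

print(np.allclose(D @ D_inv, np.eye(3)))  # True
```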

Some operations to a matrix can be reversed to arrive at the original matrix. Trivially, multiplying by the identity matrix is reversible.

Similarly, some operations are not reversible, such as multiplying by the zero matrix.

All matrix operations which can be reversed are combinations of \(3\) elementary row operations. These are:

Swapping rows

\(T_{12}=\begin{bmatrix}0& 1&...&0\\1 & 0&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)

Multiplying a row by a non-zero scalar

\(D_2(m)=\begin{bmatrix}1& 0&...&0\\0 & m&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)

Adding a multiple of one row to another row

\(L_{12}(m)=\begin{bmatrix}1& 0&...&0\\m & 1&...&0\\...&...&...&...\\0&0&...&1\end{bmatrix}\)
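Each elementary operation corresponds to left-multiplying by one of these matrices. A small sketch with numpy, using \(2\times2\) versions of the matrices above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

T12 = np.array([[0.0, 1.0],
                [1.0, 0.0]])  # swap rows 1 and 2
D2 = np.array([[1.0, 0.0],
               [0.0, 5.0]])   # multiply row 2 by 5
L12 = np.array([[1.0, 0.0],
                [2.0, 1.0]])  # add 2 * row 1 to row 2

print(T12 @ A)  # [[3. 4.] [1. 2.]]
print(D2 @ A)   # [[ 1.  2.] [15. 20.]]
print(L12 @ A)  # [[1. 2.] [5. 8.]]
```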

Matrices can be used to solve simultaneous equations. Consider the following set of equations.

\(2x+y-z=8\)

\(-3x-y+2z=-11\)

\(-2x+y+2z=-3\)

We can write this in matrix form.

\(Ax=y\)

\(A=\begin{bmatrix}2 & 1&-1\\-3 & -1&2\\-2&1&2\end{bmatrix}\)

\(x=\begin{bmatrix}x \\y \\z\end{bmatrix}\)

\(y=\begin{bmatrix}8 \\-11 \\-3\end{bmatrix}\)
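This system can be solved directly with a library routine (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
y = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, y)
print(x)  # [ 2.  3. -1.], i.e. x = 2, y = 3, z = -1
```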

Consider a form for summarising these equations. This is the augmented matrix.

\((A|y)=\begin{bmatrix}2 & 1&-1&|&8\\-3 & -1&2&|&-11\\-2&1&2&|&-3\end{bmatrix}\)

We can take this and recover our original \(A\) and \(y\).

However, we can also apply operations to this augmented matrix which preserve the solutions to the set of equations. These are the three elementary row operations: swapping rows, multiplying a row by a non-zero scalar, and adding a multiple of one row to another.

Undertaking combinations of these can make it easier to solve the equation. In particular, if we can arrive at the form:

\((A|y)=\begin{bmatrix}1 & 0&0&|&a\\0 & 1&0&|&b\\0&0&1&|&c\end{bmatrix}\)

The solutions for \(x,y,z\) are \(a,b,c\).

We first aim for:

\((A|y)=\begin{bmatrix}a_{11} & a_{12}&a_{13}&|&a\\0 & a_{22}&a_{23}&|&b\\0&0&a_{33}&|&c\end{bmatrix}\)

If this cannot be reached there is no single solution; there may be infinitely many solutions or none.

Once we have the triangular form, we can easily solve.

\((A|y)=\begin{bmatrix}a_{11} & a_{12}&a_{13}&|&a\\0 & a_{22}&a_{23}&|&b\\0&0&a_{33}&|&c\end{bmatrix}\)

\((A|y)=\begin{bmatrix}1 & 0&0&|&a\\0 & 1&0&|&b\\0&0&1&|&c\end{bmatrix}\)

This process is back substitution (or forward substitution if the matrix is triangular the other way).
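The whole procedure can be sketched in code, assuming numpy. The function name `solve_gaussian` is my own, and this minimal version does no pivoting, so it assumes no zero ever lands on the diagonal:

```python
import numpy as np

def solve_gaussian(A, y):
    """Solve Ax = y by forward elimination to triangular form,
    then back substitution. No pivoting: assumes non-zero pivots."""
    M = np.hstack([A.astype(float), y.reshape(-1, 1).astype(float)])
    n = len(y)
    # Forward elimination: zero out everything below the diagonal.
    for i in range(n):
        for j in range(i + 1, n):
            M[j] -= (M[j, i] / M[i, i]) * M[i]
    # Back substitution: solve from the last row upwards.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
y = np.array([8.0, -11.0, -3.0])
print(solve_gaussian(A, y))  # [ 2.  3. -1.]
```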

We can think of the inverse of a matrix as one which takes a series of reversible operations and applies them to a matrix, arriving at the identity matrix.

That is, only the three elementary row operations, and combinations of them, can transform a matrix in a way which can be reversed. As such, all reversible matrices are combinations of the identity matrix and a series of elementary row operations. The inverse matrix is then that series of row operations, applied in reverse.

We can find an inverse by undertaking Gaussian elimination. Each step done on the matrix is also done to the identity matrix. The end result is the inverted matrix.

Instead of:

\((A|y)=\begin{bmatrix}2 & 1&-1&|&-8\\-3 & -1&2&|&-11\\-2&1&2&|&-3\end{bmatrix}\)

Take:

\((A|I)=\begin{bmatrix}2 & 1&-1&|&1&0&0\\-3 & -1&2&|&0&1&0\\-2&1&2&|&0&0&1\end{bmatrix}\)

When we solve this we get:

\((I|A^{-1})=\begin{bmatrix}1 & 0&0&|&4&3&-1\\0& 1&0&|&-2&-2&1\\0&0&1&|&5&4&-1\end{bmatrix}\)
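We can verify the result numerically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])

A_inv = np.linalg.inv(A)
print(A_inv)  # [[ 4.  3. -1.] [-2. -2.  1.] [ 5.  4. -1.]]
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```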

We define a function, the commutator, between two objects \(a\) and \(b\) as:

\([a,b]=ab-ba\)

For numbers, \(ab-ba=0\), however for matrices this is not generally true.

Consider two matrices which share an eigenvector \(v\).

\(Av=\lambda_A v\)

\(Bv=\lambda_B v\)

Now consider:

\(ABv=A\lambda_B v\)

\(ABv=\lambda_A\lambda_B v\)

\(BAv=B\lambda_A v=\lambda_A\lambda_B v\)

If the matrices share a full basis of eigenvectors, then the matrices commute, and \(AB=BA\).
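A sketch with numpy (the helper name `commutator` is my own): most matrices do not commute, but diagonal matrices, which share the standard basis as eigenvectors, do.

```python
import numpy as np

def commutator(a, b):
    """[a, b] = ab - ba."""
    return a @ b - b @ a

# Two matrices that do not commute...
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
print(commutator(A, B))  # non-zero

# ...and two diagonal matrices, which share the standard basis
# as eigenvectors and therefore commute.
C = np.diag([1.0, 2.0])
D = np.diag([3.0, 4.0])
print(commutator(C, D))  # the zero matrix
```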

\(A=A^{mn}\)

\(B=B^{no}\)

\(C=C^{mo}=AB\)

\(c_{ij}=\sum_{r=1}^na_{ir}b_{rj}\)
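The sum above can be transcribed directly into code (a sketch assuming numpy; the function name `matmul` is my own) and compared against the library's multiplication:

```python
import numpy as np

def matmul(a, b):
    """Multiply an m x n matrix by an n x o matrix using
    c_ij = sum over r of a_ir * b_rj."""
    m, n = a.shape
    n2, o = b.shape
    assert n == n2, "inner dimensions must match"
    c = np.zeros((m, o))
    for i in range(m):
        for j in range(o):
            for r in range(n):
                c[i, j] += a[i, r] * b[r, j]
    return c

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 x 2
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])    # 2 x 3
print(np.allclose(matmul(A, B), A @ B))  # True
```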

Matrix multiplication depends on the order of the operands. Unlike for real numbers, in general:

\(AB\ne BA\)

Matrix multiplication is not defined unless the condition above on dimensions is met.

A matrix multiplied by the identity matrix returns the original matrix.

For matrix \(M=M^{mn}\)

\(M=MI^n=I^mM\)

\(2\) matrices of the same size, that is with identical dimensions, can be added together.

If we have \(2\) matrices \(A^{mn}\) and \(B^{mn}\)

\(C=A+B\)

\(c_{ij}=a_{ij}+b_{ij}\)

The zero matrix of the same size as the other matrix is the identity element for addition.

A matrix can be multiplied by a scalar. Every element in the matrix is multiplied by this.

\(B=cA\)

\(b_{ij}=ca_{ij}\)

The scalar \(1\) is the identity scalar.

For any two bases, there is a unique linear mapping taking the element vectors of one to those of the other.

A matrix of dimensions \(m\times n\) can be transformed into a matrix of dimensions \(n\times m\) by transposition.

\(B=A^T\)

\(b_{ij}=a_{ji}\)

\((M^T)^T=M\)

\((AB)^T=B^TA^T\)

\((A+B)^T=A^T+B^T\)

\((zM)^T=zM^T\)
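These transposition identities can be checked numerically (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
z = 2.5

print(np.allclose((A @ B).T, B.T @ A.T))  # (AB)^T = B^T A^T
print(np.allclose((A + C).T, A.T + C.T))  # (A+C)^T = A^T + C^T
print(np.allclose((z * A).T, z * A.T))    # (zA)^T = z A^T
```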

With conjugation we take the complex conjugate of each element.

\(B=\overline A\)

\(b_{ij}=\overline a_{ij}\)

\(\overline {(\overline A)}=A\)

\(\overline {(AB)}=(\overline A)( \overline B)\)

\(\overline {(A+B)}=\overline A+\overline B\)

\(\overline {(zM)}=\overline z \overline M\)

Like transposition, but with conjugation: the conjugate transpose.

\(B=A^*\)

\(b_{ij}=\overline{a_{ji}}\)

Alternatively, and particularly in physics, the dagger symbol is often used instead.

\(A^\dagger=A^*=(\overline A)^T\)
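A sketch of the conjugate transpose with numpy:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j]])

A_dagger = A.conj().T  # conjugate transpose: conjugate each element, then transpose
print(A_dagger)

# Applying the conjugate transpose twice recovers the original matrix.
print(np.allclose(A_dagger.conj().T, A))  # True
```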