A linear form is a linear map from a vector space to a scalar from the vector space’s underlying field.

\(\hom (V, F)\)

Linear forms can be represented as matrix operators.

\(f(v)=v^TM\)

Where \(M\) has only one column.

We can expand \(f(v)\) over a basis.

We introduce \(e_i\), the standard basis vector. This is \(0\) for all entries except entry \(i\), where it is \(1\). Any vector can be written as a sum of these vectors, each multiplied by a scalar.

\(f(v)=f(\sum^m_{i=1}a_{i}e_i)\)

\(f(v)=\sum_{i=1}^mf(a_{i}e_i)\)

\(f(v)=\sum_{i=1}^ma_if(e_i)\)

\(f(v)=\sum_{i=1}^ma_iM_i\)

Where \(M_i=f(e_i)\) is the \(i\)th entry of \(M\).
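As a quick numerical sketch (numpy, with an arbitrary choice of \(M\) and \(v\)), we can check that \(f(v)=v^TM\) agrees with \(\sum_i a_i f(e_i)\):

```python
import numpy as np

M = np.array([2.0, -1.0, 3.0])   # the single column of M, chosen arbitrarily
v = np.array([1.0, 4.0, -2.0])   # an arbitrary vector with components a_i

f = lambda x: x @ M              # the linear form f(x) = x^T M

# f applied to the standard basis vector e_i picks out the entry M_i,
# so summing a_i f(e_i) rebuilds f(v)
expansion = sum(v[i] * f(np.eye(3)[i]) for i in range(3))

print(f(v), expansion)           # the same scalar
```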

The dual space \(V^*\) of vector space \(V\) is the set of all linear forms, \(\hom(V,F)\).

\(v\in V\)

\(a, b\in \hom(V,F)\); \(f, g\in F\)

\(av = f\)

\(bv = g\)

\((a\oplus b)v=f+g\)

\((a\oplus b)v=av + bv\)

So there is an addition operation we can define on two members of the dual space.

The dual space is closed under addition: given two linear forms, we can define their sum as the function whose output is the sum of their outputs.

What about scalar multiplication? The same approach works.

We define:

\((c\odot a)v=c(av)\)

The dual space forms a vector space. We can define addition and scalar multiplication on members of the dual space.

The dimension of the dual space is the same as the underlying space.

We have defined the dual space. A vector in the dual space will also have components and a basis.

\(\mathbf w=\sum_j w_j \mathbf f^j\)

So how we describe the components will depend on the choice of basis.

We choose the dual basis, the basis for \(V^*\), as the set of forms \(\mathbf f^j\) satisfying:

\(\mathbf f^j(\mathbf e_i) =\delta_i^j\)

If the basis changes, so does the dual basis.

We write the dual basis as \(e^j\)
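As a small numerical sketch (numpy, with an arbitrary invertible matrix \(B\) whose columns are the basis vectors), the dual basis can be read off as the rows of \(B^{-1}\), since row \(j\) applied to column \(i\) gives \(\delta_i^j\):

```python
import numpy as np

# Columns of B are an arbitrary basis of R^3
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Rows of B^{-1} act as the dual basis vectors
dual = np.linalg.inv(B)

print(dual @ B)   # the identity: row j applied to basis vector i gives delta_i^j
```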

A bilinear form takes two vectors and produces a scalar from the underlying field.

This is in contrast to a linear form, which only has one input.

The function is linear in both arguments. It is additive in each argument:

\(\phi (au+x, bv+y)=\phi (au,bv)+\phi (au,y)+\phi (x,bv)+\phi (x,y)\)

And scalars can be pulled out of each argument:

\(\phi (au+x, bv+y)=ab\phi (u,v)+a\phi (u,y)+b\phi (x,v)+\phi (x,y)\)

They can be represented as:

\(\phi (u,v)=u^TMv\)

Consider the form applied to two vectors: \(\phi (v_1,v_2)\).

We introduce \(e_i\), the standard basis vector. This is \(0\) for all entries except entry \(i\), where it is \(1\). Any vector can be written as a sum of these vectors, each multiplied by a scalar.

\(\phi (v_1,v_2)=\phi (\sum^m_{i=1}a_{1i}e_i,\sum^m_{i=1}a_{2i}e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\phi (a_{1k}e_k,\sum^m_{i=1}a_{2i}e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}\phi (a_{1k}e_k,a_{2i}e_i)\)

Because this is linear in scalars:

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}a_{2i}\phi (e_k,e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}a_{2i}e_k^TMe_i\)

If \(M=I\):

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}a_{2i}e_k^Te_i\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}a_{2i}\delta_i^k\)

\(\phi (v_1,v_2)=\sum^m_{i=1}a_{1i}a_{2i}\)

\(u^TMv=f\)

If the operator is \(I\) then we have the dot product.

\(v^Tu\)
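A quick check (numpy, with an arbitrary matrix \(M\)) that the double-sum expansion above reproduces \(u^TMv\) directly, and that \(M=I\) gives the dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))   # an arbitrary matrix representing the form
u = rng.standard_normal(n)
v = rng.standard_normal(n)

direct = u @ M @ v                # phi(u, v) = u^T M v

# Expand both vectors over the standard basis and use bilinearity
e = np.eye(n)
expanded = sum(u[k] * v[i] * (e[k] @ M @ e[i]) for k in range(n) for i in range(n))

print(np.isclose(direct, expanded))          # True
print(np.isclose(u @ np.eye(n) @ v, u @ v))  # with M = I this is the dot product
```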

Given a metric \(M\), two vectors \(v\) and \(u\) are orthogonal if:

\(v^TMu=0\)

For example if we have the metric \(M=I\), then two vectors are orthogonal if:

\(v^Tu=0\)

If we have a bilinear form we can write the form as:

\(u^TMv\)

After a transformation \(P\) to the vectors it is:

\((Pu)^TM(Pv)\)

\(u^TP^TMPv\)

So the value of the form will be unaffected if:

\(u^TP^TMPv=u^TMv\)

\(P^TMP=M\)

Different metrics can produce the same group. For example, multiplying the metric by a constant \(c\) gives the same condition, since \(P^T(cM)P=cM\) reduces to:

\(P^TMP=M\)

The bilinear form is:

\(u^TMv\)

The transformations which preserve this are:

\(P^TMP=M\)

If the metric is \(M=I\) then the condition is:

\(P^TP=I\)

\(P^T=P^{-1}\)

These form the orthogonal group.

We use \(O\) instead of \(P\):

\(O^T=O^{-1}\)

The orthogonal group is the rotations and reflections.
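As a check (numpy), a \(2\times 2\) rotation matrix satisfies the condition \(O^TO=I\), so its transpose is its inverse:

```python
import numpy as np

theta = 0.7                                  # an arbitrary rotation angle
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(O.T @ O, np.eye(2)))       # True: O^T O = I
print(np.allclose(O.T, np.linalg.inv(O)))    # True: O^T = O^-1
```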

The orthogonal group depends on the dimension of the vector space, and the underlying field. So we can have:

\(O(n, R)\); and

\(O(n, C)\).

\(O(n)\) means \(O(n,R)\).

They generally refer to the reals only.

The bilinear form is:

\(u^TMv\)

The transformations which preserve this are:

\(P^TMP=M\)

If the metric is:

\(M=\begin{bmatrix}-1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{bmatrix}\)

Then we have the indefinite orthogonal group \(O(3,1)\)

More generally we have \(O(n,m,F)\). Where \(n=m\) we have the split orthogonal group:

\(O(n,n,F)\)

The Lorentz group is the \(O(1,3)\) group.

We can do the usual \(3\) rotations; however there are \(3\) additional symmetries, making the Lorentz group \(6\)-dimensional.

These are the Lorentz boosts.

A symmetry has:

\(t'^2 - x'^2 - y'^2 - z'^2 = t^2 - x^2 - y^2 - z^2\)

We consider the case where we just boost on \(x\), so \(y = y'\) and \(z = z'\).

\(t'^2 - x'^2 = t^2 - x^2\)

Or with \(c\):

\(c^2t'^2 - x'^2 = c^2t^2 - x^2\)

\(s^2 = t^2 - x^2 - y^2 - z^2\)

\(s'^2 = t'^2 - x'^2 - y'^2 - z'^2\)

\(ds^2 = s'^2 - s^2\)

\(ds^2 = (t'^2 - x'^2 - y'^2 - z'^2) - (t^2 - x^2 - y^2 - z^2)\)

\(ds^2 = (t'^2 - t^2) - (x'^2 - x^2) - (y'^2 - y^2) - (z'^2 - z^2)\)

\(ds^2 = dt^2 - dx^2 - dy^2 - dz^2\)

Boost: \(s^2 = c^2t^2 - x^2 - y^2 - z^2\)

We want new \(t\) and \(x\) where the interval is the same:

\(c^2t'^2 - x'^2 - y^2 - z^2 = c^2t^2 - x^2 - y^2 -z^2\)

\(c^2t'^2 - x'^2 = c^2t^2 - x^2\)

We know that both transformations are linear (this follows from the homogeneity of spacetime: the transformation cannot depend on the choice of origin), therefore:

\(x' = Ax + Bt\)

\(t' = Cx+Dt\)

We transform to the frame where \(x' = 0\), so \(Ax + Bt = 0\).

We define \(v = \dfrac{x}{t}\)

So: \(x = vt\)

We can plug these in:

\(Avt + Bt = 0\)

\(Av + B = 0\)

\(\dfrac{B}{A} = -v\)
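As a numerical sketch (numpy, with \(c=1\) and an arbitrary velocity), the standard boost along \(x\) preserves the metric \(\eta=\mathrm{diag}(-1,1,1,1)\), i.e. it satisfies \(\Lambda^T\eta\Lambda=\eta\):

```python
import numpy as np

beta = 0.6                                  # an arbitrary velocity (c = 1)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost along x, acting on (t, x, y, z)
L = np.array([[ gamma,      -gamma*beta, 0, 0],
              [-gamma*beta,  gamma,      0, 0],
              [ 0,           0,          1, 0],
              [ 0,           0,          0, 1]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # the metric from above

print(np.allclose(L.T @ eta @ L, eta))      # True: the interval is preserved
```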

A bilinear form takes two vectors and produces a scalar from the underlying field.

The function is linear in addition in both arguments.

\(\phi (au+x, bv+y)=\phi (au,bv)+\phi (au,y)+\phi (x,bv)+\phi (x,y)\)

The function is also linear in scalar multiplication in both arguments.

\(\phi (au+x, bv+y)=ab\phi (u,v)+a\phi (u,y)+b\phi (x,v)+\phi (x,y)\)

They can be represented as:

\(\phi (u,v)=u^TMv\)

Like bilinear forms, sesquilinear are linear in addition:

\(\phi (au+x, bv+y)=\phi (au,bv)+\phi (au,y)+\phi (x,bv)+\phi (x,y)\)

Sesquilinear forms, however, are only multiplicatively linear in the second argument.

\(\phi (au+x, bv+y)=b\phi (au,v)+\phi (au,y)+b\phi (x,v)+\phi (x,y)\)

In the first argument they are “twisted” (conjugate-linear):

\(\phi (au+x, bv+y)=\bar ab\phi (u,v)+\bar a\phi (u,y)+b\phi (x,v)+\phi (x,y)\)

For the real field, \(\bar a = a\), and so the sesquilinear form is the same as the bilinear form.

We can show the sesquilinear form as \(u^*Mv\).

Consider the form applied to two vectors: \(\phi (v_1,v_2)\).

We introduce \(e_i\), the standard basis vector. This is \(0\) for all entries except entry \(i\), where it is \(1\). Any vector can be written as a sum of these vectors, each multiplied by a scalar.

\(\phi (v_1,v_2)=\phi (\sum^m_{i=1}a_{1i}e_i,\sum^m_{i=1}a_{2i}e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\phi (a_{1k}e_k,\sum^m_{i=1}a_{2i}e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}\phi (a_{1k}e_k,a_{2i}e_i)\)

Because this is conjugate-linear in the first argument and linear in the second:

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}^*a_{2i}\phi (e_k,e_i)\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}^*a_{2i}e_k^*Me_i\)

If \(M=I\):

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}^*a_{2i}e_k^*e_i\)

\(\phi (v_1,v_2)=\sum_{k=1}^m\sum^m_{i=1}a_{1k}^*a_{2i}\delta_i^k\)

\(\phi (v_1,v_2)=\sum^m_{i=1}a_{1i}^*a_{2i}\)
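A quick check (numpy, complex vectors, taking \(M=I\)) that \(u^*v\) matches the component expansion \(\sum_i a_{1i}^*a_{2i}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Sesquilinear form with M = I: conjugate the first argument
direct = np.conj(u) @ v

# Component expansion: sum of conj(a_1i) * a_2i
expanded = sum(np.conj(u[i]) * v[i] for i in range(n))

print(np.isclose(direct, expanded))   # True
```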

For bilinear forms, the transformations which preserved the metric were \(P^TMP=M\), which for \(M=I\) gave:

\(P^T=P^{-1}\)

For sesquilinear they are different:

\(u^*Mv\)

\((Pu)^*M(Pv)\)

\(u^*P^*MPv\)

So we want the matrices where:

\(P^*MP=M\)

The unitary group is where \(M=I\)

\(P^*P=I\)

\(P^*=P^{-1}\)

We refer to these using \(U\) instead of \(P\).

\(U^*=U^{-1}\)

The unitary group depends on the dimension of the vector space, and the underlying field. So we can have:

\(U(n, R)\); and

\(U(n, C)\).

For the \(U(n, R)\) we have:

\(U^*=U^{-1}\)

\(U^T=U^{-1}\)

This is the condition for the orthogonal group, and so we would instead write \(O(n)\).

As a result, \(U(n)\) refers to \(U(n,C)\).

A Hermitian matrix is a matrix where \(M=M^*\).

For matrices over the real numbers, these are the same as symmetric matrices.

\(\phi (u,v)=u^*Mv\)

\((u^*Mv)^*=v^*M^*u=v^*Mu\)

\(\phi (u,v)=\overline {\phi (v,u)}\)

\((v^*Mv)^*=v^*M^*v=v^*Mv\)

So we have:

\((v^*Mv)^*=v^*Mv\)

Since \(v^*Mv\) is equal to its own conjugate, it must be real.

If \(A\) and \(B\) are Hermitian, \(AB\) is Hermitian if and only if \(A\) and \(B\) commute.

\((AB)^*=B^*A^*=BA\)

If \(A\) and \(B\) commute, then \(BA=AB\), and so:

\((AB)^*=AB\)

Hermitian matrices have real eigenvalues.

\(Hv=\lambda v\)

\(v^*Hv=\lambda v^*v\)

If \(v\) is normalised so that \(v^*v=1\):

\(v^*Hv=\lambda \)

We showed above that \(v^*Hv\) is real, so \(\lambda \) is real.
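A check (numpy, with an arbitrary Hermitian matrix): the eigenvalues come out real:

```python
import numpy as np

# An arbitrary Hermitian matrix: H = H^*
H = np.array([[2.0,        1.0 - 1.0j],
              [1.0 + 1.0j, 3.0       ]])

print(np.allclose(H, H.conj().T))          # True: Hermitian
eigenvalues = np.linalg.eigvals(H)
print(np.allclose(eigenvalues.imag, 0.0))  # True: the eigenvalues are real
```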

Skew-Hermitian matrices, also known as anti-Hermitian matrices, are those where:

\(M^*=-M\)

Pauli matrices are \(2\times 2\) matrices which are unitary and Hermitian.

That is, \(P^*=P^{-1}\).

And \(P^*=P\).

The matrices are:

\(\sigma_1 =\begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}\)

\(\sigma_2 =\begin{bmatrix} 0&-i \\ i&0 \end{bmatrix}\)

\(\sigma_3 =\begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix}\)

The identity matrix is often considered alongside these as:

\(\sigma_0 =\begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}\)

\(\sigma_i^2 =\sigma_i\sigma_i\)

\(\sigma_i^2 =\sigma_i\sigma_i^*\)

\(\sigma_i^2 =\sigma_i\sigma_i^{-1}\)

\(\sigma_i^2 =I\)

\(\det \sigma_i =-1\)

\(Tr (\sigma_i) =0\)

As the sum of eigenvalues is the trace, and the product is the determinant, the eigenvalues are \(1\) and \(-1\).
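A check in numpy of the properties above for each Pauli matrix: Hermitian, unitary, squares to the identity, determinant \(-1\), trace \(0\), and eigenvalues \(\pm 1\):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

for s in sigma:
    assert np.allclose(s, s.conj().T)              # Hermitian
    assert np.allclose(s.conj().T @ s, np.eye(2))  # unitary
    assert np.allclose(s @ s, np.eye(2))           # squares to the identity
    assert np.isclose(np.linalg.det(s), -1)        # determinant -1
    assert np.isclose(np.trace(s), 0)              # trace 0
    assert np.allclose(sorted(np.linalg.eigvals(s).real), [-1, 1])  # eigenvalues +-1

print("all Pauli matrix checks passed")
```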

The matrix \(M\) is positive definite if for all non-zero vectors the scalar is positive.

\(v^TMv>0\)

We know that the outcome is a scalar, so:

\(v^TMv=(v^TMv)^T\)

\(v^TMv=v^TM^Tv\)

\(v^T(M-M^T)v=0\)
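In other words, only the symmetric part of \(M\) contributes to \(v^TMv\). A quick check (numpy, with an arbitrary non-symmetric \(M\)):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))            # arbitrary, not symmetric
v = rng.standard_normal(3)

sym = (M + M.T) / 2                        # symmetric part of M

print(np.isclose(v @ M @ v, v @ sym @ v))  # True: only the symmetric part matters
```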

An inner product is a sesquilinear form with a positive-definite Hermitian matrix.

\( \langle u, v \rangle =u^*Hv\)

If we are using the real field this is the same as:

\( \langle u, v \rangle =u^THv\)

Where \(H\) is now a symmetric real matrix.

\( \langle v, v \rangle =v^*Hv\)

This is always real, and positive for \(v\ne 0\).

\(\langle u, v\rangle \langle v, u\rangle=|\langle u, v\rangle|^2\)

The Cauchy–Schwarz inequality states that:

\(|\langle u,v\rangle |^2 \le \langle u, u\rangle \cdot \langle v, v\rangle \)

Consider the vectors \(u\) and \(v\). We construct a third vector \(u-\lambda v\). We know the length of any vector is non-negative. \(0\le \langle u-\lambda v, u-\lambda v\rangle\)

\(0\le \langle u, u\rangle+ \langle u, -\lambda v\rangle+\langle -\lambda v, u\rangle+ \langle -\lambda v, -\lambda v\rangle\)

\(0\le \langle u, u\rangle-\lambda \langle u, v\rangle-\bar{\lambda }{ \langle v, u\rangle }+ \lambda \bar{\lambda }\langle v, v\rangle\)

We now look for a value of \(\lambda \) to simplify this equation.

\(\lambda = \dfrac{\langle v,u \rangle}{\langle v, v\rangle}\)

\(0\le \langle u, u\rangle-\dfrac{\langle v,u \rangle\langle u, v\rangle}{\langle v, v\rangle}-\dfrac{\langle u,v \rangle \langle v, u\rangle }{\langle v, v\rangle}+ \dfrac{\langle u,v \rangle}{\langle v, v\rangle}\dfrac{\langle v,u \rangle}{\langle v, v\rangle}\langle v, v\rangle\)

\(0\le \langle u, u\rangle-\dfrac{|\langle u,v \rangle|^2}{\langle v, v\rangle}\)

Multiplying through by \(\langle v, v\rangle\) (which is positive):

\(|\langle u,v \rangle|^2\le \langle u, u\rangle\langle v, v\rangle\)
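A quick numerical check of the inequality (numpy, complex vectors, using the standard inner product \(\langle u,v\rangle=u^*v\)):

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = lambda a, b: np.conj(a) @ b      # <a, b> = a^* b

lhs = abs(inner(u, v))**2
rhs = (inner(u, u) * inner(v, v)).real   # both factors are real and non-negative

print(lhs <= rhs)                        # True: Cauchy-Schwarz holds
```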

In an inner product space, the orthogonal projection of \(v\) onto \(u\) is:

\(p_uv = \dfrac{\langle u,v\rangle}{\langle u,u\rangle}u\)

We then know that \(o=v-p_uv\) is orthogonal to \(u\).

If a set of vectors are pairwise orthogonal, they form an orthogonal set. If the set also spans the vector space, it is an orthogonal basis.

Can we form an orthogonal basis from a non-orthogonal basis? Yes, using the Gram–Schmidt process.

We have \(x_1\), \(x_2\), \(x_3\) and so on, and we want to produce orthogonal vectors \(v_1\), \(v_2\) and so on.

\(v_1 = x_1\)

\(v_2 = x_2 - p_{v_1}x_2\)

\(v_3 = x_3 - p_{v_1}x_3 - p_{v_2}x_3\)
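A minimal sketch of the Gram–Schmidt process in numpy (real vectors, no normalisation, assuming the inputs are linearly independent):

```python
import numpy as np

def proj(u, v):
    """Orthogonal projection of v onto u: p_u(v) = (<u, v> / <u, u>) u."""
    return (u @ v) / (u @ u) * u

def gram_schmidt(xs):
    """Turn a list of linearly independent vectors into an orthogonal set."""
    vs = []
    for x in xs:
        v = x - sum((proj(u, x) for u in vs), np.zeros_like(x))
        vs.append(v)
    return vs

xs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]

vs = gram_schmidt(xs)
# All pairwise inner products are (numerically) zero
print([float(vs[i] @ vs[j]) for i in range(3) for j in range(i + 1, 3)])
```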

From the invertible matrix section in the endomorphism notes:

A matrix can only be inverted if it can be reduced to the identity by a combination of elementary row operations.

How can we identify if a matrix is invertible? We want to create a scalar from the matrix which tells us if this is possible. We call this scalar the determinant.

For a matrix \(A\) we label the determinant \(|A|\), or \(\det A\)

We propose \(|A|=0\) when the matrix is not invertible.

So how can we identify the function we need to apply to the matrix?

We know that a matrix with linearly dependent columns is not invertible, so linear dependence should result in a determinant of \(0\).

We can model this as a function on the columns of the matrix.

\(\det M = \det ([M_1, ...,M_n])\)

If there is linear dependence, for example if two columns are the same, then:

\(\det ([M_1,...,M_i,...,M_i,...,M_n])=0\)

Similarly, if there is a column of \(0\) then the determinant is \(0\).

\(\det ([M_1,...,0,...,M_n])=0\)

The determinant is also linear in addition in each column: it is a multilinear form.

How can we identify the determinant of less simple matrices? We can use the multilinear form.

Linear dependence of the columns means there is some \(\mathbf c\) with:

\(\sum_i c_i\mathbf M_i=\mathbf 0\)

Where \(\mathbf c \ne \mathbf 0\)

Or:

\(M\mathbf c=\mathbf 0\)

A matrix can be shown in terms of its columns. \(A=[v_1,...,v_n]\)

\(\det A=\det [v_1,...,v_n]\)

\(\det A=\sum_{k_1=1}^n...\sum_{k_n=1}^n\prod_{i=1}^na_{ik_i}\det ([e_{k_1},...,e_{k_n}])\)

If a whole row or column is \(0\), the determinant is \(0\). To see this, consider scaling a single column by \(c\):

\(\det A=\det [v_1,...,v_i,...,v_n]\)

\(\det A'=\det [v_1,...,cv_i,...,v_n]\)

\(\det A'=c\det [v_1,...,v_i,...,v_n]\)

\(\det A'=c\det A\)

As a result, multiplying a column by \(0\) makes the determinant \(0\).

A matrix with a column of \(0\) therefore has determinant \(0\)

\(A=[a_1,...,a_i,...,a_i,...,a_n]\)

\(D(A)=D([a_1,...,a_i,...,a_i,...,a_n])\)

We know from Result 3 that swapping columns reverses the sign. Swapping the two identical columns results in the same matrix, so the determinant must be unchanged.

\(D(A)=-D(A)\)

\(D(A)=0\)

If a column is a linear combination of other columns, then the matrix cannot be inverted.

\(A=[a_1,...,\sum_{j\ne i}^{n}c_ja_j,...,a_n]\)

\(\det A=\det ([a_1,...,\sum_{j\ne i}^{n}c_ja_j,...,a_n])\)

\(\det A=\sum_{j\ne i}^{n}c_j\det ([a_1,...,a_j,...,a_n])\)

\(\det A=\sum_{j\ne i}^{n}c_j\det ([a_1,...,a_j,...,a_j,...,a_n])\)

As there is a repeating vector:

\(\det A=0\)

\(A=[a_1,...,a_i+a_j,...,a_i+a_j,...,a_n]\)

As this matrix has two identical columns, we know:

\(\det A=0\)

\(\det A=\det ([a_1,...,a_i,...,a_i,...,a_n])+\det([a_1,...,a_i,...,a_j,...,a_n])+\det([a_1,...,a_j,...,a_i,...,a_n])+\det([a_1,...,a_j,...,a_j,...,a_n])\)

So:

\(\det ([a_1,...,a_i,...,a_i,...,a_n])+\det ([a_1,...,a_i,...,a_j,...,a_n])+\det([a_1,...,a_j,...,a_i,...,a_n])+\det([a_1,...,a_j,...,a_j,...,a_n])=0\)

As two of these determinants have repeated columns, they are equal to \(0\).

\(\det ([a_1,...,a_i,...,a_j,...,a_n])+\det ([a_1,...,a_j,...,a_i,...,a_n])=0\)

\(\det ([a_1,...,a_i,...,a_j,...,a_n])=-\det ([a_1,...,a_j,...,a_i,...,a_n])\)

We have

\(\det A=\sum_{k_1=1}^n...\sum_{k_n=1}^n\prod_{i=1}^na_{ik_i}\det ([e_{k_1},...,e_{k_n}])\)

So what is the value of \(\det ([e_{k_1},...,e_{k_n}])\)?

We know that the determinant of the identity matrix is \(1\).

We know that the determinant of a matrix with identical columns is \(0\).

We know that swapping columns multiplies the determinant by \(-1\).

Therefore the determinants where the values of \(k\) are not all unique are \(0\).

The determinants of the others are either \(-1\) or \(1\) depending on how many swaps are required to restore to the identity matrix.

This gives the Leibniz formula.

\(\det A = \sum_{\sigma \in S_n}sgn (\sigma )\prod_{i=1}^na_{i,\sigma_i}\)
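A sketch of the Leibniz formula in Python (the sign of a permutation is computed from its inversion count), compared against numpy's determinant:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """+1 for an even number of inversions, -1 for an odd number."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    n = len(A)
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.random.default_rng(4).standard_normal((4, 4))
print(leibniz_det(A), np.linalg.det(A))   # the two values agree
```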

\(\det I = 1\)

\(\det (AB)=\det A \det B\)

\(\det (M^{-1})=\dfrac{1}{\det M}\)

We know this because:

\(\det (MM^{-1})=\det I = 1\)

\(\det M \det M^{-1}=1\)

\(\det (M^{-1})=\dfrac{1}{\det M}\)

\(\det (M^*)=\overline {\det M}\)

\(\det (M^T)=\det M\)

Note that the determinant is not additive: in general \(\det (A+B)\ne\det A + \det B\).

\(\det cM = c^n\det M\)

The determinant is equal to the product of the eigenvalues.

\(M=\begin{bmatrix}a & b\\c & d\end{bmatrix}\)

\(|M|=ad-bc\)

\(M=\begin{bmatrix}a & b & c\\d & e & f\\g & h & i\end{bmatrix}\)

\(|M|=aei+bfg+cdh-ceg-dbi-afh\)

The special orthogonal group, \(SO(n,F)\), is the subgroup of the orthogonal group where \(|M|=1\).

As a result it includes only the rotations, not the reflections.

\(SO(3)\) is rotations in 3d space.

\(SO(2)\) is rotations in 2d space.

The orthogonal group has determinants of \(-1\) or \(1\).

\(O^T=O^{-1}\)

\(\det (O^T)=\det (O^{-1})\)

\(\det O=\dfrac{1}{\det O}\)

\(\det O=\pm 1\)

The special unitary group, \(SU(n,F)\), is the subgroup of \(U(n,F)\) where the determinants are \(1\).

That is, \(|M|=1\)

The determinant of the unitary matrices is:

\(\det U^*=\det U^{-1}\)

\((\det U)^*=\dfrac{1}{\det U} \)

\((\det U)^*\det U = 1 \)

\(|\det U|= 1\)

The special linear group, \(SL(n,F)\), is the subgroup of \(GL(n,F)\) where the determinants are \(1\).

That is, \(|M|=1\)

Normal matrices are endomorphisms, not forms. A normal matrix is one where:

\(M^*M=MM^*\)

All real symmetric matrices are normal.

All Hermitian matrices (including the subset of real symmetric matrices) are normal.

A normal matrix is never defective: it always has a full set of eigenvectors and is diagonalisable.
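A small check (numpy): a rotation matrix is normal (it is orthogonal, hence unitary) but not Hermitian, and it still has a full set of eigenvectors over \(\mathbb C\):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Normal: R^* R = R R^*
print(np.allclose(R.conj().T @ R, R @ R.conj().T))   # True

# Not defective: the eigenvector matrix has full rank
w, V = np.linalg.eig(R)
print(np.linalg.matrix_rank(V) == 2)                 # True
```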