Linear maps

Homomorphisms of vector spaces

Homomorphisms between vector spaces

Homomorphisms are maps between algebraic structures that preserve the underlying structure.

The set of homomorphisms between a vector space \(V\) and a vector space \(W\) is written:

\(\hom (V, W)\)

A homomorphism between vector spaces must preserve the additive group structure of the vector space.

\(f(u+v)=f(u)+f(v)\)

The homomorphism must also preserve scalar multiplication.

\(f(\alpha v)=\alpha f(v)\)

A linear map (or linear function) is a map that preserves addition and scalar multiplication.

That is, if the function \(f\) is linear, then:

\(f(au+bv)=af(u)+bf(v)\)
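
As a quick numerical check, here is a minimal sketch (all values arbitrary) showing that the map \(u\mapsto Mu\) given by a fixed matrix \(M\) satisfies this property:

```python
import numpy as np

# The map f(u) = M @ u for a fixed matrix M is linear.
M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # a 3x2 matrix

def f(u):
    return M @ u

u = np.array([1.0, -2.0])
v = np.array([0.5, 3.0])
a, b = 2.0, -1.5

# f(a*u + b*v) equals a*f(u) + b*f(v)
assert np.allclose(f(a * u + b * v), a * f(u) + b * f(v))
```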

Alternative names for homomorphisms

Vector space homomorphisms are also called linear maps or linear functions.

Homomorphisms form a vector space

Homomorphisms between vector spaces can be added pointwise, \((f+g)(v)=f(v)+g(v)\), and scalars act on them by \((\alpha f)(v)=\alpha f(v)\).

Because scalars act on morphisms in this way, the morphisms between vector spaces themselves form a vector space.

Dimensions of homomorphisms

We can identify the dimensionality of this new vector space from the dimensions of the original vector spaces.

\(\dim (\hom(V, W))=\dim V \dim W\)
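
For finite-dimensional spaces this is concrete: once bases are chosen, a homomorphism from \(V\) to \(W\) is a \(\dim W\times\dim V\) matrix, and matrices add and scale entrywise. A sketch with arbitrarily chosen dimensions:

```python
import numpy as np

dim_V, dim_W = 2, 3

# Homomorphisms from R^2 to R^3 are 3x2 matrices.
f = np.arange(6, dtype=float).reshape(dim_W, dim_V)
g = np.ones((dim_W, dim_V))

# They form a vector space: sums and scalar multiples
# are again 3x2 matrices.
h = 2.0 * f + g

# The space of 3x2 matrices has 3 * 2 = 6 independent entries,
# matching dim(hom(V, W)) = dim V * dim W.
print(h.size)  # 6
```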

The pseudo-inverse

The inverse \(M^{-1}\) of a square matrix \(M\), when it exists, is defined by:

\(MM^{-1}=I\)

\(M^{-1}M=I\)

From these it follows that:

\(MM^{-1}M=M\)

\(M^{-1}MM^{-1}=M^{-1}\)
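
A quick numerical check of these identities (the matrix is an arbitrary invertible example):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 1.0]])  # an invertible 2x2 matrix
M_inv = np.linalg.inv(M)

# The defining identities of the inverse.
assert np.allclose(M @ M_inv, np.eye(2))
assert np.allclose(M_inv @ M, np.eye(2))

# The derived identities.
assert np.allclose(M @ M_inv @ M, M)
assert np.allclose(M_inv @ M @ M_inv, M_inv)
```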

The inverse of a homomorphism

In general a homomorphism has no inverse; for example, when \(\dim V \ne \dim W\) the corresponding matrix is not square and cannot be inverted.

We can, however, find a matrix \(M^+\) which satisfies:

\(MM^+M=M\)

\(M^+MM^+=M^+\)

This is the pseudo-inverse.
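
numpy exposes the Moore-Penrose pseudo-inverse as np.linalg.pinv. A sketch verifying both identities on an arbitrary non-square matrix:

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3x2: no ordinary inverse
M_plus = np.linalg.pinv(M)  # the 2x3 pseudo-inverse

# The two defining identities of the pseudo-inverse.
assert np.allclose(M @ M_plus @ M, M)
assert np.allclose(M_plus @ M @ M_plus, M_plus)
```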

Linear and affine functions

Linear maps

Linear maps can be written as:

\(v=Mu\)

These go through the origin. That is, if \(u=0\) then \(v=0\).

Affine functions

Affine functions are more general than linear maps. They can be written as:

\(v=Mu+c\)

Where \(c\) is a vector in the same space as \(v\).

Affine functions where \(c\ne 0\) are not linear maps: they are not homomorphisms, because they do not preserve the structure of the vector space.

If we multiply \(u\) by a scalar \(s\), the output does not scale by the same proportion: \(M(su)+c=sMu+c\), which differs from \(s(Mu+c)\) unless \(c=0\).
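
A sketch of this failure with arbitrary values: scaling the input of an affine map does not scale the output when \(c\ne 0\).

```python
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 2.0]])
c = np.array([1.0, 1.0])  # nonzero offset

def affine(u):
    return M @ u + c

u = np.array([1.0, 1.0])
s = 3.0

# affine(s*u) = s*M@u + c, which differs from s*affine(u) = s*M@u + s*c.
print(affine(s * u))  # [4. 7.]
print(s * affine(u))  # [6. 9.]
```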

Singular value decomposition

The singular value decomposition of \(m\times n\) matrix \(M\) is:

\(M=U\Sigma V^*\)

Where:

  • \(U\) is a unitary matrix (\(m\times m\))

  • \(\Sigma\) is a rectangular diagonal matrix with non-negative real entries (\(m\times n\))

  • \(V\) is a unitary matrix (\(n\times n\))

\(\Sigma\) is unique (given the convention that the singular values are arranged in descending order). \(U\) and \(V\) are not.

Properties

\(M^*M=V(\Sigma^*\Sigma)V^*\)

\(MM^*=U(\Sigma\Sigma^*)U^*\)

If \(M^*M\) is invertible:

\((M^*M)^{-1}=V(\Sigma^*\Sigma)^{-1}V^*\)

Calculating the SVD

The SVD is generally calculated iteratively, for example by first reducing \(M\) to bidiagonal form and then applying a variant of the QR algorithm.
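
In practice one calls a library routine rather than implementing the iteration. A sketch using numpy's np.linalg.svd (the example matrix is arbitrary):

```python
import numpy as np

M = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])  # 3x2

U, s, Vh = np.linalg.svd(M, full_matrices=True)
# U is 3x3 and Vh (= V*) is 2x2; s holds the singular values,
# i.e. the diagonal of the 3x2 matrix Sigma.
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

# Reconstruct M = U Sigma V*
assert np.allclose(U @ Sigma @ Vh, M)

# Check the property M*M = V (Sigma* Sigma) V*
V = Vh.conj().T
assert np.allclose(M.T @ M, V @ (Sigma.T @ Sigma) @ Vh)
```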

Identity matrix and the Kronecker delta

The Kronecker delta

The Kronecker delta is defined as:

\(\delta_{ij}=0\) where \(i\ne j\)

\(\delta_{ij}=1\) where \(i=j\)

We can use this to define matrices. For example, for the identity matrix:

\(I_{ij}=\delta_{ij}\)

Identity matrix

The identity matrix is a square matrix in which every element is \(0\) except on the diagonal (where \(i=j\)), where every element is \(1\). There is one identity matrix for each size of square matrix.

\(I=\begin{bmatrix}1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\end{bmatrix}\)
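
A sketch constructing the identity matrix directly from the Kronecker delta definition:

```python
import numpy as np

def delta(i, j):
    # Kronecker delta: 1 where i == j, 0 otherwise.
    return np.where(i == j, 1, 0)

# I_ij = delta_ij, built over a 4x4 grid of indices.
I = np.fromfunction(delta, (4, 4), dtype=int)
print(I)

# The same matrix via the built-in helper.
assert np.array_equal(I, np.eye(4, dtype=int))
```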