Matrix Theory: Linear Transformations and Basis Vectors

Symmetric Matrices

A symmetric matrix looks like this:

  A=  \left( \begin{array}{cccc}  a & f & n & w \\  f & b & h & e \\  n & h & c & i \\  w & e & i & d \end{array} \right)

Notice how the values are reflected across the main diagonal a-b-c-d; in other words, A equals its own transpose (A = A^T). This holds true for any symmetric matrix.
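(If you want to check this numerically, here is a quick sketch; I'm assuming numpy, and the values are simply made up.)

  import numpy as np

  # Arbitrary values; the point is the mirror symmetry across the diagonal.
  A = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 5.0],
                [3.0, 5.0, 6.0]])

  print(np.array_equal(A, A.T))  # True: A equals its own transpose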

Diagonal Matrices

The following is a diagonal matrix.

  D=  \left( \begin{array}{cccc}  a & 0 & 0 & 0 \\  0 & b & 0 & 0 \\  0 & 0 & c & 0 \\  0 & 0 & 0 & d \end{array} \right)

Notice how all the off-diagonal elements (elements not part of the diagonal) are zero; this holds true for any diagonal matrix.
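Numerically, such a matrix is easy to construct; here is a sketch, again assuming numpy and with arbitrary diagonal entries:

  import numpy as np

  # Build a diagonal matrix from its diagonal entries (arbitrary values).
  D = np.diag([1.0, 2.0, 3.0, 4.0])
  print(D)  # every off-diagonal element is zero by construction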
Let’s take a (short) digression and look at a widely-used way of viewing matrices – as linear transformations.

Linear Transformations


Here is a simple question: find me the matrix P which, when multiplied with the vector \overrightarrow{r, \theta} (magnitude r, angle \theta), gives me the rotated vector \overrightarrow{r, \left(\theta + \alpha\right)}.

  P.\left( \begin{array}{c}  r \cos \theta \\  r \sin \theta \end{array} \right)  =  \left( \begin{array}{c}  r \cos \left(\theta+\alpha\right) \\  r \sin \left(\theta+\alpha\right) \end{array} \right)

This is how I do it. Expand the trigonometric expressions for \cos \left(\theta+\alpha\right) and \sin \left(\theta+\alpha\right), so that you get:

  P.\left( \begin{array}{c}  r \cos \theta \\  r \sin \theta \end{array} \right)  =  \left( \begin{array}{c}  r \cos \theta . \cos \alpha - r \sin \theta . \sin \alpha \\  r \sin \theta . \cos \alpha + r \cos \theta . \sin \alpha \end{array} \right)

A little inspection (reading off the coefficients of r \cos \theta and r \sin \theta in each row) gives the answer:

  P =  \left( \begin{array}{cc}  \cos \alpha & -\sin \alpha \\  \sin \alpha & \cos \alpha \end{array} \right)

The matrix P is a linear transformation; why it is linear and what its properties are I’ll leave to the formal definitions in books. It suffices to say that you can look at a matrix as if it were a map or a function to which you could pass in a vector and get a new vector out of it.
As in the above example, all matrices have geometrical descriptions: the one above is a rotation matrix, for instance. For the purposes of this series, I shall concentrate for the moment on the type of transformation given by a diagonal matrix.
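(To convince yourself that P really does rotate vectors, here is a small numerical check; it is only a sketch, assuming numpy, with an angle and vector I picked arbitrarily.)

  import numpy as np

  alpha = np.pi / 2                  # rotate by 90 degrees (arbitrary choice)
  P = np.array([[np.cos(alpha), -np.sin(alpha)],
                [np.sin(alpha),  np.cos(alpha)]])

  v = np.array([1.0, 0.0])           # a vector pointing along the x-axis
  print(P @ v)                       # approximately [0, 1]: v rotated by alpha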

Basis Vectors

Again, let’s look at an example in 2 dimensions.
Here’s an arbitrary diagonal matrix:

  A=  \left( \begin{array}{cc}  2 & 0 \\  0 & 5 \end{array} \right)

I choose two very special vectors, the normalised x- and y-vectors:

  x_1 = \left( \begin{array}{c}  1 \\  0 \end{array} \right), \quad  x_2 = \left( \begin{array}{c}  0 \\  1 \end{array} \right)

If we multiply A with x_1 and x_2 separately, we get:

  A.x_1 = \left( \begin{array}{c}  2 \\  0 \end{array} \right)  = 2.  \left( \begin{array}{c}  1 \\  0 \end{array} \right)  \\  A.x_2 = \left( \begin{array}{c}  0 \\  5 \end{array} \right)  = 5.  \left( \begin{array}{c}  0 \\  1 \end{array} \right)

Well, nothing earth-shattering, but this gives me an opportunity to introduce the idea of basis vectors. Basis vectors are essentially vectors which are used to define a coordinate space. In 2D space, for example, we normally use the two basis vectors [1,0]^T as the x-axis and [0,1]^T as the y-axis, but there is nothing stopping us from choosing two other vectors with which to specify vectors in 2D space. Well, not entirely arbitrary vectors: they must be linearly independent (in 2D, they cannot be parallel to each other). You can find out more about the precise requirements for a well-formed basis in books, but for now it suffices to note that almost any pair of vectors picked at random will be linearly independent, and can therefore serve as basis vectors.
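To see what "using a different basis" means in practice, here is a sketch (assuming numpy; the basis vectors and the target vector are arbitrary choices of mine): expressing a vector in a new basis just means solving a small linear system for the coefficients.

  import numpy as np

  # Two linearly independent (but otherwise arbitrary) basis vectors.
  b1 = np.array([1.0, 1.0])
  b2 = np.array([-1.0, 2.0])

  v = np.array([3.0, 0.0])           # the vector we want to express

  # Solve [b1 b2] . c = v for the coordinates c of v in this basis.
  B = np.column_stack((b1, b2))
  c = np.linalg.solve(B, v)
  print(c)                           # c = [2, -1], so v = 2*b1 - 1*b2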

The other idea I want you to take away from the above example is that the diagonal matrix merely stretched (or squashed, depending upon the contents of the diagonal) the basis vectors by a factor. The basis vectors still point in their original directions: only their magnitudes have changed. If I were to denote any basis vector as x, and \lambda as the factor, then I could have written:

  A.x = \lambda .x

This is the definition of an eigenvector x for a matrix A. An eigenvector, when multiplied by a matrix, is only scaled by a number. It does not undergo rotation, projection or any other transformation. The scaling number itself is termed the eigenvalue.
Thus, in the example above, the basis vectors [1,0]^T and [0,1]^T are eigenvectors of the matrix A, with eigenvalues 2 and 5 respectively.
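If you would rather let a numerical package do the work, here is a sketch using numpy's linalg.eig (note that the order in which the eigenpairs come back is not something I'd rely on in general):

  import numpy as np

  A = np.array([[2.0, 0.0],
                [0.0, 5.0]])

  # eigvals[i] is the eigenvalue belonging to the column eigvecs[:, i].
  eigvals, eigvecs = np.linalg.eig(A)
  print(eigvals)   # [2. 5.]
  print(eigvecs)   # columns are [1, 0] and [0, 1]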

In the next installment, I shall introduce similarity transformations and show how they are a standard way of solving eigenvector problems in many numerical computing packages.