Matrix Theory: Basis Change and Similarity Transformations

Basis Change

Understand that there is nothing particularly special about the standard basis vectors [1,0] and [0,1]. Every 2D vector can be represented as a linear combination of these vectors. Thus, the vector [7,24] may be written as:

  \left( \begin{array}{c}  7 \\  24 \end{array} \right)  =  7 \cdot  \left( \begin{array}{c}  1 \\  0 \end{array} \right)  +  24 \cdot  \left( \begin{array}{c}  0 \\  1 \end{array} \right)


You may take any other reasonable (linearly independent) set of vectors and express the same [7,24] as a linear combination of those instead. The diagram below shows an example. The vector [1,1] in the standard basis can also be represented in an alternate basis {[1,1], [-1,1]}. Since, in the new basis, the vector lies along the first basis vector B1 = [1,1] itself, it can simply be expressed as:

  \left( \begin{array}{c}  1 \\  1 \end{array} \right)  =  1 \cdot  \left( \begin{array}{c}  1 \\  1 \end{array} \right)  +  0 \cdot  \left( \begin{array}{c}  -1 \\  1 \end{array} \right)
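
As a quick check that a less convenient vector works just as well, solving a·[1,1] + b·[-1,1] = [7,24] for the coefficients gives a = 15.5 and b = 8.5, so the earlier vector becomes:

  \left( \begin{array}{c}  7 \\  24 \end{array} \right)  =  15.5 \cdot  \left( \begin{array}{c}  1 \\  1 \end{array} \right)  +  8.5 \cdot  \left( \begin{array}{c}  -1 \\  1 \end{array} \right)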

In the above example, the new basis vectors happen to be orthogonal; they need not be. Let’s deduce the relation between a vector’s coordinates as expressed in two different bases. This leads us to the basis matrix, which is simply the matrix whose columns are the basis vectors of basis B. If a vector expressed in basis B is multiplied by the basis matrix of B, the result is the same vector in the standard basis.
We can use the previous example to illustrate this point. In the basis B = {[1,1], [-1,1]}, the vector VB is [1,0]. Multiplying the basis matrix by VB, we get:

  B \cdot V_B =  \left( \begin{array}{cc}  1 & -1 \\  1 & 1 \end{array} \right) \cdot  \left( \begin{array}{c}  1 \\  0 \end{array} \right)  =  \left( \begin{array}{c}  1 \\  1 \end{array} \right)

In general, for a basis matrix B, a vector VB expressed in basis B is related to the same vector VS in the standard basis as:

  B \cdot V_B = V_S

Simply multiplying the basis matrix of B with a vector expressed in basis B gives us the vector in standard basis.
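
To check this numerically, here is a minimal sketch assuming NumPy; the matrix and vectors simply restate the example above and are not part of the derivation itself:

  import numpy as np

  # Basis matrix B: its columns are the basis vectors [1, 1] and [-1, 1].
  B = np.array([[1, -1],
                [1,  1]])

  v_B = np.array([1, 0])            # the vector expressed in basis B
  v_S = B @ v_B                     # B . V_B = V_S
  print(v_S)                        # [1 1], the same vector in the standard basis

  # Going the other way: multiply by the inverse of the basis matrix
  # to express a standard-basis vector in basis B.
  print(np.linalg.inv(B) @ v_S)     # [1. 0.]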

Similarity Transformations

Let’s take basis changes further. We already know how to relate vectors expressed in two different bases. Now we’ll see how linear transformations can be expressed in different bases.
Roughly, linear transformations map vectors in n-space to other vectors in n-space; I wrote about this here.
Let a transformation TS, applied to a vector xS, produce a vector yS in standard basis S. That is:

  T_S \cdot x_S = y_S

Furthermore, let xS be represented in some other basis B by xB, and let the two be related by CB, the basis matrix of B, as:

  C_B \cdot x_B = x_S

Similarly, let yS be represented in the same basis B by yB, and let the two be related by the same basis matrix CB as:

  C_B \cdot y_B = y_S

The question we ask is a simple one: what is the transformation TB that maps xB to yB? Essentially, if we know a transformation matrix in one basis, what would that same transformation look like in a different basis?
The algebra is pretty simple. We have:

  C_B \cdot x_B = x_S \\
  C_B \cdot y_B = y_S \\
  T_S \cdot x_S = y_S

Substituting these expressions for xS and yS into the third equation, we get:

  T_S \cdot C_B \cdot x_B = C_B \cdot y_B \\
  {C_B}^{-1} \cdot T_S \cdot C_B \cdot x_B = y_B \\
  \left( {C_B}^{-1} \cdot T_S \cdot C_B \right) \cdot x_B = y_B \\
  T_B = {C_B}^{-1} \cdot T_S \cdot C_B

Now, notice that we have not constrained TS to be anything special, apart from the dimensionality requirements for matrix multiplication. This means the identity applies to any square matrix TS. TB and TS are said to be similar to each other, because they represent the same linear transformation, only expressed in terms of different bases.
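
To make this concrete, here is a small sketch, again assuming NumPy; the particular TS below is an arbitrary matrix chosen purely for illustration, not something from the text:

  import numpy as np

  C_B = np.array([[1, -1],
                  [1,  1]])          # basis matrix of B (same basis as before)
  T_S = np.array([[2, 0],
                  [0, 3]])           # an arbitrary transformation in the standard basis

  # Similarity transformation: T_B = C_B^{-1} . T_S . C_B
  T_B = np.linalg.inv(C_B) @ T_S @ C_B

  x_B = np.array([1, 0])             # a vector expressed in basis B
  x_S = C_B @ x_B                    # the same vector in the standard basis

  y_S = T_S @ x_S                    # transform in the standard basis
  y_B = T_B @ x_B                    # transform directly in basis B

  # Both routes describe the same mapping once expressed in the same basis.
  print(np.allclose(C_B @ y_B, y_S))   # True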

Why did I go to the trouble of introducing these concepts? In the next post, I shall talk about how we tie all of this together to make the calculation of eigenvectors/eigenvalues easier. Hopefully, that will also shed some light on the nature of these beasts.