# Matrix Theory: An essential proof for eigenvector computations

I’ve avoided proofs unless absolutely necessary, but the relation between the same eigenvector expressed in two different bases is important.
Given that $A_S$ is the linear transformation matrix in the standard basis S, and $A_B$ is its counterpart in basis B, we can write the relation between them as:

$A_B = C^{-1}A_SC$

$A_S = CA_BC^{-1}$

where C is the similarity transformation. We’ve seen this relation already; check here if you’ve forgotten about it.
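To make this concrete, here is a minimal numerical sketch in plain Python. The basis B (the columns of C: $b_1 = [1, 1]$ and $b_2 = [1, -1]$) and the diagonal $A_S$ are illustrative choices of mine, not taken from the post; the point is only that the two forms of the relation round-trip consistently.

```python
# A minimal sketch: verify A_S = C * A_B * C^{-1} for a 2x2 example.
# The basis B (columns of C) is an arbitrary choice for illustration.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A_S = [[2, 0], [0, 5]]     # transformation in the standard basis
C   = [[1, 1], [1, -1]]    # columns are the basis vectors of B

A_B = matmul(inv2(C), matmul(A_S, C))        # A_B = C^{-1} A_S C
A_S_back = matmul(C, matmul(A_B, inv2(C)))   # A_S = C A_B C^{-1}

# The round trip recovers A_S exactly (up to floating-point noise).
assert all(abs(A_S_back[i][j] - A_S[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

Note that $A_B$ is generally not diagonal even when $A_S$ is: the same transformation simply looks different in a different basis.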

# Matrix Theory: Diagonalisation and Eigenvector Computation

$A = \left( \begin{array}{cc} 2 & 0 \\ 0 & 5 \end{array} \right)$

The operation it performed on the basis vectors of the standard basis S was one of scaling, and scaling only. If a vector is merely scaled when operated on by a linear transformation matrix, that vector is an eigenvector of the matrix, and the scale factor is the corresponding eigenvalue. This is just the definition of an eigenvector, which I restate below:

$Ax = \lambda x$
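As a quick sanity check, here is a small pure-Python sketch using the diagonal matrix from above: the standard basis vectors are eigenvectors of $A$ with eigenvalues 2 and 5, because multiplication by $A$ only scales them.

```python
# A minimal sketch: confirm that A x = lambda x holds for the
# standard basis vectors of the diagonal matrix A above.

def matvec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

A = [[2, 0], [0, 5]]

# [1, 0] is scaled by 2; [0, 1] is scaled by 5 -- both are eigenvectors.
assert matvec(A, [1, 0]) == [2, 0]   # lambda = 2
assert matvec(A, [0, 1]) == [0, 5]   # lambda = 5

# A generic vector like [3, 4] is NOT an eigenvector: its two
# components are scaled by different amounts, so its direction changes.
assert matvec(A, [3, 4]) == [6, 20]
```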

# Matrix Theory: Basis change and Similarity transformations

### Basis Change

Understand that there is nothing uniquely special about the standard basis vectors [1, 0] and [0, 1]: all 2D vectors may be represented as linear combinations of these vectors. Thus, the vector [7, 24] may be written as:

$\left( \begin{array}{c} 7 \\ 24 \end{array} \right) = 7 \left( \begin{array}{c} 1 \\ 0 \end{array} \right) + 24 \left( \begin{array}{c} 0 \\ 1 \end{array} \right)$
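The same vector can be decomposed in any basis, with different coordinates. Here is a small sketch; the alternative basis $b_1 = [1, 1]$, $b_2 = [1, -1]$ is my own illustrative choice, not one used in the post.

```python
# A minimal sketch: [7, 24] as a linear combination of the standard
# basis, and then of an arbitrary alternative basis b1, b2.

def combine(c1, b1, c2, b2):
    """Form the linear combination c1*b1 + c2*b2 of two 2-vectors."""
    return [c1 * b1[0] + c2 * b2[0],
            c1 * b1[1] + c2 * b2[1]]

# Standard basis: the coordinates are just the vector's components.
assert combine(7, [1, 0], 24, [0, 1]) == [7, 24]

# Alternative basis b1 = [1, 1], b2 = [1, -1]: solving
# c1 + c2 = 7 and c1 - c2 = 24 gives c1 = 15.5, c2 = -8.5.
assert combine(15.5, [1, 1], -8.5, [1, -1]) == [7.0, 24.0]
```

Same vector, different coordinates: which numbers describe it depends entirely on the basis you measure it against.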

# Matrix Theory: Linear transformations and Basis vectors

### Symmetric Matrices

A symmetric matrix looks like this:

$A = \left( \begin{array}{cccc} a & f & n & w \\ f & b & h & e \\ n & h & c & i \\ w & e & i & d \end{array} \right)$

Notice how the values are reflected across the diagonal a-b-c-d: the entry in row i, column j equals the entry in row j, column i, which is to say $A = A^T$. This holds true for any symmetric matrix.
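In code, the symmetry condition is simply that the matrix equals its own transpose. A minimal check, with made-up numeric values standing in for the letters above:

```python
# A minimal sketch: a matrix is symmetric iff M[i][j] == M[j][i]
# for all i, j -- i.e. it equals its own transpose.

def is_symmetric(M):
    """Return True if the square matrix M equals its transpose."""
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

S = [[1, 2, 3],
     [2, 4, 5],
     [3, 5, 6]]       # values reflected across the diagonal 1-4-6

assert is_symmetric(S)
assert not is_symmetric([[1, 2], [3, 4]])   # 2 != 3, so not symmetric
```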

# Eigenvector algorithms for symmetric matrices: Introduction

My main aim in this series of posts is to describe the kernel — or the essential idea — behind some of the simple (and not-so-simple) eigenvector algorithms. If you’re manipulating or mining datasets, chances are you’ll be dealing with matrices a lot. In fact, if you’re starting out with matrix operations of any sort, I highly recommend following Professor Gilbert Strang’s lectures on Linear Algebra, particularly if your math is a bit rusty.

I have several reasons for writing this series. My chief motivation behind trying to understand these algorithms has stemmed from trying to do PCA (Principal Components Analysis) on a medium-sized dataset (20,000 samples, 56-dimensional). I felt (and still feel) pretty uncomfortable about calling LAPACK routines and walking away with the output without trying to understand what goes on inside the code that I just called. Of course, one cannot really dive into the thick of things without understanding some of the basics: in my case, after watching a couple of the lectures, I began to wish that I had better mathematics teachers in school.