Tag Archives: matrix

Parallelisation : Writing a linear matrix algorithm for Map-Reduce

There is more than one way to skin matrix multiplication: if you think about it for a while, you can probably come up with four or five distinct approaches. In this post, we look at another, easier way of multiplying two matrices, and attempt to build a MapReduce version of the algorithm. Before we dive into the code itself, we’ll quickly review the algebraic process we’re trying to parallelise.
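To make the MapReduce framing concrete before the walkthrough, here is a minimal single-process sketch of the map/shuffle/reduce stages. This is a sketch only: it assumes the matrices arrive as sparse (row, col, value) triples, and the post’s own code may be organised differently.

```python
from collections import defaultdict

def mapreduce_matmul(A_triples, B_triples):
    """Multiply matrices given as (row, col, value) triples,
    staged the way a MapReduce job would stage it."""
    # Map: group A entries and B entries by the shared inner index k.
    by_k = defaultdict(lambda: ([], []))
    for i, k, v in A_triples:
        by_k[k][0].append((i, v))
    for k, j, v in B_triples:
        by_k[k][1].append((j, v))

    # Shuffle: emit partial products keyed by the output cell (i, j).
    partials = defaultdict(list)
    for a_entries, b_entries in by_k.values():
        for i, av in a_entries:
            for j, bv in b_entries:
                partials[(i, j)].append(av * bv)

    # Reduce: sum the partial products for each output cell.
    return {cell: sum(values) for cell, values in partials.items()}

# [[1, 2], [3, 4]] x [[5, 6], [7, 8]]
# -> {(0, 0): 19, (0, 1): 22, (1, 0): 43, (1, 1): 50}
A = [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)]
B = [(0, 0, 5), (0, 1, 6), (1, 0, 7), (1, 1, 8)]
print(mapreduce_matmul(A, B))
```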

Parallelisation : Refactoring a recursive block matrix algorithm for Map-Reduce

I’ve recently gotten interested in the parallelisation of algorithms in general, and specifically in the kind of algorithm design that fits the MapReduce model of programming. Given that I’ll probably be dealing with bigger quantities of data in the near future, it behooves me to start thinking actively about parallelisation. In this post, I will look at the matrix multiplication algorithm which uses block decomposition to recursively compute the product of two matrices. I have spoken of the general idea here; you may want to read that first for the linear algebra groundwork before continuing with this post.
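As a reference point for the refactoring, here is what the sequential block recursion looks like on a single machine. This is a sketch assuming square matrices whose size is a power of two; the MapReduce version distributes the eight sub-products across workers.

```python
import numpy as np

def block_matmul(A, B, leaf_size=2):
    """Multiply two square matrices by recursive block decomposition."""
    n = A.shape[0]
    if n <= leaf_size:
        return A @ B  # base case: small enough to multiply directly
    h = n // 2
    # Split each matrix into four quadrant blocks.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Each of the eight recursive sub-products is independent, which is
    # what a map phase can farm out; the additions form the reduce phase.
    C11 = block_matmul(A11, B11, leaf_size) + block_matmul(A12, B21, leaf_size)
    C12 = block_matmul(A11, B12, leaf_size) + block_matmul(A12, B22, leaf_size)
    C21 = block_matmul(A21, B11, leaf_size) + block_matmul(A22, B21, leaf_size)
    C22 = block_matmul(A21, B12, leaf_size) + block_matmul(A22, B22, leaf_size)
    return np.block([[C11, C12], [C21, C22]])
```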

Matrix Theory: An essential proof for eigenvector computations

I’ve avoided proofs unless absolutely necessary, but the relation between the same eigenvector expressed in two different bases is important.
Given that A_S is the matrix of the linear transformation in the standard basis S, and A_B is its counterpart in basis B, we can write the relation between them as:

A_B = C^{-1} A_S C, \qquad A_S = C A_B C^{-1}

where C is the similarity transformation. We’ve seen this relation already; check here if you’ve forgotten about it.
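The computation the proof turns on is short. If x is an eigenvector of A_S with eigenvalue \lambda, then x_B = C^{-1}x is the same vector expressed in basis B, and:

A_B x_B = (C^{-1} A_S C)(C^{-1} x) = C^{-1} A_S x = C^{-1}(\lambda x) = \lambda x_B

so A_B shares the eigenvalues of A_S, and its eigenvectors are the C^{-1}-images of the eigenvectors of A_S.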

Eigenvector algorithms for symmetric matrices: Introduction

My main aim in this series of posts is to describe the kernel — or the essential idea — behind some of the simple (and not-so-simple) eigenvector algorithms. If you’re manipulating or mining datasets, chances are you’ll be dealing with matrices a lot. In fact, if you’re starting out with matrix operations of any sort, I highly recommend following Professor Gilbert Strang’s lectures on Linear Algebra, particularly if your math is a bit rusty.

I have several reasons for writing this series. My chief motivation behind trying to understand these algorithms has stemmed from trying to do PCA (Principal Components Analysis) on a medium-sized dataset (20,000 samples, 56 dimensions). I felt (and still feel) pretty uncomfortable about calling LAPACK routines and walking away with the output without trying to understand what goes on inside the code I just called. Of course, one cannot really dive into the thick of things without understanding some of the basics: in my case, after watching a couple of the lectures, I began to wish that I had better mathematics teachers in school.
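For context, the LAPACK call in question amounts to an eigendecomposition of the covariance matrix. Here is a minimal numpy sketch, with random data standing in for the real dataset; numpy’s eigh delegates to a LAPACK symmetric eigensolver.

```python
import numpy as np

# Stand-in for the real dataset: 20000 samples, 56 dimensions.
X = np.random.randn(20000, 56)

# Centre the data and form the (symmetric, 56 x 56) covariance matrix.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)

# eigh is the LAPACK-backed symmetric eigensolver; eigenvalues ascend.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Principal components: eigenvector columns reordered by descending eigenvalue.
components = eigenvectors[:, ::-1]
```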

Guiding MapReduce-based matrix multiplications with Quadtree Segmentation

I’ve been following the Linear Algebra series of lectures from MIT’s OpenCourseWare site. In Lecture 3 (I’m at Lecture 6 now), Professor Strang enumerates 5 methods of matrix multiplication. Two of them offered insights I wish my school teachers had given me, but it was the fifth method that got me thinking.
The method is really a meta-method: a way of breaking down the multiplication of large matrices in a recursive fashion. To demonstrate the idea, here’s a simple multiplication between two 2×2 matrices.
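Written out, that product is:

\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{pmatrix}

The point of the meta-method is that the same four formulas hold when each entry is itself a quadrant-sized block of a larger matrix, so each block product can be computed recursively, and independently of the others.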