# Parallelisation : Writing a linear matrix algorithm for Map-Reduce

There is more than one way to skin matrix multiplication; if you think about it for a moment, you can probably come up with four or five distinct approaches. In this post, we look at another, easier way of multiplying two matrices, and attempt to build a MapReduce version of the algorithm. Before we dive into the code itself, we'll quickly review the actual algebraic process we're trying to parallelise.
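The full post isn't reproduced here, but the idea of a "linear" MapReduce multiplication can be sketched roughly as follows: the map phase emits one partial product per compatible pair of entries, keyed by the output cell, and the reduce phase sums the partial products for each cell. This is a local simulation, not a real Hadoop job; all names are illustrative, and it assumes matrices represented as plain lists of lists.

```python
from collections import defaultdict

def map_phase(A, B):
    """Emit ((i, j), A[i][k] * B[k][j]) for every partial product.

    In a real MapReduce job each emission would be a key/value pair
    sent to the shuffle; here we just yield them from a generator.
    """
    n, m = len(A), len(A[0])   # A is n x m
    p = len(B[0])              # B is m x p
    for i in range(n):
        for k in range(m):
            for j in range(p):
                yield (i, j), A[i][k] * B[k][j]

def reduce_phase(pairs):
    """Sum all partial products that share the same output cell (i, j)."""
    sums = defaultdict(int)
    for key, value in pairs:
        sums[key] += value
    return sums

def matmul(A, B):
    """Assemble the reduced sums back into an n x p result matrix."""
    sums = reduce_phase(map_phase(A, B))
    n, p = len(A), len(B[0])
    return [[sums[(i, j)] for j in range(p)] for i in range(n)]
```

Because each partial product depends on only one entry of A and one of B, the map phase parallelises trivially, and the reduce phase is a per-key sum, exactly the shape MapReduce is built for.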

# Tag Archives: snail-mapreduce

# Parallelisation : Refactoring a recursive block matrix algorithm for Map-Reduce

I’ve recently become interested in the parallelisation of algorithms in general, and specifically in the kind of algorithm design that is compatible with the MapReduce model of programming. Given that I’ll probably be dealing with larger quantities of data in the near future, it behooves me to start thinking about parallelisation actively. In this post, I will look at the matrix multiplication algorithm which uses block decomposition to recursively compute the product of two matrices. I have spoken of the general idea here; you may want to read that first for the linear algebra groundwork before continuing with this post.
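The refactored algorithm itself lives in the full post, but the block decomposition it builds on can be sketched as follows: split each matrix into four quadrants, compute the eight quadrant products recursively, and reassemble. This is a minimal pure-Python sketch, assuming square matrices whose dimension is a power of two; the helper names are my own, not from the post.

```python
def split(M):
    """Split a square matrix into its four quadrants (M11, M12, M21, M22)."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def add(X, Y):
    """Entrywise sum of two equally sized matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def join(C11, C12, C21, C22):
    """Stitch four quadrants back into one matrix."""
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

def block_multiply(A, B):
    """Recursive block multiplication; base case is a 1x1 product."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # The eight recursive sub-products below are independent of one
    # another, which is what makes this decomposition a candidate for
    # distribution as separate map tasks.
    C11 = add(block_multiply(A11, B11), block_multiply(A12, B21))
    C12 = add(block_multiply(A11, B12), block_multiply(A12, B22))
    C21 = add(block_multiply(A21, B11), block_multiply(A22, B21))
    C22 = add(block_multiply(A21, B12), block_multiply(A22, B22))
    return join(C11, C12, C21, C22)
```

In a MapReduce refactoring, the eight independent sub-products become the unit of work handed to mappers, and the quadrant additions become the reduce step.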