# ThoughtWorks Tech Radar : Mechanical Sympathy

My take on Mechanical Sympathy (from the ThoughtWorks Technology Radar), which I presented at the Sheraton Bangalore, is based on the content below.

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.” – Henry Petroski

“Premature optimisation is the root of all evil.” – Donald Knuth

The Hibernian Express is the first transatlantic fiber-optic communications cable to be laid in 10 years, at a cost of \$300 million. The current speed record for establishing transatlantic communication is 65 milliseconds. The Hibernian Express will reduce that. By all of 6 milliseconds. If that isn’t a very expensive optimisation, I do not know what is.

In almost all cases, the code that we write is abstracted away from the internals of the hardware. This is a desirable and necessary thing. However, particular domains require applications to operate under a set of exacting constraints. Recent interest in Ultra-Low Latency Trading in the HFT arena typically requires order volumes of over 5000 orders a second with order and execution report round trip times of 100 microseconds. In such cases, tailoring your architecture to handle concurrency is no longer an idle option; it is a necessity. Even for more prosaic applications, it is not uncommon to need low latency data structures.

Usually, requiring low latency boils down to minimising time spent in concurrency management with respect to actual logic processing. Today’s programming languages provide a variety of constructs to model concurrent operations. Locks, mutexes, memory barriers, to name a few. Even at the opcode level, you may use CAS operations, which are cheaper than locks. However, to move to the upper end of the curve, to get to really low latency, many designers eschew all of these constructs.
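As a rough illustration (my sketch, not part of the radar text), here is what a CAS-based increment looks like in Java, using the compare-and-swap support exposed through java.util.concurrent.atomic:

```java
import java.util.concurrent.atomic.AtomicLong;

// A CAS-based counter: no lock is taken; instead we retry the
// compare-and-swap until no other thread has raced us between
// our read of the current value and our attempted write.
public class CasCounter {
    private final AtomicLong value = new AtomicLong();

    public long increment() {
        while (true) {
            long current = value.get();
            long next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}
```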

One good example is the Disruptor, which is a high performance concurrency framework for Java. In a series of excellent articles, Martin Thompson, one of the authors of the Disruptor framework, discusses techniques to reduce latency by write combining, writing lock free algorithms, and the Single Writer principle.
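The Single Writer principle in particular is easy to sketch (this is my illustration, not Disruptor code): if exactly one thread ever writes a value, no CAS or lock is needed at all; a volatile write is enough to publish it to readers.

```java
// Illustrative sketch of the Single Writer principle (not Disruptor code).
public class SingleWriterCounter {
    private volatile long sequence = 0;

    // Called only from the single, designated writer thread, so the
    // plain read-modify-write below cannot race with another writer.
    public void advance() {
        sequence = sequence + 1;
    }

    // Any number of reader threads may call this; the volatile read
    // guarantees they see the writer's latest published value.
    public long current() {
        return sequence;
    }
}
```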

Even if lock contention is an issue, there are other ways of reducing latency. One example comes from a team working to increase the performance of their custom JMS implementation, who wrote their own implementation of the JDK Executor interface – the Executor interface is responsible for firing off Runnable jobs, by the way. This resulted in an improvement by a factor of 10.
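For context, the Executor interface itself is tiny. A minimal implementation (purely hypothetical, not the team’s actual code) that runs each job on the calling thread, avoiding hand-off and context-switch costs for short-lived tasks, would look like this:

```java
import java.util.concurrent.Executor;

// Hypothetical example, not the team's implementation: an Executor that
// runs each Runnable on the calling thread instead of handing it off
// to a pool, trading parallelism for lower per-task latency.
public class CallerRunsExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        command.run();
    }
}
```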

One of the more explicit forms of mechanical sympathy is when you rewrite software to execute on specially designed hardware. GPUs and FPGAs are commonplace in financial computing.

Indirectly, this form of thinking also seems to have influenced the design of single-threaded servers with asynchronous I/O. In a multi-threaded server, you, or rather the server, are constantly faced with having to switch contexts between threads. With a single-threaded model, that cost disappears and latency is greatly reduced.
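A minimal sketch of the idea in Java NIO (mine, stripped of all error handling): a single thread multiplexes every connection through one Selector, so there is no thread-per-connection context switching at all.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread, one Selector, many connections: an echo server sketch.
public class SingleThreadedEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // block until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer); // echo the bytes straight back
                    }
                }
            }
        }
    }
}
```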

# ThoughtWorks Tech Radar : Agile Analytics

My take on Agile Analytics from the ThoughtWorks Technology Radar, which I presented at the Sheraton Bangalore today, is based on the following document.

Patient: Will I survive this risky operation?
Surgeon: Yes, I’m absolutely sure that you will survive the operation.
Patient: How can you be so sure?
Surgeon: Well, 9 out of 10 patients die in this operation, and yesterday my ninth patient died.

Andrew Lang, a Scottish writer and collector of folk tales, once remarked that many people use statistics as a drunken man uses lamp-posts…for support rather than illumination. Even so, we have come a long way from the 9th century, when Al Kindi used statistics to decipher encrypted messages and developed the first code-breaking algorithm in Baghdad – incidentally, he was instrumental in introducing the base 10 Indian numeral system to the Islamic and the Christian world.

1654 – Pascal and Fermat create the mathematical theory of probability,
1761 – Thomas Bayes proves Bayes’ theorem,
1948 – Shannon’s Mathematical Theory of Communication defines capacity of communication channels in terms of probabilities. Bit of a game changer, that one. All our designs of communication networks and error-correction algorithms stem from insights found in that work.
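For the curious, the central quantity of that 1948 paper fits on one line: the capacity of a channel is the maximum, over input distributions, of the mutual information between input and output, $C = \max_{p(x)} I(X;Y)$, which for a band-limited channel with Gaussian noise reduces to the familiar $C = B \log_2 (1 + S/N)$.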

Today, we realise that the pace at which we collect data far exceeds our capability to make sense of it. Data is everywhere, *literally*. The blood cells in your body trying to determine whether that molecule is an oxygen molecule or not? That is data. Your build breaking? That is data. You’re running a static analysis tool to check your test coverage? Yeah, that is data analysis.
Unfortunately, we are at that point where our opinions about whether a piece of data is relevant to analysis form far too slowly. How slowly? Well, human reflexes take milliseconds, while CPUs and GPUs function on the order of nanoseconds. That is six orders of magnitude. And that is how slow we are.

This, we cannot afford to be. In the past century, data collection was the bottleneck. Datasets larger than a few kilobytes were unheard of. Now, we are playing in gigabyte territory. When I was consulting with a telecommunications company a few months back, all calls through their network would generate upwards of 600 MB of data per day.

Volume is not the only dimension of this deluge of data. The rate of flow of incoming data gives us pause too. Think of the stock markets: imagine having to make decisions based on data which, within a few minutes (or even a few seconds), will become obsolete. Analytics is not a goal in itself. It is merely an aid to decision-making. Given the speed at which new data is collected, and the speed at which old data fades into obsolescence, we must be prepared to deal with incomplete, fast-flowing data.

Think of it as a stream from which you scoop a handful of water to determine the level of bacteria in the water. You only have limited information from a single sample, but, if you sample from multiple points upstream and downstream, you’ll eventually get a fairly accurate answer to your question.
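To make the sampling analogy concrete, here is a small sketch (my illustration, not part of the radar write-up) of reservoir sampling, which keeps a uniform fixed-size sample from a stream whose length you do not know in advance:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Reservoir sampling: maintain a uniform random sample of size k from a
// stream of unknown length; at any point, every element seen so far has
// the same probability of being in the reservoir.
public class ReservoirSampler<T> {
    private final List<T> reservoir;
    private final int k;
    private final Random random = new Random();
    private long seen = 0;

    public ReservoirSampler(int k) {
        this.k = k;
        this.reservoir = new ArrayList<>(k);
    }

    public void observe(T item) {
        seen++;
        if (reservoir.size() < k) {
            reservoir.add(item);              // fill the reservoir first
        } else {
            long j = (long) (random.nextDouble() * seen);
            if (j < k) {
                reservoir.set((int) j, item); // replace with probability k/seen
            }
        }
    }

    public List<T> sample() {
        return reservoir;
    }
}
```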

Agile Analytics conjures up images of iterations, collaborating with customers, and fast feedback when working on DW/BI projects. Indeed, this is what Ken Collier talks about in his book Agile Analytics. However, I wish to tackle a different angle. Hal Varian, Chief Economist at Google, believes that the dream job of this decade is that of a statistician. Everyone has data. It’s harder to get opinions about the data. It’s harder to, as he says, “tell a story about this data”.
We’re at a moment in the software industry where lots of things have begun to intersect with our field of interest. Statistics is one of them. Assume you are a software engineer, and have more than a peripheral interest in this field. What do you do?

Learn classical statistics. Learn Bayesian statistics. You probably hated those textbooks, so don’t use them; there are tons of more useful educational resources on the Web. Get into machine learning. Understand that machine learning is not some super-exotic field of study. I’ll risk a limb and say that Machine Learning is just More Statistics under a trendy name.
Get acquainted with a few languages and libraries: R, NumPy, Julia. In fact, I’m super-excited by Julia because it offers native building blocks for distributed computation. Read a few papers on real-world distributed systems.

I do not talk about this because you’ll be building a distributed analytics engine from scratch (though you could). Through a study of the subjects above, you will gain a much deeper understanding of why you should be analysing something, and also of how such systems are built. You’re all, regardless of your previous background, engineers.

You will also encounter a lot of literature concerning visualisation while doing this. Visualisation is one of those things we don’t really pay much attention to, until we really need it. Bars, graphs, colours: anything in lieu of numbers that can give us some visual indication of what’s going on. Health check pages, for example, are a useful way of surfacing diagnostic information about a system.

# ThoughtWorks Tech Radar : GPGPU

My take on GPGPUs from the ThoughtWorks Technology Radar, which I presented at the Sheraton Bangalore today, is based on the following document.

Seth Lloyd, a professor of mechanical engineering at MIT, once asked what the fastest computer in the universe would look like. Throwing aside concerns of fabrication, a circuit is only as fast as the speed at which you can flip a bit from 0 to 1, or vice versa. The faster the circuit, the more energy it consumes. Plugging in theoretical numbers, Lloyd came to the conclusion that a reasonably sized computer running at that speed would not look like one of the contraptions in front of you. In fact, it would become, not to put too fine a point on it, a black hole.

Well, we are somewhat far away from that realisation, but the fact is that most of us do not realise the potential that exists on each and every one of our laptops and desktops. Allow me an example. The ATI Radeon 5870 GPU, codenamed Osprey, packs enough processing units to support 30,000 threads. Not to put too fine a point on it: I could take this room, and all of you in it, replicate it 3000 times, and have each of you do a calculation, and this chunk of sand and solder would still be faster. And smaller.

We are at the point where vendors have begun to release GPUs which are not designed for graphics processing at all. I hesitate to even term such units GPUs; take, for example, the NVidia Tesla. The Tesla is not even capable of outputting to video, by default. Yet, it powers the Tianhe-1, the second fastest supercomputer on the planet. Again, take the Titan supercomputer. Running on close to 20,000 Tesla GPUs, it is a public-access supercomputer, meaning that should you feel the need to do some terascale research, you can log onto it, right now.

Today, there are multiple streams of vendor-specific GPU technology. They are all based on the venerable C99 standard, with language extensions. In the case of NVidia, it is CUDA; in the case of AMD, it is the Stream Processing SDK. However, the portable option which works across GPUs as well as CPUs, and is gaining traction, is OpenCL.

Computing on the GPU requires a programming model not unlike the well-known MapReduce model, namely stream processing. It requires you to create a computational kernel, in effect a function, which is then applied to blocks of data. There are other constraints on the kernel code that you can write. Essentially, to take advantage of stream processing, look for problems which involve high compute intensity and near-total data parallelism.
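OpenCL and CUDA kernels are written in their C dialects, but the shape of the model can be sketched in plain Java (my analogy, not GPU code): a side-effect-free kernel function applied independently to every element of a block of data, with parallel streams standing in for the GPU’s work items.

```java
import java.util.stream.IntStream;

// A CPU-side sketch of the stream-processing model: a pure "kernel"
// applied independently to each element of a data block. On a GPU,
// each index here would map to a separate hardware work item.
public class SaxpyKernel {
    // The kernel: y[i] = a * x[i] + y[i]
    static float kernel(float a, float x, float y) {
        return a * x + y;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        float a = 2.0f;
        float[] x = new float[n];
        float[] y = new float[n];

        IntStream.range(0, n)
                 .parallel() // total data parallelism: no element depends on another
                 .forEach(i -> y[i] = kernel(a, x[i], y[i]));
    }
}
```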

Bioinformatics, Computational Finance, Medical Imaging, Molecular Dynamics, Weather and Climate Forecasting…anywhere you have a ton of data waiting to be crunched, GPU computing is a perfect fit. Even Hadoop has support for CUDA at this moment. GPUs are now ubiquitous, I’d probably risk calling them commodity hardware at this point. They sit in almost all of your machines, powering your displays, rendering your games. Never have developers been privy to so much power within so little space. And it’s not even a black hole yet. So, go forth and compute!

# Two-phase commit : Indistinguishable state failure scenario

I’ll review the most interesting failure scenario for the 2PC protocol. There are excellent explanations of 2PC out there, and I won’t bother too much with the basic explanation. The focus of this post is a walkthrough of the indistinguishable state scenario, where neither a global commit, nor a global abort command can be issued.

# Parallelisation : Writing a linear matrix algorithm for Map-Reduce

There are multiple ways to skin matrix multiplication. If you begin to think about it, there are probably 4 or 5 ways in which you could approach it. In this post, we look at another, easier, way of multiplying two matrices, and attempt to build a MapReduce version of the algorithm. Before we dive into the code itself, we’ll quickly review the actual algebraic process we’re trying to parallelise.
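For orientation (this is the standard entry-wise definition, restated here rather than quoted from the post), the product C = AB is given by:

$c_{ij} = \sum_{k} a_{ik} b_{kj}$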

# Parallelisation : Refactoring a recursive block matrix algorithm for Map-Reduce

I’ve recently gotten interested in the parallelisation of algorithms in general; specifically, the type of algorithm design compatible with the MapReduce model of programming. Given that I’ll probably be dealing with bigger quantities of data in the near future, it behooves me to start thinking about parallelisation actively. In this post, I will look at the matrix multiplication algorithm which uses block decomposition to recursively compute the product of two matrices. I have spoken of the general idea here; you may want to read that first for the linear algebra groundwork, before continuing on with this post.
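As a reminder of the underlying identity (standard linear algebra, restated here for convenience), block decomposition partitions each matrix into sub-blocks and multiplies the blocks as if they were scalar entries:

$\left( \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right) \left( \begin{array}{cc} B_{11} & B_{12} \\ B_{21} & B_{22} \end{array} \right) = \left( \begin{array}{cc} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{array} \right)$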

# A Story about Data, Part 2: Abandoning the notion of normality

Continuing on with my work, I was just about to conclude that the distribution of the data was non-normal. However, I remembered reading about different transformations that can be applied to data to make it more normal. Are any such transformations likely to have any effect on the normality (or the lack thereof) of the score data?
I’d read about the Box-Cox family of transformations: essentially proceeding through powers and their inverses, in the quest to improve normality. I decided to try it, using the Jarque-Bera statistic as a measure of the normality of the data.
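For reference, the Box-Cox transform of a positive value $y$, parameterised by $\lambda$, is $y^{(\lambda)} = \frac{y^{\lambda} - 1}{\lambda}$ for $\lambda \neq 0$, and $y^{(\lambda)} = \ln y$ for $\lambda = 0$; sweeping $\lambda$ through a range of values walks through the powers and their inverses mentioned above.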

# A Story about Data, Part 1: The shape of the data

Note about the visualisations: All of the plotting was done with Basis-Processing. You’ll find its source here.

The current dataset that I’m working with comes from the education domain. Roughly, there are 29000 records; each record lists the following:

• Location of the student’s school
• Language of the student
• Student’s score before intervention
• Student’s score after intervention

# Interacting with Graphs : Mouse-over and lambda-queuer

In the previous post, I described how I’d put together a basic system to drive data selection/exploration through a queue. While generating more graphs, it became evident that the code for mouseover interaction followed a specific pattern. More importantly, using Basis to plot stuff mandated that I look at the inverse problem; namely, determining the original point from the point under the mouse pointer. In this case, it was pretty simple, since I’m only dealing with 2D points. Here’s a video of what it looks like. The example shows the exploration of a covariance matrix.

# Driving data visualisation over a queue using RabbitMQ and lambda-queuer

One of the things which has bothered me ever since I took the dive into visualisation is the problem of interactivity. The aim of interacting with a visualisation is to drill down or explore areas of the visualisation which are (or seem) interesting. Put another way, we are essentially filtering the data from a visual standpoint. In most cases, mouse interactions may be sufficient. But what if I wanted to be able to filter the data programmatically and have the result reflected in the visualisation?

One way is to simply re-run the code which generates the visualisation each time we use a different filter. This is the simplest approach and, in many cases, enough. In this case, the modification to the code is made in an offline fashion. What if we wanted to do the same, but while the program is running? This post describes my attempt at one such implementation; it is still somewhat primitive, but we’ll see where it ends up. For the purposes of demonstration, I used the Parallel Coordinates visualisation, which is available on GitHub. I’ll continue using Processing through Ruby-Processing for this description.

# A jQuery-based build radiator for Jenkins

Partly to play around with the Jenkins Remote API, and partly to kickstart a build radiator setup at my current consulting engagement, I wrote a quick radiator page which might serve as a foundation for further experiment by the team. I’ve seen several build radiators built using multiple technologies – Java, Python, etc.; I specifically chose HTML/jQuery for this effort because a modern browser seems to be the only piece of software universally available on any machine one might care to set up.
The code is up at GitHub, and here is the obligatory screenshot

It uses JSONP so as to allow cross-site GET requests, else Chrome’s Same-Origin policy complains like a spoiled brat.

# Installing the Basis gem (updated for v0.5.1+)

You can use Ruby-Processing in two ways.

Use the jruby-complete.jar that Ruby-Processing ships, the Gems-in-a-Jar approach. In this mode, all gems you install will be packaged as part of the JAR. Alternatively, use a conventional JRuby installation, with gems installed into it directly.

If you’re following the first approach, first head to the location where the jruby-complete.jar is located, for Ruby-Processing. There, do this:

java -jar jruby-complete.jar -S gem install basis-processing --user-install


Alternatively, if you’re using a conventional JRuby installation, do this:

sudo jruby -S gem install basis-processing


# A guide to using Basis (updated for v0.6.0+)

This is a quick tour of Basis. Find the source for Basis on GitHub. Installing Basis is pretty simple; just grab it as a gem for your JRuby installation. Brief notes on the installation can be found here.

UPDATE: Starting from version 0.6.0, Basis allows you to specify axis labels. Additionally, you can specify arrays of points instead of plotting points one at a time. When you do this, you can also specify a corresponding legend string, which will show up in a legend guide. See below for more details.

UPDATE: Starting from version 0.5.9, you can turn grid lines on or off. Additionally, the matrix operations implementation has been ported to use the Matrix class in Ruby’s stdlib.

UPDATE: Starting from version 0.5.8, you can customise axis labels and draw arbitrary shapes, text, or custom graphics at any point in your coordinate system. See below for more details.

UPDATE: With version 0.5.7, experimental support has been added for drawing objects which aren’t points. Interactions with such objects are currently not supported. Additional support for drawing markers/highlighting in custom bases is now in.

UPDATE: Starting from version 0.5.1, Basis has been ported to Ruby 1.9.2, because of the kd-tree library dependency. Currently, there are no plans of maintaining Basis compatibility with Ruby 1.8.x. As an aside, I personally recommend using RVM to manage the mess of Ruby/JRuby installations that you’re likely to have on your machine.

UPDATE: Basis has hit version 0.5.0 with experimental support for mouseover interactivity. More work is incoming, but the demo code below is up-to-date, for now. The code below should be the same as demo.rb on GitHub.

# Basis: Plotting arbitrary coordinate systems in Ruby-Processing

One of the first things I realised while working on visualisations in Processing is that a lot of the work required in setting up coordinate systems and plotting them is somewhat of a chore. Specifically, for things like parallel coordinates, with multiple axes each having its own scaling, I initially ended up with some pretty ugly custom code for each case. I did look around in the Libraries section of the Processing website, but didn’t find anything specific to manipulating and plotting coordinate axes.

# Data interactions in parallel coordinates: 40x-60x speedup

This is an update on the visualisation post on parallel coordinates. Understanding the Processing model made me realise that it probably wasn’t a good idea to draw all the samples each time draw() was called. Of course, a fresh call to draw() does not clear away the previous frame’s graphics, which just makes this easier. In the end, I went and explicitly drew only the samples which were under the current mouse position.

The speedup is obvious and massive: whereas the previous version worked well with only 300 samples, the current one processes 18000 samples without breaking a sweat. At 29,000 samples, there is a bit of a slowdown, but only just a bit: you wait 1 second for the highlighting instead of 6-7.
Here’s the new video, using 18k samples. Notice how much denser the mesh is.

# Data interactions in parallel coordinates

Processing is growing on me. Inspired by the different and (very) interesting data visualisation examples I’ve seen, I decided to take a shot at interacting with the parallel coordinates that I generated here. Of course, I had to reduce the number of samples for this demonstration; it’d slow to an unholy crawl otherwise. For this video, I’ve taken 300 samples. The interaction is essentially a mouse-hover highlighting of any sample(s) under it. I fiddled with the colors a bit, but decided that a white-on-greyscale scheme would show up better.
Of course, I still haven’t gotten around to labeling the axes. This I’ll probably pick up next. But as the video demonstrates, there’s a lot more to Processing than meets the eye.

PS: By the way, the actual demonstration ends around the halfway mark; I was trying to figure out how to stop the bloody recorder.

# Getting ActiveRecord to behave nicely with Ruby-Processing in JRuby

Really, all I wanted to do was use Processing from Ruby. jashkenas has kindly written a gem which does just that. There was only a slight wrinkle: I wanted to pull my data from MySQL through ActiveRecord. Well, JRuby makes this process slightly more interesting than usual, so I document the process here. To start off with, install the gem with:

sudo gem install ruby-processing

Go into the directory where the ruby-processing gem is installed, and from there into {ruby-processing.gem.dir}/lib/core. In my case, this was /usr/lib/ruby/gems/1.8/gems/ruby-processing-1.0.9/lib/core.
Once inside there, you’ll find a file named jruby-complete.jar. Get rid of it, because we’ll be replacing it with a fresh (and different) version of jruby-complete.jar. Download the 1.6.3 version of JRuby-complete jar file. Rename it to jruby-complete.jar and put it in place of the jarfile we just deleted.

One step remains: this jarfile does not contain the activerecord-jdbcmysql-adapter gem. Install that with:

java -jar jruby-complete.jar -S gem install activerecord-jdbcmysql-adapter --user-install

You’re good to go now. One more thing, just remember to replace the ActiveRecord adapter string with “jdbcmysql” and allow usage of that gem in your code with:

require 'arjdbc'


# Parallel Coordinate visualisation of 28k, 5-dimensional data

This is the visualisation of the same dataset that I’ve been working on for a while, for exploring different data mining and visualisation techniques. Currently, the axes aren’t labelled, and the color coding is for different categories. Looks like a really interesting way to explore the data.

# A detour through data visualisation

I should have seen it coming. Text communicates well – up to a point. All the current analyses I’ve been working on, starting from Self-Organising Maps to Decision Trees, are very well served by good, solid visualisation. My current need is a way to visualise data structures effectively, even if it is merely a bunch of nodes which can be expanded/collapsed to show more information. Additionally, it would be nice (not necessary) for this to happen interactively, but I don’t mind a command line-driven approach. In fact, I prefer the command line; makes it easier to drive it through a scripting language like Ruby.
So far, I’ve looked at Processing, d3.js and ProtoVis. I like the idea of d3.js: the data-driven approach makes a lot of sense, but I think I need to refresh quite a bit of CSS-fu to take advantage of its capabilities. Apart from visualisation derived from data mining algorithms, showing the data as-is in an aesthetic manner is also a worthy goal at this point. In particular, the parallel coordinates visualisation caught my eye.
Oh well, at least I know what I’ll be doing for the next few days.

# Decision Trees

Continuing on with my exploration of the data mining landscape, I extracted a decision tree out of the data under scrutiny. It is the same data (20,000 samples, 56 dimensions), but the dimensions I’ve chosen for partitioning are a bit different from the raw attributes. I’ve conflated the 56 dimensions into a single number, since this is a test score we’re talking about, and I’m not sure modelling the individual responses for constructing the tree would be the best idea. I’m not really looking for close fits; buckets or bins should be adequate for discretising the response space.
Accordingly, I’ve partitioned the pre-scores as EXCELLENT, GOOD, AVERAGE, etc. The attribute that I’m attempting to predict is also a score, but this is the post-score. Well, not really the score itself; the improvement in score is a more sensible metric to attempt to predict.
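A purely illustrative sketch of the binning (the cut-offs below are made up for illustration; the real thresholds aren’t part of this post):

```java
// Hypothetical discretisation of a pre-score into buckets; the
// thresholds below are invented for illustration only.
public class ScoreBins {
    enum Bin { EXCELLENT, GOOD, AVERAGE, POOR }

    static Bin binFor(double score, double maxScore) {
        double fraction = score / maxScore;
        if (fraction >= 0.85) return Bin.EXCELLENT;
        if (fraction >= 0.65) return Bin.GOOD;
        if (fraction >= 0.40) return Bin.AVERAGE;
        return Bin.POOR;
    }
}
```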

I had a problem with trying to visualise the data, but I’ve been able to make do with indenting the different levels of decision nodes; this should be fine till I really need to use a visualisation library. I think the code could use a bit of work – probability notation does not lend itself easily to elegant variable naming. I’ll probably write a lot more on this topic once I’m into Bayesian Nets.

# Matrix Theory: An essential proof for eigenvector computations

I’ve avoided proofs unless absolutely necessary, but the relation between the same eigenvector expressed in two different bases is important.
Given that $A_S$ is the linear transformation matrix in the standard basis S, and $A_B$ is its counterpart in basis B, we can write the relation between them as:

$A_B = C^{-1}A_SC, \quad A_S = CA_BC^{-1}$

where C is the similarity transformation. We’ve seen this relation already; check here if you’ve forgotten about it.

# Matrix Theory: Diagonalisation and Eigenvector Computation

$A = \left( \begin{array}{cc} 2 & 0 \\ 0 & 5 \end{array} \right)$

The operation it performed on basis vectors of the standard basis S was one of scaling, and scaling only. When operated on by a linear transformation matrix, if a vector is only scaled as a consequence, that vector is an eigenvector of the matrix, and the scalar is the corresponding eigenvalue. This is just the definition of an eigenvector, which I rewrite below:

$A \cdot x = \lambda x$

# Matrix Theory: Basis change and Similarity transformations

### Basis Change

Understand that there is nothing extremely special about the standard basis vectors [1,0] and [0,1]. All 2D vectors may be represented as linear combinations of these vectors. Thus, the vector [7,24] may be written as:

$\left( \begin{array}{c} 7 \\ 24 \end{array} \right) = 7 \cdot \left( \begin{array}{c} 1 \\ 0 \end{array} \right) + 24 \cdot \left( \begin{array}{c} 0 \\ 1 \end{array} \right)$

# Matrix Theory: Linear transformations and Basis vectors

### Symmetric Matrices

A symmetric matrix looks like this:

$A = \left( \begin{array}{cccc} a & d & n & w \\ d & b & h & e \\ n & h & c & i \\ w & e & i & d \end{array} \right)$

Notice how the values are reflected across the diagonal a-b-c-d; equivalently, $A = A^T$. This holds true for any symmetric matrix.

# Eigenvector algorithms for symmetric matrices: Introduction

My main aim in this series of posts is to describe the kernel — or the essential idea — behind some of the simple (and not-so-simple) eigenvector algorithms. If you’re manipulating or mining datasets, chances are you’ll be dealing with matrices a lot. In fact, if you’re starting out with matrix operations of any sort, I highly recommend following Professor Gilbert Strang’s lectures on Linear Algebra, particularly if your math is a bit rusty.

I have several reasons for writing this series. My chief motivation behind trying to understand these algorithms has stemmed from trying to do PCA (Principal Components Analysis) on a medium-sized dataset (20,000 samples, 56 dimensions). I felt (and still feel) pretty uncomfortable about calling LAPACK routines and walking away with the output without trying to understand what goes on inside the code that I just called. Of course, one cannot really dive into the thick of things without understanding some of the basics: in my case, after watching a couple of the lectures, I began to wish that I had better mathematics teachers in school.

# Guiding MapReduce-based matrix multiplications with Quadtree Segmentation

I’ve been following the Linear Algebra series of lectures from MIT’s OpenCourseWare site. While watching Lecture 3 (I’m at Lecture 6 now), Professor Strang enumerates 5 methods of matrix multiplication. Two of those provided insights I wish my school teachers had provided me, but it was the fifth method which got me thinking.
The method is really a meta-method, and is a way of breaking down multiplication of large matrices in a recursive fashion. To demonstrate the idea, here’s a simple multiplication between two 2×2 matrices.
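In symbols, writing a generic 2×2 product out in full:

$\left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \left( \begin{array}{cc} e & f \\ g & h \end{array} \right) = \left( \begin{array}{cc} ae + bg & af + bh \\ ce + dg & cf + dh \end{array} \right)$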

For my graduation project, I’d written a machine vision/2D image algorithms system called IRIS. We’d used it to drive robots around, integrating sonar and visual data. However, that’s not what reawakened my interest in taking a re-look at the IRIS code. Currently, playing around with data sets has had me rifling through books and equations I liked looking at in college. It is almost like a second education, and I think it only right that I get IRIS up and running, if only to steal some code from it (even though it is in C++, and I’m currently doing my investigations using Ruby).

With that said, I dug into my old SourceForge account, where (to my somewhat irrational surprise) the code was still untouched. However, that code will probably not compile as-is. Even though it had been compiled under Linux, it had dependencies on drivers for hardware like the sonar systems and the webcam. I’m still not exactly sure if I want to get all those dependencies resolved; they aren’t my primary focus at this point. So I stripped off whatever was not required and pushed the clean, compiling source to GitHub here.

# Tests increase our Knowledge of a System: A Probabilistic Proof

This was an old proof that was up on my old blog, but since I’m no longer posting to that, I’m reposting it here for posterity. I’m also rewriting the equations in LaTeX, now that I have installed a plugin for that.

I present a simple mathematical device to prove that tests improve our understanding of code. It does not really matter whether this is code written by the test author himself or legacy code. To do this, some simplification of the situation is necessary.

# Playing around with Self Organising Maps

(Click the image to see the evolution of the SOM)

The image above was generated off 200 samples of a large data set. Sample vectors were 56-dimensional bit strings. The similarity measure used was the Hamming Distance. Brighter green represents values at a higher Hamming Distance with respect to zero.
The (very dirty) code is up at Github here.
Unrelated: I’ve been watching Leonard Susskind’s lectures on Statistical Mechanics; they’re a tour de force.

# A pipeline for adaptive bitrate video encoding

I’ve been working on something unusual lately, namely, building a pipeline for encoding video files into formats suitable for HTTP Live Streaming. The actual job of encoding into different formats at different bit rates and resolutions is done using a combination of ffmpeg and x264. To me, the interesting part lies in how we have tried to speed up the process, using the venerable Map-Reduce approach. Before I dive into the details, here’s a quick review of the basic idea of HLS.

Put very simply, adaptive streaming serves video content in multiple qualities, allowing the streaming client a choice in selecting which quality to use, depending upon the bandwidth constraint on the consumer side. This choice is not a one-time choice; depending upon the encode cut duration, the client can switch to higher or lower resolutions dynamically throughout the entire playback of the video stream.
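As a rough illustration (the bitrates and paths below are invented, not from our pipeline), an HLS master playlist is simply a list of variant streams the client can pick between:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=640x360
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=1280x720
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1920x1080
high/playlist.m3u8
```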
How is this accomplished?

# Filesystems: my current reading list

Stuff I’m reading now specific to filesystems…reading Linux kernel source requires a stout heart if you’ve never done it before. And a bit of a shift in mindset: it’s not all objects anymore.

# iOS AppDev Patterns: Linked Content Cursors

I’ve already talked about the Content Cursor pattern. This post is an extension of that idea to increase the flexibility of layout across sections.

To understand the problem, let us revisit a page from our hypothetical iPad magazine.
Here’s the layout of the page in portrait mode.

…and here is the same page in landscape mode.

The first important thing to notice here is that the two Politics sections have changed in position and/or size. More specifically, the upper Politics section has morphed into a tall rectangle, while the lower one has stretched horizontally.

# iOS AppDev Patterns: Content Cursor

iOS offers only the most barebones approach to placing content in a view, namely by specifying absolute coordinates. Of course, one can use autoresizing to make sure the positions of these contents are modified proportionally, but the initial positioning of a content block needs specification of the exact x- and y-coordinate of the top left of this ‘box’. This can render the layout inflexible, tedious and brittle. Every small change in position of a block has ripple effects on the position of succeeding blocks.

Content Cursor solves this problem.

# iOS AppDev Patterns: Asynchronous Image Loader

In a content-heavy application (a news or a magazine app for example), textual content takes precedence over images in terms of loading/rendering. An acceptable solution is to load/render text and request images from the server/cache in a concurrent fashion. I use the term ‘server’ in a very loose fashion, a more appropriate term is probably ‘content source’, since we can retrieve this information from anything ranging from our own servers to Twitter/RSS feeds.

There are a few considerations when implementing a solution like this:

Load request throttling: You’re likely to have several images spread across pages. It is not prudent to let 50 concurrent requests fire for 50 images. You want to throttle your requests to a reasonable number. A simple example of throttling your request is shown later.

Memory management: You want to gracefully handle the situation in which the loader is able to retrieve the image, but the frame on which it is supposed to display the image (however it is implemented) has already been deallocated (for whatever reason).

# Facial Expressions as Volumetric Deformation and Displacements

I was working through The Artist’s Complete Guide to Facial Expression by Gary Faigin. Pretty thorough book, and the facial landmarks for different expressions are well detailed. Except that I was still having problems inventing expressions. I mean, yeah, expressions can vary a lot, but I think what I was looking for was a system with which one could ‘generate’ expressions on demand, without having to copy from something.
The problem I see with copying is that I’d end up copying the facial characteristics of the subject I’m copying as well, which hardly bodes well for the (imaginary) face I’ve cooked up. And yes, it wouldn’t affect my drawing that much, but I was still unsatisfied with that idea.
This afternoon, an idea struck me: what if I used the same system I use to generate poses to generate expressions? If you really think about it, the face is just a bunch of muscles with fat over them and a layer of skin on top. If I could figure out the forms at the appropriate resolution, these forms could be blobs which could be displaced/deformed at will to generate expressions. Boundaries of jostling blobs basically become potential creases and wrinkles. Blobs which get squeezed on all sides by other forms bulge out.
The idea seems to be working.

# Inventing poses

It’s not easy imagining them. But it can get better with practice. And that’s what I’ve been doing. An excellent accompanying read is Force: Dynamic Life Drawing for Animators. It’s one of those books that’s perfect for kicking you out of a rut, inspiring you to loosen your strokes. It’s worth several rereads; I’ve only skimmed through some parts of it, and they are immediately useful.

# Unforgotten

Yeah, I’m on a holiday now. And suddenly, it seems that there’s a ton of things to do.

Focusing a lot on improving my drawing/shading technique. I’ve gone back to pencils, sworn off digital painting till I get some experience working with colors in real life. Not that that prevents me from messing around with Painter. Been doing a few drawing sessions with people at work, after work; so far, the results seem encouraging, i.e., no rotten tomatoes yet.

I used to hate having to get to work to scan my drawings; now I completely detest it. Firstly, I’m too lazy to take a stroll down there while I’m on vacation. Secondly…well, I think the first reason is good enough. So I went over to the local Croma store and snagged an HP Deskjet J610a. I’m primarily using it for scanning, but taking a printout should come in handy too. Installation was a snap, though scans of the white areas of the paper have a slight sepia tone. Nothing a little color correction can’t handle.

As a side effect of this increased drawing output, I went ahead and cleaned up the site after a (very) long time. Installed Plogger today to give the new stuff I’ve been drawing a better presentation, threw away the old home page, et cetera. Well, at least this evening wasn’t a total waste…

Also found the time to run all my books through the MyBookDroid app: not bad so far. The books I’m currently reading are:

• Figure Drawing: Design and Invention – Michael Hampton
• Castles: Their Construction and History – Sidney Toy

Continuing work on Exo next week; I want to try out an easier way of weaving IL without tacking in IL for every aspect that is specified.

Been trying out this very funny game called Magicka. I highly recommend giving it a try, it’s hilarious, drops fantasy cliches left, right and center; and the narrator keeps insisting that the player’s mentor ‘Vlad’ is by no means a vampire.

Checked out the new Incarna character creation system in EVE Online; they did a nice job of it; should be fun when they introduce in-station ambulation.