IRIS, Reloaded

For my graduation project, I'd written a machine vision system called IRIS, built around a collection of 2D image-processing algorithms. We'd used it to drive robots around, integrating sonar and visual data. That, however, isn't what reawakened my interest in the IRIS code. Lately, playing around with data sets has had me rifling through the books and equations I liked in college. It feels almost like a second education, and I think it only right that I get IRIS up and running, if only to steal some code from it (even though it's in C++, and I'm currently doing my investigations in Ruby).

With that said, I dug into my old SourceForge account, where (to my somewhat irrational surprise) the code was still there, untouched. That code probably won't compile as-is, though. Even though it had been built under Linux, it depended on drivers for hardware like the sonar systems and the webcam. I'm still not sure I want to resolve all those dependencies; they aren't my primary focus at this point. So I stripped out whatever wasn't required and pushed the clean, compiling source to GitHub here.

I still need to look at what its main function does; it's something to do with VariationTester, and I think it's the stereovision logic, but I'm not going to delve into it right now. Presented below are some of the things IRIS can do. I'll probably work on it in my spare time.

NOTE: Some of the image links for the IRIS-3D outputs are dead. I stole the HTML from the very old (and now nonexistent) site, so I'll have to rework them. But yeah, I'll fix those, and put up more documentation, soon.

Outputs from the IRIS-XT engine


The left image was taken using an inexpensive digital camera
with default settings. The right image shows the result of histogram
equalisation using IRIS-XT.
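For anyone curious what histogram equalisation actually does, here's a from-scratch sketch of the standard technique on a tiny 8-bit grayscale image (a list of rows). This is just the textbook algorithm, not the IRIS-XT code.

```python
# Histogram equalisation: spread bunched-up grey levels across the
# full range by remapping each level through the normalised
# cumulative distribution function (CDF).

def equalise(image, levels=256):
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # Histogram of grey levels.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(n - cdf_min, 1)  # guard against a flat image
    return [[round((cdf[p] - cdf_min) / denom * (levels - 1)) for p in row]
            for row in image]

# A low-contrast image: values bunched between 100 and 103.
dark = [[100, 101], [102, 103]]
print(equalise(dark))  # → [[0, 85], [170, 255]]
```

Four grey levels squeezed into a 4-value band get spread over the whole 0–255 range, which is exactly the effect visible in the equalised webcam shot above.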


The left image was taken using a Logitech webcam
with default settings. The right image shows the result of quadtree
segmentation using IRIS-XT. Click on the right image to see more
results for different levels.
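The quadtree idea is simple enough to sketch in a few lines: recursively split a square image into four quadrants until each block is roughly uniform. A toy version of the concept, with `threshold` as an illustrative homogeneity parameter; the IRIS-XT module's actual criteria may differ.

```python
# Quadtree segmentation: a block becomes a leaf (x, y, size) when its
# grey-level spread is within the threshold; otherwise it splits into
# four sub-quadrants.

def quadtree(image, x=0, y=0, size=None, threshold=10):
    if size is None:
        size = len(image)
    block = [image[y + j][x + i] for j in range(size) for i in range(size)]
    # Homogeneous (or single-pixel) blocks become leaves.
    if size == 1 or max(block) - min(block) <= threshold:
        return (x, y, size)
    half = size // 2
    return [quadtree(image, x, y, half, threshold),
            quadtree(image, x + half, y, half, threshold),
            quadtree(image, x, y + half, half, threshold),
            quadtree(image, x + half, y + half, half, threshold)]

# 4x4 image: uniform everywhere except the noisy top-right quadrant.
img = [[10, 10, 200, 40],
       [10, 10, 90, 130],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
print(quadtree(img))
```

The uniform quadrants stay as single 2x2 leaves while the noisy top-right corner splits all the way down to pixels — the "different levels" mentioned above correspond to capping this recursion depth.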


The left image is the original image. The right image
shows the result of contrast stretching using IRIS-XT.
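Contrast stretching is the simplest of these operations — a linear rescale so the darkest pixel maps to 0 and the brightest to 255. A minimal sketch of the operation named in the caption, not the IRIS-XT implementation:

```python
# Contrast stretching: linearly map [min, max] of the image onto the
# full output range.

def stretch(image, out_min=0, out_max=255):
    pixels = [p for row in image for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return [[out_min for _ in row] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in image]

print(stretch([[50, 100], [150, 200]]))  # → [[0, 85], [170, 255]]
```

Unlike histogram equalisation, this preserves the relative spacing of grey levels; it only widens the range.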

The left image is the original image. The right image
shows the result of using the Canny edge detector. Click on the images
to see their enlarged versions.
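Where those edges come from can be sketched by hand using the first stages of Canny's pipeline: Sobel gradients and a magnitude threshold. (The full Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; this toy only shows the gradient step, and the threshold value here is arbitrary.)

```python
# Sobel gradient magnitude with a threshold: the core of any
# gradient-based edge detector, including the early stages of Canny.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edges(image, threshold=100):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                out[y][x] = 255
    return out

# A vertical step edge between columns 1 and 2.
img = [[0, 0, 200, 200]] * 4
print(edges(img))
```

The step edge lights up a two-pixel-wide band; Canny's non-maximum suppression is what thins such bands down to the single-pixel contours visible in the output above.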


The left image is an artificial test image for checking
the proper functioning of the Hough Transform module with
default settings. The right image shows the Hough Transform itself.
Click on the images to see their enlarged versions.
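What "the Hough Transform itself" shows is the vote accumulator: each edge pixel votes for every line (θ, ρ) that could pass through it, with ρ = x·cos θ + y·sin θ, and peaks mark detected lines. A from-scratch sketch of that classical transform, not the IRIS module:

```python
import math

# Hough Transform for lines over a set of edge points. The accumulator
# is a dict keyed by (theta index, rounded rho).

def hough_lines(points, thetas=180):
    acc = {}
    for x, y in points:
        for t in range(thetas):
            theta = math.pi * t / thetas
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Collinear points on the vertical line x = 5.
pts = [(5, y) for y in range(10)]
acc = hough_lines(pts)
print(acc[(0, 5)], "votes for theta=0, rho=5")
```

All ten points vote into the (θ=0, ρ=5) cell, so it ties for the accumulator maximum — which is the bright peak you'd see in the transform images above.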


The left image was taken using a Logitech webcam
with default settings. The right image shows the Hough Transform
itself. Click on the images to see their enlarged versions.


The left image is an artificial test image for checking
the proper functioning of the Generalised Hough Transform (GHT) module
with default settings. The right image shows the GHT itself.
Click on the images to see their enlarged versions.


The left image is the Aqua image. The GHT module
was trained to recognise circles. The right image shows the Hough
Transform itself. Click on the images to see their enlarged versions.

Outputs from the IRIS-3D engine



The left image shows one of the 16 images of the subject.
The right image shows the result of uncalibrated 3D reconstruction
by IRIS-3D's space carving engine. Click on the left image
to view all 16 images.


The left image shows one of the two images of the scene
(stereo pair). The right image shows the result of uncalibrated
depth recovery using IRIS-3D’s constant-window stereovision engine.
Click on the left image to view both the images. Click on the right image
to view the depth maps recovered at different resolutions.
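The "constant-window" part is worth unpacking: for each pixel in the left image, slide a fixed-size window along the corresponding right scanline and pick the disparity with the smallest sum of absolute differences (SAD). Here's a one-dimensional toy of that idea — window size and disparity range are illustrative parameters, and the real engine works on 2D windows.

```python
# Constant-window stereo matching on a single scanline: disparity d at
# pixel x minimises the SAD between a fixed window around left[x] and
# the window around right[x - d].

def disparity(left, right, window=1, max_disp=4):
    w = len(left)
    disp = [0] * w
    for x in range(window, w - window):
        best, best_cost = 0, float("inf")
        for d in range(min(max_disp, x - window) + 1):
            cost = sum(abs(left[x + k] - right[x - d + k])
                       for k in range(-window, window + 1))
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# Synthetic stereo pair: every feature sits 2 pixels further left in
# the right view, i.e. a uniform disparity of 2.
left  = [10, 20, 80, 40, 50, 60, 70, 30]
right = left[2:] + [0, 0]
print(disparity(left, right))  # → [0, 0, 0, 2, 2, 2, 2, 0]
```

Larger disparities mean closer objects, which is how a map like this becomes the depth images above; the "different resolutions" correspond to varying the window and image scale.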


The left image shows one of the two images of the scene
(stereo pair). The right image shows the result of
depth recovery using IRIS-3D’s realtime stereovision engine. Click on
the left image to view both the images. Click on the right image to
view the depth maps recovered at different resolutions.