Remember when stereoscopic images were all the rage? Crowds of people would gather around printed images of a random menagerie of colored dots and swirls in the hope that some 3D scene would emerge from the chaos. Arguments might sometimes break out over whether there was anything to see at all or whether it was just some big conspiracy reminiscent of the Emperor's new clothes.
How the images work:
Fig. 1: Two eyes focusing on a virtual object
The reason the images can generate 3D scenes is the sophisticated pattern and feature recognition that goes on within the human eyes and brain. When we use our eyes to see the world around us, the brain takes the two-dimensional images from each eye and analyzes them by looking for common features in each side. When the same object is seen by both eyes, the brain combines the two views into a single whole.
The depth dimension is factored into the scene by measuring the differences between the two sides; a distant object will not move much between the left and the right but a near one will. You can see this effect by closing one eye and fixing on some object, then switching to the other eye and back again. The brain is so sophisticated at recognizing these differences that it will not only calculate the distance to an object but also its three-dimensional shape as well.
Stereoscopic images make use of this visual circuitry by giving the brain enough common features in the left and right eye-images so that it can recognize the depth information in the scene. It all hinges on a simple trick.
The images are viewed by looking into the distance, through the image to some point on the other side (see Fig. 1). While the eyes are looking through the image, the lens must actually focus on the image, something it is usually not asked to do. Making this work involves some trial and error.
In order for the brain to recognize a rendered object, the patterns at point 'A' and point 'B' must match. This is the trick we take advantage of when generating stereoscopic images. The scene is generated by placing pairs of patterns on a flat two-dimensional image in such a way as to fool the brain into seeing depth. The depth ('d') is controlled by varying the width ('w') between the matching patterns in the image.
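To make the width-to-depth relationship concrete, here is a minimal sketch of one possible mapping. The function name and the constants are illustrative assumptions for this article, not values taken from the demo code, and a real renderer may well use a non-linear mapping:

```javascript
// Illustrative only: map a depth factor to a pattern separation 'w'
// in pixels. Nearer objects (smaller depth factor) get a smaller
// separation, so the eyes converge more and the object appears closer.
// The constants here are assumptions for this sketch.
function separationForDepth(nDepthFactor)
{
    var nBase = 40; // separation at depth factor 0
    var nStep = 6;  // extra pixels per depth-factor increment
    return nBase + nDepthFactor * nStep;
}
```

With this mapping, a background at depth factor 3 repeats its pattern every 58 pixels, while text at depth factor 1 repeats every 46 pixels and so floats in front of it.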
The first step is to define a reasonable model of the scene. For the purposes of this article this has been simplified:
var nWidth = 100;
var nHeight = 30;
var aScene = new Array();
// paint a uniform background of depth factor (DF) 3
for ( var i = 0; i < nHeight; i++ )
{
    var aLine = new Array();
    for ( var j = 0; j < nWidth; j++ ) aLine.push(3);
    aScene.push(aLine);
}
The scene is a two-dimensional array of depth factors. The term 'depth factor' is used to indicate that while there is a correlation between depth factor and the apparent distance to the object, the relationship is not linear. In the code above I've created a scene with a uniform background of 3.
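Since the scene is just an array of numbers, it can be sanity-checked by dumping it as rows of digits. This helper is my own addition for illustration, not part of the demo code:

```javascript
// Return the scene as one string per row, each depth factor printed
// as a single digit, so raised objects stand out visually in the dump.
function dumpScene(aScene)
{
    var aRows = [];
    for ( var i = 0; i < aScene.length; i++ )
        aRows.push(aScene[i].join(''));
    return aRows.join('\n');
}
```

Running this on the freshly painted scene should show thirty identical rows of 3s; after the plaque and text are drawn, the 2s and 1s become visible.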
The next task is to place some objects in the scene. In the example below, I've drawn some text onto a raised rectangular plaque. For the purposes of brevity I haven't included the code for the drawString function. For further details, have a look at the demo code.
// draw a rectangle at DF 2
for ( i = 7; i < 28; i++ ) for ( j = 10; j < nWidth-15; j++ ) aScene[i][j] = 2;
// render some text onto aScene at DF 1
var str = document.getElementById('theText').value;
var nWordWidth = wordWidth(str);
drawString(str, Math.floor((nWidth - nWordWidth - nMargin)/2), 12, aScene,1);
Now to render the scene stereoscopically…
Our eyes sit side by side, which means that the depth information will only be recognized if it is rendered horizontally. This adds a convenient simplification to the algorithm, as it allows us to break the image up into a set of horizontal scan lines which can be rendered independently. Objects are painted onto the scan line in order of depth, starting with the nearest and ending with the furthest away.
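The demo's renderer isn't reproduced here, but the scan-line idea can be sketched as follows. This simplified version links each pixel to the one a separation's width to its left, ignoring the near-to-far painting order and hidden-surface handling a full implementation would need; `fnSeparation` is assumed to map a depth factor to a pixel separation:

```javascript
// Render one scan line of random dots. Each pixel must repeat the
// colour of the pixel one separation to its left so that the two
// eyes can fuse the pair; pixels with no partner get a fresh random dot.
function renderScanLine(aDepthLine, fnSeparation)
{
    var nWidth = aDepthLine.length;
    var aPixels = new Array(nWidth);
    for ( var x = 0; x < nWidth; x++ )
    {
        var nLink = x - fnSeparation(aDepthLine[x]);
        if ( nLink >= 0 )
            aPixels[x] = aPixels[nLink];           // repeat the partner's colour
        else
            aPixels[x] = Math.random() < 0.5 ? 0 : 1; // fresh random dot
    }
    return aPixels;
}
```

Calling this once per row of `aScene` produces the full dot image; where the depth factor changes, the repeat width changes with it, and that is what the brain reads as depth.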
March 27, 2003
Revised: December 2, 2004