**Volume Number:** 16 (2000)

**Issue Number:** 12

**Column Tag:** QuickDraw 3D Tricks
# Cubby: Multiscreen Desktop VR Part III

*By Tom Djajadiningrat and Maarten Gribnau*

*Reading an input sprocket device and calibrating Cubby*

### Introduction

In this month's final episode of our 'Cubby: Multiscreen Desktop VR' trilogy we explain how you read the InputSprocket driver from part II, how you use it as input for the cameras from part I, and how you calibrate the input device so that it yields the correct head position.

### Relating the Virtual Cubby to the Real World

Before we can talk about calibration we need to establish how the virtual Cubby relates to the real world. We have made life easy for ourselves by choosing the same orientation for the coordinate system of the real Cubby as for the virtual Cubby. We also made the dimensions of the virtual Cubby in QuickDraw 3D units equal to the dimensions of the real-world Cubby in millimetres. This is determined by the constants kHalfEdgeLength and kEdgeLength, which are half an edge length and a whole edge length of Cubby respectively. You can find these constants in MyDefines.h (Listing 1). The Cubby we built has an edge length of 195mm, and so we have given our virtual Cubby an edge length of 195 QuickDraw 3D units. In a sense the scale of the virtual Cubby is arbitrary: as long as everything is scaled equally (the background planes, the model, the lights and the camera) you end up with the same perspectives. If you like you can create a virtual Cubby with an edge length of 1. However, we think that the way we do it here has one major advantage: the coordinates in QuickDraw 3D units that you see during debugging are meaningful, because you can directly relate them to sizes in millimetres in the real world. For example, if you end up with a camera position that is 20000 QuickDraw 3D units from the origin, you know that something has gone haywire, because 20 metres is well out of range of the tracker.

**Listing 1: MyDefines.h**

// the length and half the length of a Cubby edge in QD3D units
#define kEdgeLength 195
#define kHalfEdgeLength (kEdgeLength/2.0)

We use these constants for three things:

- setting the size of the background planes
- scaling the model read from disk to Cubby's display space
- setting the area cut out of the view plane

**Setting the size of the background planes**

Listing 2 shows the creation of the background planes. Each background plane is a polygon; the constant kEdgeLength is used to position the vertices of each polygon.

**Listing 2: DisplaySpace.c**

DisplaySpace
TQ3GeometryObject thePolygonZ0, thePolygonX0, thePolygonY0;
TQ3PolygonData theData;
long i;
TQ3Vertex3D theVerticesZ0[4] = {
	{ { 0,           0,           0 }, nil },
	{ { kEdgeLength, 0,           0 }, nil },
	{ { kEdgeLength, kEdgeLength, 0 }, nil },
	{ { 0,           kEdgeLength, 0 }, nil } };
TQ3Vertex3D theVerticesX0[4] = {
	{ { 0, 0,           kEdgeLength }, nil },
	{ { 0, 0,           0           }, nil },
	{ { 0, kEdgeLength, 0           }, nil },
	{ { 0, kEdgeLength, kEdgeLength }, nil } };
TQ3Vertex3D theVerticesY0[4] = {
	{ { 0,           0, kEdgeLength }, nil },
	{ { kEdgeLength, 0, kEdgeLength }, nil },
	{ { kEdgeLength, 0, 0           }, nil },
	{ { 0,           0, 0           }, nil } };
// Create new polygon objects, four corners per polygon.
theData.numVertices = 4;
// Point to our array of vertices
// for the polygon in the Z=0 plane.
theData.vertices = theVerticesZ0;
// The polygon itself has no attributes.
theData.polygonAttributeSet = nil;
// Create the polygon.
thePolygonZ0 = Q3Polygon_New(&theData);
// Same for the polygon in the X=0 plane...
theData.vertices = theVerticesX0;
thePolygonX0 = Q3Polygon_New(&theData);
// ...and for the polygon in the Y=0 plane.
theData.vertices = theVerticesY0;
thePolygonY0 = Q3Polygon_New(&theData);

**Scaling the model**

We want to scale and translate the model that was read from disk so that it fits within and is centred within Cubby's display space. This is accomplished through the procedure ScaleModelToDisplaySpace within the source file ReadModelAndScaleIt.c (Listing 3). It should look pretty familiar to you as similar procedures are found in most examples from the QuickDraw 3D SDK. Basically, what we do here is calculate the bounding box of the model and use the dimensions of the bounding box to scale and translate the model so that it fits and is centred within Cubby's display space. Let's look at the code in detail.

The first thing we do is call GetModelBoundingBox, after which theViewBBox holds the bounding box. From the bounding box we can work out its dimensions along the three world axes. Next comes a check to see whether all of these dimensions are smaller than or equal to kQ3RealZero. This could happen if the file read from disk contained a single point. We want to avoid a bounding box whose three dimensions are all effectively zero, as that would give scaling problems later. So if it happens we grow the bounding box by a very small amount (0.0001). We work out the bounding box's centre theBBoxCenter by calling Q3Point3D_AffineComb with the box's minimum and maximum corners. We also calculate theBBoxDiagonal, the length of the diagonal of the bounding box, and theDisplaySpaceDiagonal, the length of the diagonal of Cubby's display space. The ratio of these two lengths gives us the scale factor that makes the model's bounding box fit within Cubby's display space. Note the fiddle factor kScaleFineTune. This factor makes sure that the model ends up slightly smaller than Cubby's display space, as it does not look very good when the model touches the background planes. We chose kScaleFineTune = 0.75, though you can of course change it should you prefer a tighter or looser fit. We can now work out the required transformation matrices. The first one, theTransMatrix1, translates the centre of the model to the origin. The second one, theScaleMatrix, scales the model around the origin. The third, theTransMatrix2, translates the model to the centre of the display space. Finally, we concatenate these three matrices in the order in which they were created. The resulting matrix in the fModelMatrix field of gDoc is submitted for rendering in SubmitOneView of the source file Rendering.c.

**Listing 3: ReadModelAndScaleIt.c**

ScaleModelToDisplaySpace
void ScaleModelToDisplaySpace(DocumentPtr inDoc)
{
#define kScaleFineTune 0.75
	TQ3BoundingBox theViewBBox;
	float theXSize, theYSize, theZSize;
	float theWeights[2] = { 0.5, 0.5 };
	TQ3Point3D thePoints[2];
	TQ3Point3D theBBoxCenter;
	TQ3Vector3D theDiagonalVector;
	float theBBoxDiagonal;
	float theDisplaySpaceDiagonal;
	float theScale;
	TQ3Matrix4x4 theTransMatrix1, theTransMatrix2, theScaleMatrix;

	// Get the bounding box of the model.
	GetModelBoundingBox(inDoc, &theViewBBox);
	// Work out the dimensions along the axes.
	theXSize = theViewBBox.max.x - theViewBBox.min.x;
	theYSize = theViewBBox.max.y - theViewBBox.min.y;
	theZSize = theViewBBox.max.z - theViewBBox.min.z;
	// If we have a point model, 'theViewBBox' would end up being a
	// 'singularity' at the location of the point. As this bounding
	// 'box' is used in scaling the model, this could give problems.
	if (theXSize <= kQ3RealZero &&
			theYSize <= kQ3RealZero &&
			theZSize <= kQ3RealZero)
	{
		// Grow the bounding box by a very small amount.
		theViewBBox.max.x += 0.0001;
		theViewBBox.max.y += 0.0001;
		theViewBBox.max.z += 0.0001;
		theViewBBox.min.x -= 0.0001;
		theViewBBox.min.y -= 0.0001;
		theViewBBox.min.z -= 0.0001;
	}
	// Work out the centre of the bounding box.
	thePoints[0] = theViewBBox.min;
	thePoints[1] = theViewBBox.max;
	Q3Point3D_AffineComb(thePoints, theWeights, 2, &theBBoxCenter);
	// The length of the diagonal of the bounding box.
	Q3Point3D_Subtract(&theViewBBox.max, &theViewBBox.min,
		&theDiagonalVector);
	theBBoxDiagonal = Q3Vector3D_Length(&theDiagonalVector);
	// The length of the diagonal of the display space.
	theDisplaySpaceDiagonal = sqrt(3 * pow(kEdgeLength, 2));
	// The scale to make the model fit within the space.
	// kScaleFineTune = 1 gives a tight fit. The smaller you make it,
	// the smaller the model ends up within the display space.
	theScale = kScaleFineTune *
		(theDisplaySpaceDiagonal / theBBoxDiagonal);
	// First, we set up a matrix for translating
	// the centre of the model to the origin.
	Q3Matrix4x4_SetTranslate(&theTransMatrix1,
		-theBBoxCenter.x, -theBBoxCenter.y, -theBBoxCenter.z);
	// Second, we set up a matrix for scaling
	// the model to fit Cubby's space.
	Q3Matrix4x4_SetScale(&theScaleMatrix,
		theScale, theScale, theScale);
	// Third, we set up a matrix for translating the model
	// to the middle of Cubby's display space.
	Q3Matrix4x4_SetTranslate(&theTransMatrix2,
		kEdgeLength/2.0, kEdgeLength/2.0, kEdgeLength/2.0);
	// Finally, concatenate the matrices.
	Q3Matrix4x4_Multiply(&theTransMatrix1, &theScaleMatrix,
		&inDoc->fModelMatrix);
	Q3Matrix4x4_Multiply(&inDoc->fModelMatrix, &theTransMatrix2,
		&inDoc->fModelMatrix);
}
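
As a quick check on the arithmetic, the scale-factor calculation can be reproduced in plain C. FitScale below is our own illustrative helper, not part of the Cubby sources; it assumes the kEdgeLength and kScaleFineTune values from the listings above:

```c
#include <math.h>

#define kEdgeLength    195.0
#define kScaleFineTune 0.75

/* Scale factor that fits a model bounding box, given the length of
   its diagonal, into Cubby's display space, mirroring the ratio
   computed in ScaleModelToDisplaySpace. */
static double FitScale(double bboxDiagonal)
{
    /* Diagonal of a cube with edge kEdgeLength. */
    double displayDiagonal = sqrt(3.0) * kEdgeLength;
    return kScaleFineTune * (displayDiagonal / bboxDiagonal);
}
```

After scaling, the model's diagonal is exactly kScaleFineTune times the display-space diagonal, so the model always ends up comfortably inside the cube.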

**Setting the view plane area**

We need to indicate the size of the area that each view plane camera cuts out of its view plane. We need to do this only once during initialization of each camera since the size of this area does not change during execution of our application. The size of the area that is cut out of the view plane is determined by the camera parameters halfWidthAtViewPlane and halfHeightAtViewPlane. Since the area we wish to cut out is square and since a background polygon completely fills it, these camera parameters both equal kHalfEdgeLength (Listing 4).

**Listing 4: ViewCreation.c**

newViewPlaneCamera
// The aspect ratio of these parameters should equal
// that of the paneWidth and paneHeight.
// In Cubby's case the screens are square so the aspect ratio = 1.
thePerspectiveData.halfWidthAtViewPlane = kHalfEdgeLength;
thePerspectiveData.halfHeightAtViewPlane = kHalfEdgeLength;

Now that we know how the virtual Cubby relates to the real world, we can turn our attention to calibration.

### Calibration Procedure

**Why calibrate?**

To ensure that we correctly measure the user's head position we need to calibrate the head tracker. Calibration would not be necessary if we could place the origin of the tracker's base unit at the origin of Cubby and the sensor in the middle of the user's eye. Since neither is possible, we need to take the offsets in position into account. Also, since we wish to mount the Dynasight base unit above the display space at an angle to provide the best coverage of the user's work space, we need to take this rotation into account. What we are looking for is the matrix which transforms the InputSprocket coordinates that we get from the tracker into the QuickDraw 3D world coordinate system. So how do we work out this transformation matrix?

**Conversion from InputSprocket units to millimetres**

The numbers that we read from our Dynasight InputSprocket driver into our application are not in millimetres. The driver works with a coordinate system in which the axes run from the minimum number kISpAxisMinimum to the maximum number kISpAxisMaximum. A look in InputSprocket.h tells us:

#define kISpAxisMinimum 0x00000000U
#define kISpAxisMaximum 0xFFFFFFFFU

Unlike most game applications, we are not interested in relative movements but in absolute positions. In order to calculate what the coordinates from the InputSprocket driver correspond to in the real world, we need to agree upon a conversion factor. What we have done is set kISpAxisMaximum to 10 metres. Though in a sense our choice is arbitrary, we would like to think it is an informed one, for two reasons. First, 10 metres is well above the maximum range of the Dynasight (6m), even with the largest possible reflective target (75mm diameter). Second, it gives us plenty of resolution: kISpAxisMaximum / 10 metres = 4,294,967,295 / 10,000 millimetres ≈ 429,496.73 steps/mm. Tracking technology would have to improve pretty dramatically for us to run out of resolution.

So if:

4,294,967,295 InputSprocket units = 10 metres

then

1 InputSprocket unit = 10,000 mm/4,294,967,295 = 2.328306437080797e-6 mm

This is our multiplication factor to convert from InputSprocket units to millimetres.
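
In code, the conversion boils down to a couple of constants. This is a sketch in plain C; the names kAxisRange, kMMPerUnit and UnitsToMM are ours, chosen for illustration:

```c
#include <math.h>

/* Conversion between InputSprocket axis units and millimetres, under
   our convention that kISpAxisMaximum corresponds to 10 metres. */
#define kAxisRange  4294967295.0            /* kISpAxisMaximum - kISpAxisMinimum */
#define kMMPerUnit  (10000.0 / kAxisRange)  /* ~2.3283e-6 mm per unit */

static double UnitsToMM(double units)
{
    return units * kMMPerUnit;
}
```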

**Working out the transformation matrix**

The first step is to convert the Dynasight InputSprocket coordinates to millimetres using the conversion factor we just established. In the code this is handled by ISp_AxisToCubbyAxis (Listing 10), so the points that enter the calibration calculation are already expressed in millimetres.

We then need to express the Cubby coordinate system in Dynasight coordinates. The easiest way would be to measure points on Cubby's main axes, but since these are out of sight of the Dynasight we have to use points on the top edges of Cubby's screens. We use the points P, Px and Pz (Figure 1). In the real-world coordinate system in millimetres P = {0, dy, 0}, Px = {195, dy, 0} and Pz = {0, dy, 195}, where dy is the distance in millimetres above Cubby's ground plane. In our case dy is 204mm, though your setup will of course be different. Though we are working on integrating a perky animated paper clip within the Cubby application which will talk you through the calibration procedure, for the moment we take the following approach. We place the sensor consecutively at P, Px and Pz, measure the coordinates of these points by means of our separate little app DynasightReader, and jot down the readings. We then fill in these coordinates where the variables P, Px and Pz are initialized in CalibrationMatrix (Listing 5). For a real app you would of course have to think of a more elegant way to capture these coordinates.

**Figure 1.** The points to work out the calibration matrix.

Using these three points we calculate the vectors PPx and PPz and normalize them into Xnorm and Znorm respectively. Using the cross product of these vectors, we calculate a unit vector Ynorm which is perpendicular to both Xnorm and Znorm. We now have an orthonormal base, formed by the unit vectors Xnorm, Ynorm and Znorm, which specifies the orientation of Cubby. The rotation matrix that rotates from the orientation of the Dynasight coordinate system into the orientation of the Cubby coordinate system can be constructed by using these unit vectors as the columns of the matrix. To save some typing we first set the rotation matrix to a 4x4 identity matrix and then fill in the top left 3x3 submatrix with the orthonormal base.

Next we work out the vector PO from P to the origin of Cubby, expressed in the Dynasight coordinate system, by multiplying the unit vector in the y-direction, Ynorm, by -dy, the distance from P down to the origin. By adding the vector PO to point P we find Cubby's origin expressed in the Dynasight coordinate system. This is of course a very useful point, but we could not measure it directly since it is not in the Dynasight's line of sight.

What we are ultimately looking for is a calibration matrix which, when applied to Cubby's origin as expressed in the Dynasight coordinate system, gives us {0,0,0}. So what we do is rotate the origin using the rotation matrix theRotMatrix we just created, and then use the resulting point temp to create a translation matrix theTransMatrix which translates temp to {0,0,0}.

Finally, we concatenate the matrices: first we rotate, then we translate. The resulting matrix theCalMatrix is returned.

**Listing 5: Calibration.c**

CalibrationMatrix
TQ3Matrix4x4 CalibrationMatrix()
{
	// These coordinates in millimetres were measured
	// with the DynasightReader app.
	TQ3Point3D P  = { -0.7, -144.6, 218.15 };
	TQ3Point3D Px = { 136.2, -94.75, 351.55 };
	TQ3Point3D Pz = { -138.45, -93.45, 343.75 };
	TQ3Matrix4x4 theTransMatrix, theRotMatrix, theCalMatrix;
	// Vertical distance from P to the ground plane in mm.
	float dy = 204;
	TQ3Vector3D PPx, PPz, Xnorm, Ynorm, Znorm, PO;
	TQ3Point3D temp, O;

	// Parallel to X, 195mm.
	Q3Point3D_Subtract(&Px, &P, &PPx);
	// X normalized.
	Q3Vector3D_Normalize(&PPx, &Xnorm);
	// Parallel to Z, 195mm.
	Q3Point3D_Subtract(&Pz, &P, &PPz);
	// Z normalized.
	Q3Vector3D_Normalize(&PPz, &Znorm);
	// Y normalized.
	Q3Vector3D_Cross(&Znorm, &Xnorm, &Ynorm);
	// Start by setting the rotation matrix to identity
	// so that we do not have to fill in all 16 fields.
	Q3Matrix4x4_SetIdentity(&theRotMatrix);
	// The columns of the rotation matrix become the vectors
	// of the orthonormal base of the Cubby coordinate system
	// in the Dynasight coordinate system.
	theRotMatrix.value[0][0] = Xnorm.x;
	theRotMatrix.value[0][1] = Ynorm.x;
	theRotMatrix.value[0][2] = Znorm.x;
	theRotMatrix.value[1][0] = Xnorm.y;
	theRotMatrix.value[1][1] = Ynorm.y;
	theRotMatrix.value[1][2] = Znorm.y;
	theRotMatrix.value[2][0] = Xnorm.z;
	theRotMatrix.value[2][1] = Ynorm.z;
	theRotMatrix.value[2][2] = Znorm.z;
	// Work out the vector PO (from P to the origin of Cubby)
	// expressed in the Dynasight coordinate system. For this
	// you need the distance dy from P to the origin in mm.
	Q3Vector3D_Scale(&Ynorm, -dy, &PO);
	// Now we can work out the origin of Cubby
	// expressed in the Dynasight coordinate system.
	Q3Point3D_Vector3D_Add(&P, &PO, &O);
	// Rotate the origin into the Cubby coordinate system...
	Q3Point3D_Transform(&O, &theRotMatrix, &temp);
	// ...and use the resulting point to create the translation matrix.
	Q3Matrix4x4_SetTranslate(&theTransMatrix,
		-temp.x, -temp.y, -temp.z);
	// Concatenate the two matrices:
	// first we rotate, then we translate.
	Q3Matrix4x4_Multiply(&theRotMatrix, &theTransMatrix,
		&theCalMatrix);
	return theCalMatrix;
}

### Integration of InputSprocket Code with Cubby

In the previous episode (Gribnau and Djajadiningrat, 2000), we introduced the basics of communication between InputSprocket applications and drivers. Now, we will show how Cubby configures and reads data from InputSprocket drivers. For Cubby, it doesn't matter if the data comes from our Dynasight driver or from any other InputSprocket driver.

**Initializing InputSprocket**

Cubby, like any other InputSprocket application, needs to initialize InputSprocket before it can use it. Listing 6 shows the ISp_Initialize routine that performs the initialization within the Cubby application. First of all, InputSprocket is loaded with a call to ISpStartup. Then Cubby asks InputSprocket to create new virtual elements based on its input needs with the ISpElement_NewVirtualFromNeeds call. As shown in our previous episode, InputSprocket describes every input device in terms of its elements (buttons, axes, directional pads, etc.). An application can ask InputSprocket to create new elements based on its needs. With the needs and the elements, Cubby can call ISpInit. In response, InputSprocket will initialize the drivers, and the drivers will auto-configure to Cubby's needs. This means that the drivers will try to find an optimal match between the needs of Cubby and the elements that they have.

**Listing 6: ISpCubby.c**

ISp_Initialize
OSStatus ISp_Initialize(OSType creator)
{
	OSStatus err;

	err = ISpStartup();
	if (err) {
		return err;
	}
	// Set up the InputSprocket elements.
	err = ISpElement_NewVirtualFromNeeds(
		kNeed_NeedCount, gNeeds, gInputElements, 0);
	if (err) {
		return err;
	}
	// Initialize InputSprocket and give it our needs.
	err = ISpInit(
		kNeed_NeedCount,   // number of needs
		gNeeds,            // array of needs
		gInputElements,    // array of references to virtuals
		creator,           // creator code
		'0015',            // subcreator code
		0,                 // flags (should be 0)
		kResourceID_setl,  // a set list resource
		0);                // reserved
	if (err) {
		return err;
	}
	// We leave keyboard and mouse disabled for ISp.
	return noErr;
}

Listing 7 shows the needs of Cubby. It is an array of three structures of type ISpNeed. There are three needs because Cubby needs to know the position of the user's head in three dimensions, so the array has one entry per coordinate of the head position. Each field of a need structure is filled with appropriate values. The need for the x coordinate, for instance, sets the name field to "Head X". Another important field is the icon field, which is set to the resource id of the icon that we want shown in the configuration dialog box. Furthermore, the element type (kISpElementKind_Axis) and the element label (kISpElementLabel_Axis_XAxis) are set.

**Listing 7: ISpCubby.c**

Cubby's need array
static ISpNeed gNeeds[kNeed_NeedCount] =
{
{ "\pHead X", kResourceID_needIconBase+kNeed_HeadX, 0,
0, kISpElementKind_Axis, kISpElementLabel_Axis_XAxis,
0, 0, 0, 0 },
{ "\pHead Y", kResourceID_needIconBase+kNeed_HeadY, 0,
0, kISpElementKind_Axis, kISpElementLabel_Axis_YAxis,
0, 0, 0, 0 },
{ "\pHead Z", kResourceID_needIconBase+kNeed_HeadZ, 0,
0, kISpElementKind_Axis, kISpElementLabel_Axis_ZAxis,
0, 0, 0, 0 },
};

**Configuration**

The drivers might do a good job during auto-configuration. However, a major advantage of InputSprocket is that it allows users to reconfigure the mapping of application needs to driver elements. Listing 8 shows that a single call suffices to start the configuration process. The ISpConfigure call brings the configuration dialog box onto the screen if the Dynasight device (or any other device with an InputSprocket driver) is hooked up (Figure 2). Internally, InputSprocket starts the configuration process and negotiates with the driver, as was explained in last month's episode. Cubby is not involved in this process, which is nice because it does not have to know which device element is delivering data for each coordinate. The pull-down menus in the configuration dialog box allow users to select the element that is mapped to each coordinate. If no InputSprocket device is available, InputSprocket notifies the user with a different dialog box (Figure 3).

**Listing 8: ISpCubby.c**

ISp_ShowConfigureDialog
OSStatus ISp_ShowConfigureDialog(void)
{
return ISpConfigure(nil);
}

**Figure 2.** The configuration dialog box with the Dynasight selected.

**Figure 3.** The dialog box shown if there are no InputSprocket devices.

**Reading data from InputSprocket drivers**

After initialization and configuration, Cubby can start to read data from the elements. Listing 9 shows how Cubby reads the x coordinate of the head position; reading the other coordinates follows the same procedure. First we check whether there are events waiting for the axis element with a call to ISpElement_GetNextEvent. In other words, we check whether the x coordinate has changed since the last time this routine was called. If there was no error and an event was found, the raw axis value in InputSprocket coordinates is extracted with the ISpElement_GetSimpleState call. As was mentioned above, the raw coordinate needs to be converted to millimetres to be useful in Cubby. This is done by calling ISp_AxisToCubbyAxis. Finally, the events for the axis element are flushed and the new coordinate is stored.

**Listing 9: ISpCubby.c**

ISp_GetHeadX
void ISp_GetHeadX(ISpCubbyState *cubbyState)
{
	OSStatus error = noErr;
	ISpElementEvent event;
	Boolean wasEvent;
	ISpAxisData axisValue;
	float xValue = cubbyState->headX;

	// Check the axis to see if it was moved; if so, use the new value.
	error = ISpElement_GetNextEvent(
		gInputElements[kNeed_HeadX], sizeof(event),
		&event, &wasEvent);
	if (!error && wasEvent) {
		error = ISpElement_GetSimpleState(
			gInputElements[kNeed_HeadX], &axisValue);
		if (!error) {
			xValue = ISp_AxisToCubbyAxis(axisValue);
		}
		ISpElement_Flush(gInputElements[kNeed_HeadX]);
	}
	cubbyState->headX = xValue;
}

Listing 10 shows the conversion from raw coordinates to coordinates in millimetres. The raw coordinates are assumed to be in the format described above, where kISpAxisMaximum equals 10 metres. Our ISp_AxisToCubbyAxis routine converts them to millimetres in three steps. First the value is scaled to lie between 0 and 1. Then a mid-point is introduced: InputSprocket coordinates are positive only, but axis values usually take both positive and negative values, so the coordinate is transformed to fit between -1 and 1. Finally, a multiplication by 10,000 gives us the desired millimetres (assuming the driver follows the same convention). Now the coordinates are ready to drive Cubby's cameras.

**Listing 10: ISpCubby.c**

ISp_AxisToCubbyAxis
float ISp_AxisToCubbyAxis(ISpAxisData axis)
{
	float value;
	static float r = kISpAxisMaximum - kISpAxisMinimum;

	value = (axis - kISpAxisMinimum) / r;  // value between 0 and 1
	value *= 2;                            // value between 0 and 2
	value -= 1;                            // value between -1 and 1
	value *= 10000;                        // value between -10000 and 10000
	return value;
}
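
A double-precision re-implementation makes the mapping easy to check (AxisToMM is our own name for this sketch; the original uses float and the ISpAxisData type):

```c
#include <math.h>

/* Map the unsigned 32-bit axis range onto -10000..+10000 mm,
   mirroring ISp_AxisToCubbyAxis in double precision. */
static double AxisToMM(double axis)
{
    double r = 4294967295.0;   /* kISpAxisMaximum - kISpAxisMinimum */
    double value = axis / r;   /* between 0 and 1 */
    return (value * 2.0 - 1.0) * 10000.0;  /* between -10000 and +10000 */
}
```

The two ends of the axis range land on -10 metres and +10 metres, and the mid-point of the range lands on zero, which is exactly the convention the Dynasight driver and Cubby agree on.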

### Applying the Calibration Matrix

Now that we can read the raw coordinates from the Dynasight and have the calibration matrix, all that remains is to transform the raw coordinates into a calibrated head position by applying the calibration matrix. This happens in AdjustCameras of ViewPlaneCamera.c (Listing 11). Using the QuickDraw 3D call Q3Point3D_Transform we apply the calibration matrix fCalMatrix to the head position H in raw coordinates and end up with a calibrated head position C in the Cubby coordinate system.

**Listing 11: ViewPlaneCamera.c**

AdjustCameras
// apply the calibration matrix to convert the
// raw head position to a camera position
// in world coordinates.
Q3Point3D_Transform( &H,
&inDoc->fCalMatrix,
&C);

### Troubleshooting

If things are behaving erratically make sure you have checked all the troubleshooting hints in the previous episodes. Here is one more possible pitfall.

If you've actually got round to building a Cubby setup, you may find that lining up the three projections drives you completely up the wall. It is indeed awkward, but there is one thing which can make it unnecessarily difficult. Even though the QuickDraw 3D panes may be square (228x228 pixels), this does not necessarily mean that the projected images are square: somewhere in the chain of scan converter and projector you may lose some 'squareness'. If the projections of the panes are not square, you will never be able to make them line up. In this case, check whether your scan converter or projector has a control for the aspect ratio of the image; most scan converters and projectors do. If not, you can always change the aspect ratio of the QuickDraw 3D panes in the code. The easiest thing to do is to point the projectors at a wall and fiddle with the settings until the resulting aspect ratio is exactly 1:1. To lend you a hand with lining up the projections, we provide an optional 'lining up' texture with edge markings and centre lines (Figure 4). You can activate it in MyDefines.h by setting kCubePlanesTexture to 500. Your screen should then look like Figure 5.

**Figure 4.** Lining up texture with edge markings and centre lines

**Figure 5.** The lining up texture activated.

### Conclusion

In three articles we have covered quite a lot of ground. You should by now have a pretty good idea of how virtual reality systems based on multiple head-tracked displays such as CAVE and Cubby work. In the process you have learned quite a few nifty techniques. You have learned how to create and read an InputSprocket driver for an input device with three degrees of freedom. You now know how to support multiple views and how to mirror images. And if your matrix skills were a bit rusty, they should all be nice and shiny by now. Once you have a true 3D display such as Cubby the possibilities are virtually endless. We hope to be back with more information. In the meantime, please check out the Cubby web pages starting at http://www.io.tudelft.nl/id-studiolab/cubby/index.html.

### References

- Djajadiningrat, J.P. & Gribnau, M.W. (2000). Cubby: Multiscreen Desktop VR Part I. Multiple views and mirroring images in QuickDraw 3D. MacTech.
- Gribnau, M.W. & Djajadiningrat, J.P. (2000). Cubby: Multiscreen Desktop VR Part II. How to create an InputSprocket driver for a 3D input device. MacTech.

Last month, when **Maarten** threatened to withdraw his indispensable contribution to this month's final episode of Cubby, **Tom** hurried to explain to him that the trick in crafting 'about the author' notes is to spend at least as much time on it as on the article itself. He is disappointed that his suggestion for an article with guidelines for writing 'about the author' notes was received with little enthusiasm by MacTech's editors.