
OpenGL For Mac Users - Part 2

Volume Number: 15 (1999)
Issue Number: 1
Column Tag: Power Graphics

OpenGL For Mac Users - Part 2

by Ed Angel, University of New Mexico

Advanced capabilities: architectural features and exploiting hardware

Introduction

In Part 1, we developed the basics of OpenGL and argued that the OpenGL API provides an efficient and easy-to-use interface for developing three-dimensional graphics applications. However, if an API is to be used for developing serious applications, it must be able to exploit modern graphics hardware. In this article, we shall examine a few of the advanced capabilities of OpenGL.

We shall be concerned with three topics. First, we shall examine how to introduce realistic shading by defining material properties for our objects and adding light sources to the scene. Then we shall consider the mixing of geometric and digital techniques afforded by texture mapping. We shall demonstrate these capabilities through the color cube example that we developed in our first article. Finally, we will survey some of the more advanced features of OpenGL, concentrating on three areas: writing client-server programs for networked applications, the use of OpenGL buffers, and the ability to tune performance to the available hardware.

Figure 1 is a more detailed view of the pipeline model that we introduced in Part 1. Geometric objects such as polygons are defined by vertices that travel down the geometric pipeline, while discrete entities such as bits and picture elements (pixels) travel down a parallel pipeline. The two pipelines converge during rasterization (or scan conversion). Consider what happens to a polygon during rasterization. First, the rasterizer must compute the interior points of the polygon from the vertices. The visibility of each point must be determined using the z (or depth) buffer that we discussed in Part 1. If a point is visible, then a color must be determined for it. In the simple model that we used in Part 1, a color either was assigned to an entire polygon or was interpolated across the polygon using the colors at the vertices. Here we shall consider two other possibilities that can be used either alone or together. First, we can assign colors based on light sources and material properties that we attach to the polygon. Second, we can use the pixels from the discrete pipeline to determine or alter the color, a process called texture mapping. Once a color has been determined, we can place the point in the frame buffer, place it in one of the other buffers, or use the other buffers and tables to modify this color.


Figure 1. Pipeline Model.

In Part 1, we developed a sequence of programs that displayed a cube in various ways. These programs demonstrated the structure of most OpenGL programs. We divided our programs into three parts. The main function sets up the OpenGL interface with the operating system and defines the callback functions for interaction. It will not change in our examples here. The myinit function defines user parameters. The display callback typically contains the graphical objects. Our examples here will modify these two functions.

Lights and Materials

In simple graphics applications, we assign colors to lines and polygons that are used to color or shade the entire object. In the real world, objects do not appear in constant colors. Rather, colors change gradually over surfaces due to the interplay between the light illuminating the surface and the absorption and scattering properties of the surface. In addition, if the material is shiny, such as a metallic surface, the location of the viewer will affect what shade she sees. Although physically based models of these phenomena can be complex, there are simple approximate models that work well in most graphical applications.


Figure 2. The Phong Shading Model.

OpenGL uses the Phong shading model, which is based on the four vectors shown in Figure 2. Light is assumed to arrive from either a point source or a source located infinitely far from the surface. At a point on the surface, the vector L is the direction from the point to the source. The orientation of the surface is determined by the normal vector N. Finally, the model uses the direction of a perfect reflector, R, and the angle between R and the vector to the viewer, V. The Phong model contains diffuse, specular, and ambient terms. Diffuse light is scattered equally in all directions. Specular light reflects in a range of angles close to the angle of perfect reflection, while ambient light models the contribution from a variety of sources and reflections too complex to calculate individually. The Phong model can be computed at any point where we have the required vectors and the local absorption coefficients. For polygons, OpenGL applies the model at the vertices and computes vertex colors. To color (or shade) a vertex, we need the normal at the vertex, a set of material properties, and the light sources that illuminate that vertex. With this information, OpenGL can compute a color for the entire polygon. For a flat polygon, we can simply assign a normal to the first vertex and let OpenGL use the computed vertex color for the entire face, a technique called flat shading. If we want the polygon to appear curved, we can assign different normals to each vertex, and OpenGL will then interpolate the computed vertex colors across the polygon. This latter method is called smooth or interpolative shading. For objects composed of flat polygons, flat shading is more appropriate.
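To make the model concrete, here is an illustrative sketch of the kind of per-vertex calculation described above. OpenGL performs its own version of this work internally; the vec3 type, the dot3 helper, and phong_intensity are hypothetical names, and ka, kd, and ks stand for the ambient, diffuse, and specular coefficients of one color component.

/* Illustrative sketch only; not part of the OpenGL API. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
	return a.x*b.x + a.y*b.y + a.z*b.z;
}

/* N, L and V are unit vectors at the vertex being shaded. */
static float phong_intensity(vec3 N, vec3 L, vec3 V,
	float ka, float kd, float ks, float shininess)
{
	float nl = dot3(N, L);
	vec3  R;			/* direction of a perfect reflector */
	float rv, diffuse, specular;

	R.x = 2.0f*nl*N.x - L.x;
	R.y = 2.0f*nl*N.y - L.y;
	R.z = 2.0f*nl*N.z - L.z;

	diffuse  = kd * (nl > 0.0f ? nl : 0.0f);
	rv       = dot3(R, V);
	specular = ks * (float) pow(rv > 0.0f ? rv : 0.0f, shininess);

	return ka + diffuse + specular;	/* repeated for red, green and blue */
}

The same computation is repeated for each color component of each enabled light, using the coefficients supplied through glMaterialfv and glLightfv.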

Let's again use the cube with the vertex numbering in Figure 3. We use the function quad to describe the faces in terms of the vertices:

Listing 1: quad.c

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, 
	{-1.0,1.0,-1.0}, {1.0,1.0,-1.0},{-1.0,-1.0,1.0}, 
	{1.0,-1.0,1.0},{-1.0,1.0,1.0},{1.0,1.0,1.0}};

void quad(int a, int b, int c, int d)
{
		glBegin(GL_QUADS);
			glVertex3fv(vertices[a]);
			glVertex3fv(vertices[b]);
			glVertex3fv(vertices[c]);
			glVertex3fv(vertices[d]);
		glEnd();
}


Figure 3. Cube Vertex Labeling.

To flat shade our cube, we make use of six normal vectors, each of which points outward from one of the faces. Here is the modified cube function:

Listing 2: Revised cube.c with normals for shading

GLfloat face_normals[6][3] = {{-1.0,0.0,0.0},{0.0,-1.0,0.0},
	{0.0,0.0,-1.0},{1.0,0.0,0.0},{0.0,1.0,0.0},{0.0,0.0,1.0}};

void cube()
{
	glNormal3fv(face_normals[2]);
	quad(0, 2, 3, 1);
	glNormal3fv(face_normals[4]);
	quad(2, 6, 7, 3);
	glNormal3fv(face_normals[0]);
	quad(0, 4, 6, 2);
	glNormal3fv(face_normals[3]);
	quad(1, 3, 7, 5);
	glNormal3fv(face_normals[5]);
	quad(4, 5, 7, 6);
	glNormal3fv(face_normals[1]);
	quad(0, 1, 5, 4);
}

Now that we have specified the orientation of each face, we must describe the light source(s) and the material properties of our polygons. We must also enable lighting and the individual light sources. Suppose that we require just one light source. We can both describe it and enable it within myinit. OpenGL allows each light source to have separate red, green, and blue components, and each light source consists of independent ambient, diffuse, and specular sources. Each of these sources is configured in a similar manner. For our example, we will assume our cube consists of purely diffuse surfaces, so we need only worry about the diffuse components of the light source. Here is a myinit for a white light and a red surface:

Listing 3: Revised myinit.c with lights and materials

void myinit()
{
	GLfloat mat_diffuse[]={1.0, 0.0, 0.0, 1.0};
	GLfloat light_diffuse[]={1.0, 1.0, 1.0, 1.0};
	GLfloat light0_pos[4] = { 0.5, 1.5, 2.25, 0.0 };

	glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
	glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);

	/* define material properties for front face of all polygons */

	glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);

	glEnable(GL_LIGHTING);		/* enable lighting */
	glEnable(GL_LIGHT0);			/* enable light 0 */
	glEnable(GL_DEPTH_TEST);	/* enable hidden-surface removal */
	glClearColor(1.0, 1.0, 1.0, 1.0);

}

Both the light source and the material have RGBA components. The light source has a position in four-dimensional homogeneous coordinates. If the last component is one, then the source is a point source located at the position given by the first three components. If the fourth component is zero, the source is a distant parallel source and the first three components give its direction. This location is subject to the same transformations as are the vertices of geometric objects. Figure 4 shows the resulting image.
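For example, to turn the distant source of Listing 3 into a point source at the same coordinates, we need only change the fourth component; a small sketch, using the same glLightfv call as above:

GLfloat light0_point[4] = { 0.5, 1.5, 2.25, 1.0 };	/* w = 1.0: a point source */

glLightfv(GL_LIGHT0, GL_POSITION, light0_point);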


Figure 4. Red Cube with Diffuse Reflections.

Texture Mapping

While the capabilities of graphics systems are measured in the millions of shaded polygons per second that can be rendered, the detail needed in animations can require much higher rates. As an alternative, we can "paint" the detail on a smaller number of polygons, much like a detailed label is wrapped around a featureless cylindrical soup can. Thus, the complex surface details that we see are contained in two-dimensional images, rather than in a three-dimensional collection of polygons. This technique is called texture mapping and has proven to be a powerful way of creating realistic images in applications ranging from games to movies to scientific visualization. It is so important that the required texture memory and mapping hardware are a significant part of graphics hardware boards.

OpenGL supports texture mapping through a separate pixel pipeline that processes the required maps. Texture images (arrays of texture elements or texels) can be generated either from a program or read in from a file. Although OpenGL supports one through four-dimensional texture mapping, to understand the basics of texture mapping we shall consider only two-dimensional maps to three-dimensional polygons as in Figure 5.


Figure 5. Texture Mapping a Pattern to a Surface.

We can regard the texture image as continuous with two-dimensional coordinates s and t. Normally, these coordinates range over (0,1), with the origin at the bottom-left corner of the image. If we wish to map a texture image to a three-dimensional polygon, then the rasterizer must match a point on the polygon with both a point in the frame buffer and a point on the texture map. The first map is defined by the various transformations that we discussed in Part 1. We determine the second map by assigning texture coordinates to vertices and allowing OpenGL to interpolate intermediate values during rasterization. We assign texture coordinates via the function glTexCoord, which sets the current texture coordinate as part of the graphics state.

Consider the example of a quadrilateral. If we want to map the entire texture to this polygon, we can assign the four corners of the texture to the vertices:

Listing 4: Assigning texture coordinates

glBegin(GL_QUADS);
   glTexCoord2f(0.0, 0.0);
   glVertex3fv(a);
   glTexCoord2f(1.0, 0.0);
   glVertex3fv(b);
   glTexCoord2f(1.0, 1.0);
   glVertex3fv(c);
   glTexCoord2f(0.0, 1.0);
   glVertex3fv(d);
glEnd();

Figure 6 shows a checkerboard texture mapped to our cube. If we assign the texture coordinates over a smaller range, we will map only part of the texture to the polygon, and if we change the order of the texture coordinates, we can rotate the texture map relative to the polygon. For polygons with more vertices, the application program must decide on the appropriate mapping between vertices and texture coordinates, which may not be easy for complex three-dimensional objects. Although OpenGL will interpolate the given texture map, the results can appear odd if the texture coordinates are not assigned carefully. The task of mapping a single texture to an object composed of multiple polygons in a seamless manner can be very difficult, not unlike the real-world difficulty of wallpapering curved surfaces with patterned rolls of paper.
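For instance, here is a sketch of mapping only the lower-left quarter of the texture to the same quadrilateral; the vertex arrays a through d are those of Listing 4:

glBegin(GL_QUADS);
   glTexCoord2f(0.0, 0.0);
   glVertex3fv(a);
   glTexCoord2f(0.5, 0.0);
   glVertex3fv(b);
   glTexCoord2f(0.5, 0.5);
   glVertex3fv(c);
   glTexCoord2f(0.0, 0.5);
   glVertex3fv(d);
glEnd();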

Like other OpenGL features, texture mapping first must be enabled (glEnable(GL_TEXTURE_2D) for two-dimensional textures). Although texture mapping is a conceptually simple idea, we must also specify a set of parameters that control the mapping process. The major practical problems with texture mapping arise because the texture map is really a discrete array of pixels that often come from images. How these images are stored can be hardware and application dependent. Usually, we must specify explicitly how the texture image is stored (bytes/pixel, byte ordering, memory alignment, color components). Next, we must specify how the mapping in Figure 5 is to be carried out. The basic problem is that we want to color a point on the screen, but this point, when mapped back to texture coordinates, normally does not land on an s and t corresponding to the center of a texel. One simple technique is to have OpenGL use the closest texel. However, this strategy can lead to a lot of jaggedness (aliasing) in the resulting image. A slower alternative is to have OpenGL average a group of the closest texels to obtain a smoother result. These options are specified through the function glTexParameter. Another issue is what to do if the value of s or t is outside the interval (0,1). Again using glTexParameter, we can either clamp the values at 0 and 1 or use the range (0,1) periodically.

The most difficult issue is one of scaling. A texel, when projected onto the screen, can be either much larger than a pixel or much smaller. If the texel is much smaller, then many texels may contribute to a pixel but must be averaged to a single value. This calculation can be very time consuming and yields the color of only a single pixel. OpenGL supports a technique called mipmapping that allows a program to start with a single texture array and form a set of smaller texture arrays that are stored. When texture mapping takes place, the appropriate array (the one that best matches the size of a texel to a pixel) is used. The following code sets up a minimal set of options for a texture map and defines a checkerboard texture.

Listing 5: Minimal texture map setup in myinit.c

void myinit()
{
	GLubyte image[64][64][3];
	int i, j, c;

	/* Create an 8 x 8 checkerboard image of black and white texels */
	for(i=0;i<64;i++) for(j=0;j<64;j++)
	{
		c = (((i&0x8)==0)^((j&0x8)==0))*255;
		image[i][j][0] = (GLubyte) c;
		image[i][j][1] = (GLubyte) c;
		image[i][j][2] = (GLubyte) c;
	}

	glEnable(GL_DEPTH_TEST);	/* enable hidden-surface removal */
	glClearColor(1.0, 1.0, 1.0, 1.0);

	glEnable(GL_TEXTURE_2D);	/* enable texture mapping */
	glTexImage2D(GL_TEXTURE_2D, 0, 3, 64, 64, 0, GL_RGB,
		GL_UNSIGNED_BYTE, image);	/* assign image to texture */

	/* required texture parameters */
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}

First we create a 64 x 64 image to be used for our texture. We enable texture mapping and then pick image as the texture map. The other parameters in glTexImage2D give the size of the texture map, specify how it is stored, and indicate that it will be applied to the red, green, and blue components. The four calls to glTexParameterf are required to specify how values of s and t outside (0,1) are to be handled (clamped) and that we are willing to use the nearest texel (for speed) rather than a filtered value. Figure 6 shows the cube with both texture mapping and shading. Note that in this mode texture mapping modifies the color determined by the shading calculation. Alternatively, we can have the texture completely determine the color (decaling).
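If we preferred the filtered, mipmapped alternative described earlier, a sketch of what could replace the glTexImage2D call and the two filter parameters in Listing 5 looks like this; gluBuild2DMipmaps is part of the GLU library, which must also be linked:

gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 64, 64, GL_RGB,
      GL_UNSIGNED_BYTE, image);	/* build and load the full set of smaller maps */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
      GL_LINEAR_MIPMAP_LINEAR);	/* filter within and between mipmap levels */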


Figure 6. Texture Mapped Cube.

Clients and Servers

In many applications we must convey three-dimensional graphical information over a network, either to show the graphics remotely or to make use of hardware available on a remote machine. Web applications often fall into this category. Although we could send two-dimensional images across a network, the volume of data can be huge compared to the compact description provided by three-dimensional geometry. Furthermore, in distributed interactive applications that involve manipulating large graphical databases, we would prefer not to have to send the database over the network repeatedly in response to small changes, such as a change in viewing parameters.

In OpenGL, we regard the hardware with the display and the rendering engine as a graphics server and the program that defines and controls the graphics as a client. In what is called immediate-mode graphics, entities are sent to the graphics server as soon as they are defined in a program, and there is no memory of these entities in the system. We used this mode in our sample cube programs. To redisplay the cube, we had to reexecute the code defining it. Thus, when we rotated the cube, we had to both alter the model-view matrix and reexecute the cube code defining its surfaces. In retained-mode graphics, graphical entities are defined and placed in structures called display lists, which are kept in the graphics server. For example, if we want to define a quadrilateral, store it on the server, and display it, we wrap its description between a glNewList and a glEndList:

Listing 6: Display list for a quadrilateral

glNewList(myQuad, GL_COMPILE_AND_EXECUTE);
  glBegin(GL_QUADS);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
    glVertex3fv(d);
  glEnd();
glEndList();

In this example, myQuad is an integer identifier for our retained object, and the flag GL_COMPILE_AND_EXECUTE indicates that we want both to define the list and to render it immediately. If we only want to place a display list on the server without rendering it, we use the flag GL_COMPILE. Most OpenGL functions can appear inside a display list, as can other code. Similarly, to put a cube on the server, we need only surround any of our previous cube code with a glNewList and a glEndList. We might also use display lists to put a character set on the server. In general, any objects we intend to redisplay are good candidates for display lists.
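For example, here is a minimal sketch that places our cube from Listing 2 on the server as a display list named myCube; glGenLists simply reserves an unused list identifier:

GLuint myCube;

myCube = glGenLists(1);	/* reserve one unused display-list name */
glNewList(myCube, GL_COMPILE);	/* compile the list without drawing it */
   cube();	/* the cube function from Listing 2 */
glEndList();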

Once a display list is on the server, we can have it rendered by a command such as

glCallList(myCube);

Suppose that we wish to rotate the object and then redisplay it, as in our rotating cube example from Part 1. Within the display callback we would see code such as

glRotatef(theta, axis_x, axis_y, axis_z);
glCallList(myCube);

In terms of the traffic between the client program and the graphics server, we would be sending only the rotation parameters and the function calls, but not the object, as it is already stored on the server. If we did the same with a complex object containing thousands of vertices, then once the object was placed on the server, further manipulation would require no more network traffic than the manipulation of a single quadrilateral or our cube.

OpenGL plays a vital, but often invisible, role in three-dimensional Web applications using VRML (Virtual Reality Modeling Language). VRML is based on the Open Inventor database model. Open Inventor is an object-oriented graphics system built on top of OpenGL's rendering capabilities. VRML applications are client-server based, with the rendering done on the client end. Thus, a VRML browser must be able to render databases that contain geometry and attributes that look like OpenGL entities. Consequently, an obvious way to build a VRML browser is as an OpenGL application in which the browser places the contents of the VRML database on the graphics server as display lists.

Buffers

OpenGL provides access to a variety of buffers. These include:

  • Color buffers (including the frame buffer).
  • The depth buffer.
  • The accumulation buffer.
  • The stencil buffer.

Many of these buffers have conventional uses, such as the frame buffer and the depth buffer, but all of them can be read from and written into by user programs, so their uses are essentially unlimited. In addition, the OpenGL architecture contains a variety of tables associated with these buffers and a variety of tests that can be performed on data as they are read from or written into the buffers. Note that these buffers are part of the graphics system; in an implementation in which the buffers and the associated tables are in dedicated hardware, once data have been moved into them we can achieve extremely high data rates because the system processor and bus are no longer involved.

For each type of buffer, we shall consider a few possible uses without going into coding details. We have seen that we can use multiple color buffers for double buffering. We can also use them for stereo viewing by rendering the left- and right-eye views into LEFT and RIGHT buffers and synchronizing their display with special glasses that alternate presenting the images to the left and right eyes. For stereo animations, we can use four color buffers: FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, and BACK_RIGHT. More generally, we can use color buffers that are not being displayed to enhance the power of the geometric pipeline. For example, shadows are projections of objects from the perspective of the light source. We can do an off-screen rendering with the camera at the light source into one of these buffers and then carefully composite this image with the standard rendering. Another application is to render each object in a different color into an off-screen buffer. We can use the location of the mouse to point into this buffer, and the color read at this location is an identifier for the object, a simple way of doing interactive object selection or picking.
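Here is a sketch of the picking idea, assuming the scene has already been rendered into the undisplayed back buffer with each object in a unique color; x and y are assumed to be the mouse coordinates, converted to OpenGL's bottom-left origin:

int x, y;		/* mouse coordinates supplied by the windowing system */
GLubyte pixel[3];

glReadBuffer(GL_BACK);	/* read from the off-screen buffer */
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
/* pixel[0..2] now holds the identifying color of the picked object */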

The depth buffer is used in conjunction with many of the applications of the color buffers. One interesting use is for combining translucent and opaque polygons in a single scene. In our example of blending in Part 1, we turned off hidden-surface removal because all the polygons were translucent. If some of the polygons are opaque, unless the user program sends the polygons down the pipeline in the correct order, no combination of enabling hidden-surface removal and blending will produce a reasonable image. Consider what happens if we render opaque polygons first, and then make the depth buffer read-only for translucent polygons. Translucent polygons behind opaque polygons will be hidden, while those in front will be blended.
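A sketch of this ordering, where drawOpaque and drawTranslucent are hypothetical routines that send down the respective polygons and glDepthMask makes the depth buffer read-only:

glEnable(GL_DEPTH_TEST);
drawOpaque();	/* opaque polygons first, with the depth buffer writable */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);	/* make the depth buffer read-only */
drawTranslucent();	/* blended, but still hidden behind opaque surfaces */
glDepthMask(GL_TRUE);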

Color buffers typically have limited resolution, such as one byte per color component. Consequently, doing arithmetic calculations with color buffers can be subject to loss of color resolution. The accumulation buffer has sufficient depth that we can add multiple images into it without losing resolution. Obvious uses of such a buffer include image compositing and blending. To composite n images, we can add them individually into the accumulation buffer, scaling each color value by 1/n, and then read out the result. If we had tried to add these images into the frame buffer, we would have risked overflowing the color values, which are typically stored with 8 bits per component. If we tried to scale the colors before we added them into the frame buffer, we would have lost most, if not all, of our color resolution. For example, if n=8, we could lose three bits per color component. With the accumulation buffer, we trade an increase in compositing time for the preservation of color resolution.
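A sketch of compositing n images this way, where drawImage(i) is a hypothetical routine that renders the ith image into the color buffer and the accumulation buffer was requested when the window was created:

int i;
int n = 4;	/* hypothetical number of images to composite */

glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < n; i++) {
   drawImage(i);	/* render image i into the color buffer */
   glAccum(GL_ACCUM, 1.0 / n);	/* add it, scaled by 1/n, to the accumulation buffer */
}
glAccum(GL_RETURN, 1.0);	/* copy the composite back to the color buffer */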

Less obvious, but easy to implement, applications of the accumulation buffer include digital filtering of images, scene antialiasing, depth-of-field images, and motion blur. Consider, for example, the antialiasing problem for polygons. As polygons are rasterized, the rasterizer computes small elements called fragments, which are at most the size of a pixel. Each fragment is assigned a color that can determine the color of the corresponding pixel in the frame buffer. Generally, if a fragment is small, or if fragments from multiple polygons lie on the same pixel, we will see jagged images when we make binary decisions as to whether or not a given fragment completely determines the color of a pixel. One solution is to use the alpha channel we discussed in Part 1 to allow small amounts of color from multiple fragments to blend together. Unfortunately, this method can be very slow. An alternative is to render the scene multiple times into the accumulation buffer, each time with the viewer shifted very slightly. Each image will contain slightly different aliasing artifacts that will be averaged out by the accumulation process.

The stencil buffer allows us to draw pixels based on the corresponding values stored in it. Thus, we can create masks in the stencil buffer that we can use to do things such as creating windows into scenes or placing multiple images in different parts of the frame buffer. What makes this buffer more interesting is that we can change its values as we render. This capability allows us to write programs that can determine whether an object is in shadow, or color objects differently if they are sliced by a plane.
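As an illustration, here is a sketch of using the stencil buffer as a window into a scene; drawMaskShape and drawScene are hypothetical routines, and the window must have been created with a stencil buffer:

glEnable(GL_STENCIL_TEST);

/* Pass 1: write 1s into the stencil buffer where the mask shape is drawn */
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);	/* do not touch the color buffer */
drawMaskShape();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Pass 2: draw the scene only where the stencil buffer contains 1 */
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawScene();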

Performance Tuning

OpenGL supports a well-defined architecture and as such can be implemented in hardware, software, or a combination of the two. Because there are both geometric and discrete pipelines, not only is there a wide range of implementation strategies, but where bottlenecks arise depends on both the application and how the programmer chooses to use the OpenGL architecture. Consequently, it is difficult to offer "one size fits all" solutions to performance problems. We can, however, survey a few possibilities.

Defining geometric objects with polygons is simple but can lead to many function calls. For example, our cube with vertex colors, normals and texture coordinates required 108 OpenGL function calls: six faces each requiring a glBegin and glEnd, four vertices per face, each requiring a glVertex, glColor, glNormal and glTexCoord. One way to avoid this problem if the object is to be drawn multiple times is to use display lists. Another is to use vertex arrays, a feature added in OpenGL 1.1. In myinit, we can now set up and enable arrays that contain all the required information (colors, normals, vertices, texture coordinates). A single OpenGL call

glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);

will render the six quadrilateral faces whose vertex indices are stored as unsigned bytes in the array cubeIndices; this array contains the 24 indices in the order the vertices appear in our cube function above.
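A sketch of the corresponding setup in myinit, assuming that colors, normals, and texCoords are arrays laid out per vertex just as vertices is; all of these calls are part of OpenGL 1.1:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, vertices);	/* 3 floats per vertex, tightly packed */
glColorPointer(3, GL_FLOAT, 0, colors);
glNormalPointer(GL_FLOAT, 0, normals);	/* normals always have 3 components */
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);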

In the geometric pipeline, lighting calculations can be very expensive, especially if there are multiple sources, because we must do an independent calculation for each source. A large part of the computation is that of the vectors in the Phong model. If a polygon is small relative to its distance from the viewer, the view vector will not change significantly over the face of the polygon. If we use

glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_FALSE);

we allow the implementation to take advantage of this situation. We can also tell OpenGL whether we wish light calculations to be done on only one side of surface through

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);

We can also automatically eliminate (or cull) all polygons that are not facing the viewer by enabling culling:

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

Texture calculations can also be very time consuming and are subject to aliasing problems. OpenGL allows us to decide whether we want to filter a texture to get a smoother image or just use the closest texel, which is faster. Perspective projections can be a problem for texture mapping because the interpolation is more complex. In many situations, either the error is small enough that we do not care about it or the image is changing so rapidly that we cannot notice the error. In these situations, we can tell OpenGL not to correct for perspective.
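A sketch of the relevant call; GL_FASTEST permits uncorrected interpolation, while GL_NICEST requests full perspective correction:

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);	/* allow cheaper texture interpolation */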

More generally, we have to worry about both polygons and pixels, which pass through two very different pipelines. Depending on both the implementation and the application, either pipeline can be the bottleneck. Often, performance tuning involves deciding which algorithm best matches the hardware and making creative use of the many features available in OpenGL. With OpenGL supported directly in the hardware of many new graphics cards, many graphics applications programmers are rethinking how to create images. For example, with the large amounts of texture memory included on these cards, in many situations we can generate detail through textures rather than through geometry. Often, we can also avoid lighting calculations by storing some carefully chosen textures.

Conclusion

With over 200 functions in the API, we have only scratched the surface of what we can do with OpenGL. The major omission in these two articles is how we can define various types of curves and surfaces. Nevertheless, you should have a fair idea of the range of functionality supported by the OpenGL architecture. In the future, the CAD and animation communities will not only use OpenGL as their standard API but also start making use of features particular to OpenGL, such as the accumulation and stencil buffers.

The advantages of OpenGL are many. It is close to the hardware but still easy to use for writing application programs. It is portable and supports a wide variety of features. It is the only graphics API that I have seen in my 15 years in the field that is used by animators, game developers, CAD engineers, and researchers on supercomputers. Personally, I routinely use OpenGL on a PowerMac 6100, a PowerBook, an SGI Infinite Reality Engine, and a variety of PCs, rarely having to change my code when moving among these systems. There is not much more that I can ask of an API.

Sources and URLs

OpenGL is administered by an Architecture Review Board. The two major sources of on-line information on OpenGL are the OpenGL organization (http://www.opengl.org) and Silicon Graphics Inc. (http://www.sgi.com/Technology/OpenGL). You can find pointers to code, FAQs, standards documents, and literature at these sites. I keep the sample code from my book at ftp://ftp.cs.unm.edu under pub/angel/BOOK.

OpenGL is available for most systems. For Mac users, there is an implementation from Conix Enterprises (http://www.conix3d.com) that includes support for GLUT and for hardware accelerators. There is a free OpenGL-like API called Mesa (http://www.ssec.wisc.edu/~brianp/Mesa.html) that can be compiled for most systems, including Linux, and will run almost all OpenGL applications. A Linux version is available from Metro Link (http://www.metrolink.com). You can obtain the code for GLUT and many examples at http://reality.sgi.com/opengl/glut3/glut3.html.

Bibliography and References

  • Angel, Edward. Interactive Computer Graphics: A top-down approach with OpenGL. Addison-Wesley, Reading, MA, 1997.
  • OpenGL Architecture Review Board. OpenGL Reference Manual, Second Edition. Addison-Wesley, 1997.
  • Kilgard, Mark. OpenGL Programming for the X Window System. Addison-Wesley, 1996.
  • Neider, Jackie, Tom Davis, and Mason Woo. The OpenGL Programming Guide, Second Edition. Addison-Wesley, Reading, MA, 1997.
  • Wright, Richard Jr., and Michael Sweet. The OpenGL SuperBible. Waite Group Press, Corte Madera, CA, 1997.

Acknowledgements

I would like to thank Apple Computer, Silicon Graphics, and Conix Enterprises for the hardware and software support that enabled me to write my OpenGL textbook.


Ed Angel is a Professor of Computer Science and Electrical and Computer Engineering at the University of New Mexico. He is the author of Interactive Computer Graphics: A top-down approach with OpenGL (Addison-Wesley, 1997). You can find out more about him at http://www.cs.unm.edu/~angel or write him at angel@cs.unm.edu.

 
