Bitmap Graphics

Volume Number: 14 (1998)
Issue Number: 8
Column Tag: Yellow Box

Bitmap Graphics

by R. D. Warner
Edited by Michael Rutman

Demonstrated in a Ray Tracing Algorithm Code Snippet

Overview -- Why Use Bitmaps?

There are times when the programmer needs control over individual pixels, plus all the speed that can be had. Rendering in 3D is one case where individual pixel values are calculated using compute-intensive algorithms. Lighting calculations are performed for each pixel (often recursively), and every little coding trick eventually gets used in the pursuit of speed.

Pswraps is one technique available to Rhapsody developers. The "wraps" are C function calls that perform one or more PostScript operations in a single interprocess message to the Window Server. So it is an efficient technique, and it is possible to write to individual pixels...sort of. For instance, one can define a rectangle with an area of one pixel and the appropriate color, and draw that. This means one function call to draw each pixel. Display PostScript is not really geared to individual pixel manipulation.
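
As a rough sketch of how a wrap is built (the wrap name PSWDrawGrayPixel is invented for this example), one declares it in a .psw file and runs it through the pswrap tool, which emits an ordinary C function:

   defineps PSWDrawGrayPixel(float x, y; float gray)
      gray setgray
      x y 1 1 rectfill
   endps

The generated function is then called once per pixel from the application code, e.g. PSWDrawGrayPixel(col, row, aGray); which is exactly why this approach becomes expensive for large images.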

A slightly lower-level approach is to use bitmaps. A data buffer is populated according to the color scheme in use (monochrome, RGB, CMYK, HSB) and other factors, such as whether or not the alpha channel is used. That data is then rendered all at once to whatever NSView has focus locked. One would expect a performance boost from this approach, and indeed that is the case. The pswraps code is simpler to use, but slower.

From games to medical imaging to paint programs, the need for high-speed pixel manipulations is the driving force in the decision to use bitmaps. This article uses a simple 3D rendering application as test code for comparing performance of a pswraps implementation against a bitmap.

Included in this article is a code snippet from Physics Lab. It has been ported to the Rhapsody environment, but the code should basically work fine in OPENSTEP 4.2 as well. This program uses a ray tracing algorithm to render a typical scene composed of various graphics primitives (such as spheres, planes, cylinders, etc.) into an NSView object. There are many algorithms for achieving shading effects on 3D objects, and ray tracing is one of the most popular for achieving photorealistic lighting effects. Rendering is the term used for the process of generating the 3D scene based on mathematical models. The NSView object is one of several in the Rhapsody AppKit that can display an image on the screen.

The application also includes a second NSView alongside the first where a custom renderer, such as a subclassed ray tracer, can be displayed (disabled in the screen shots). This is useful in visualization projects where one wants to see how something would be seen "normally" plus how it would appear using the custom viz algorithms. Specifically, this will be used for the visualization of field phenomena such as gravity -- one view displays the collection of objects as they would normally be seen with visible light, and the other view will make a visible representation of the field interactions between and inside the objects. (See Figures 1-3)

One will learn something about NSImage classes in this article too. They support bitmaps plus many other types of image representations. In fact it is necessary to understand them before advancing further.

Figure 1. Main window in Physics Lab with second NSView disabled.

Figure 2. Window for orienting the view volume.

Figure 3. Preferences Panel.

NSImage Classes

Description

An NSImage is a lightweight object compared to an NSView. An NSImage can draw directly to the screen, or be used (as it is in this code) as a container for NSImageRep subclass objects. The container composites the image to an NSView...which draws to the screen. There are a number of configurations for using the various view-related objects in AppKit, so this is only one approach. This approach is especially useful if one has multiple NSImage objects drawing to different areas within an NSView. Think of a collage-type scene where individual images (each represented by an NSImage) are moved around over time within the rectangle defined by the NSView.

The NSImage contains an NSMutableArray object that can contain numerous NSImageRep subclass objects. NSMutableArray replaces the NXList found in NEXTSTEP 3.3. It is a dynamic array for containing groups of objects. Each representation is for the same image. For example, one can have representations that are at various resolution levels, or using different formats such as TIFF and EPS. Different color schemes can be used as mentioned earlier and one can even make custom representations to render images from other types of source information.

A representation can be initialized in a variety of ways, somewhat dependent on the representation type. The most straightforward ways are to initialize the NSImage from a file (typically in the application bundle) or from an NSData/NSMutableData object created dynamically, and let it create the appropriate representations (which it manages), based on the header information in the data. The image can also come from the pasteboard, from an existing NSView, or raw bitmap data. Reading in a file of an unsupported format type requires either a custom representation or a user-installed filter service.

The real magic of the NSImage class comes from its ability to select the best image for the current display device from its list of representations. This selection logic is somewhat customizable by the programmer: several flags can be set to modify the selection criteria and priorities (see the list below and the short sketch that follows it). The algorithm naturally can select only from the NSImageReps that are managed by the NSImage in question, so the selection process can also be controlled indirectly by the types of representations added. The steps in the selection algorithm below are followed only until the first match is established.

Selection Algorithm

  1. Choose a color representation for a color device, and gray-scale for a monochrome device.
  2. Choose a representation with a matching resolution or if there is no match, then choose the one with the highest resolution. Note that setPrefersColorMatch:NO will cause the NSImage to try a resolution match before a color match.
  3. Resolutions that are multiples of the device resolution are considered matches by default. Choose the one closest to the device. Note that setMatchesOnMultipleResolution:NO will cause only exact matches to be considered.
  4. The resolution matching discriminates against EPS representations since they do not have defined resolutions. Setting setUsesEPSOnResolutionMismatch:YES will cause the NSImage to select an EPS representation (if one exists) when no exact resolution match can be found.
  5. Choose the representation whose bits per sample matches the depth of the device. Note that the number of samples per pixel may differ from the device (an RGB no-alpha color representation has three samples per pixel, while a monochrome monitor would have one sample per pixel, for example).
  6. Choose the representation with the highest bits per sample.
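
A minimal sketch of setting these flags (anImage is assumed to be an NSImage that already manages several representations):

   [anImage setPrefersColorMatch:NO];            //Try a resolution match first.
   [anImage setMatchesOnMultipleResolution:NO];  //Accept exact resolution matches only.
   [anImage setUsesEPSOnResolutionMismatch:YES]; //Fall back to an EPS rep if none match.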

Abbreviated Method Quick Reference (based on online documentation)

-(id)initWithContentsOfFile:(NSString*)filename

Initializes the NSImage with the contents of filename and reads it at this time. If it successfully creates one or more image representations, it returns self. Otherwise the receiver is released and nil is returned.

-(id)initWithData:(NSData*)data

Initializes the NSImage with the contents of data. If it successfully creates one or more image representations, it returns self. Otherwise the receiver is released and nil is returned.
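
For example, one might initialize an NSImage from a TIFF file shipped in the application bundle (the resource name scene.tiff is hypothetical):

   NSString* path = [[NSBundle mainBundle] pathForResource:@"scene" 
      ofType:@"tiff"];
   NSImage* anImage = [[NSImage alloc] initWithContentsOfFile:path];
   if (anImage == nil)
      NSLog(@"Could not create any representations from %@", path);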

-(void)addRepresentation:(NSImageRep*)imageRep

Adds imageRep to receiver's list of managed representations. Any representation added in this way is retained by the NSImage, and released when it is removed.

-(void)removeRepresentation:(NSImageRep*)imageRep

Removes and releases imageRep.

-(NSArray*)representations   

Returns an array of all the representations managed by the NSImage.

-(void)lockFocus
-(void)lockFocusOnRepresentation:(NSImageRep*)imageRep
-(void)unlockFocus

Using the approach outlined in this code one does not have to bother with lockFocus/unlockFocus pairs. This is because the NSImage performs its drawing to the NSView from within the NSView's own drawRect: method. That method is called indirectly by the NSView when it receives a display message from within the code. Focus is automagically locked on the NSView as part of this whole process, so it does not need to be done by the programmer.

These methods are mentioned here because one will use them in certain circumstances. If the NSImage draws directly to the screen instead of to an NSView, these may be needed. To force the NSImage to determine its best representation, or to test in advance that an NSImage can actually interpret its image data, one may need to lock the focus. In the last case, if the NSImage cannot understand the data it was given, it will raise an exception when it fails to lock focus; the exception can be trapped by the code to determine that the image file was the wrong format, garbled, or in general not displayable.

Exceptions are a programming technique within the OPENSTEP/Rhapsody environment that effectively take the place of error return codes and code for checking the return codes. See the documentation for NSException objects for more details. The act of locking focus does a variety of things but conceptually it determines what object will be rendering the image information to the screen, and it establishes the coordinate system that will be used to position the images.
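
Here is a hedged sketch of that validation idea using the Foundation exception macros (anImage is assumed to exist, and imageIsUsable is just a local flag for this example):

   BOOL imageIsUsable = NO;

   NS_DURING
      [anImage lockFocus];     //Forces the image data to be interpreted.
      [anImage unlockFocus];
      imageIsUsable = YES;
   NS_HANDLER
      //Raised when the data cannot be interpreted (wrong format, garbled, etc.).
      imageIsUsable = NO;
   NS_ENDHANDLER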

-(void)compositeToPoint:(NSPoint)aPoint 
      operation:(NSCompositingOperation)op

The aPoint argument specifies, in the coordinate system of the focused NSView, where the lower-left corner of the NSImage will be placed when it is composited. The op argument can take one of (currently) fifteen enumerated values. NSCompositeCopy is the common choice if one simply wants to copy the NSImage into the NSView. If the alpha channel is used, NSCompositeSourceOver lays the source over the destination with the specified opacity. NSCompositeClear clears or erases the affected destination area. NSCompositeXOR performs a logical exclusive-OR between the source and destination data.

-(void)dissolveToPoint:(NSPoint)aPoint 
      fraction:(float)aFloat

Composites the image to the location specified by aPoint, but it uses the dissolve operator. The aFloat argument ranges between 0.0 and 1.0 and indicates how much of the resulting composite will be derived from the NSImage. A slow dissolve can be accomplished by repeatedly using this method with an ever-increasing fraction until it reaches 1.0. Note that each dissolve operation must be performed between the source data and the original destination data, not the cumulative result of each dissolve (keep a copy of the original data somewhere and do the dissolve operation off-screen -- then flush it to the screen). Note that a slow dissolve is not possible on a printer, but one can do a single dissolve operation and print the results of that.
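
A rough sketch of the loop structure for a slow dissolve follows; it assumes focus is already locked on the destination view (destView), that sourceImage and theOriginPt are set up, and that, per the caveat above, the original destination image is restored before each step:

   float fraction;
   for (fraction = 0.1; fraction <= 1.0; fraction += 0.1)
   {
      //Restore the original destination here (e.g. composite a cached copy
      //back in) so each step dissolves against the original, not the last result.
      [sourceImage dissolveToPoint:theOriginPt fraction:fraction];
      [[destView window] flushWindow];   //Push each step to a buffered window.
   }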

-(void)setFlipped:(BOOL)flag
-(BOOL)isFlipped

When working with raw bitmap data one may find it necessary to flip the coordinate system y-axis. Thus (0,0) is in the upper-left corner instead of the lower-left corner. It depends how the algorithm generates the data, but if images are being displayed on the screen upside down, this is how to fix it:

setFlipped:YES

-(void)setSize:(NSSize)aSize
-(NSSize)size

With raw data in particular, one wants to explicitly make the image representation (since the structure of the data needs to be described), then add the representation to the list of managed representations for the NSImage. Contrast this approach to the situation where one initializes an NSImage with the contents from a file, and the appropriate representation(s) are created automatically. When working with raw bitmap data the image representation must manually be created and told how many bytes per pixel there are, whether or not there is an alpha channel, etc. Since the NSImage did not create the image representation it does not know the size of the image. Use setSize: to tell it.

-(NSData*)TIFFRepresentation

Returns a data object with the data in TIFF format for all representations, using their default compressions.
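
For instance, an image can be flattened to a TIFF file on disk in a couple of lines (the output path here is purely illustrative):

   NSData* tiffData = [myNSImage TIFFRepresentation];
   if (![tiffData writeToFile:@"/tmp/rendering.tiff" atomically:YES])
      NSLog(@"Could not write TIFF file");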

Example Uses of an NSImage

  1. Draw same image repeatedly to an NSView (offscreen cache).
  2. Have optimized images for various types of displays, or printer.
  3. Read file in one format and write to another (e.g., read bitmap, write TIFF).
  4. Draw to a bounded rectangle within an NSView.
  5. Manipulate individual pixels.
  6. Read image from an NSView, perform filtering operation, draw image back to an NSView.
  7. Draw to an NSView using a variety of logical operators other than simple copy.
  8. Make existing image in NSView dissolve into new image.

NSBitmapImageRep Classes

Description

There are currently four subclasses of the NSImageRep class: NSBitmapImageRep, NSCachedImageRep, NSCustomImageRep, and NSEPSImageRep. For manipulating raw bitmap data, use the bitmap image rep. The class has a data buffer of type: unsigned char*. There are numerous initializers, but for raw bitmap data such as that which will be generated in this example, use the initWithBitmapDataPlanes:::::::::: method. All the many arguments are used to describe the structure of the data.

It takes a lot of arguments to canonically describe raw bitmap data. For example, there is a "meshed" mode where the data color components are grouped together in memory. An RGB pixel would typically have three bytes together, one for each color component--possibly a fourth byte if the alpha channel is used. There is another mode called "planar" where each color component is grouped in a separate plane. This means in the previous example all the red bytes would come first, then the green, then the blue, then the alpha.
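
To make the two layouts concrete, here is a sketch of the index arithmetic for the pixel at (row, col) in an RGB image with one byte per sample, no alpha, and no row padding (the variable names are invented for this example):

   //Meshed: the samples for one pixel are adjacent in memory (R,G,B,R,G,B,...).
   int meshedOffset = ((row * pixelsWide) + col) * 3;
   //  red = data[meshedOffset], green = data[meshedOffset + 1],
   //  blue = data[meshedOffset + 2]

   //Planar: each component occupies its own plane of pixelsWide * pixelsHigh bytes.
   int planeSize    = pixelsWide * pixelsHigh;
   int planarOffset = (row * pixelsWide) + col;
   //  red = data[planarOffset], green = data[planeSize + planarOffset],
   //  blue = data[(2 * planeSize) + planarOffset]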

Other arguments define the width and height of the image in pixels, plus the number of bits per pixel and the number of bytes per row. There are cases where the bytes per row differ from what the pixel width and bits per pixel alone would imply, such as when each row is padded out to a word boundary and the pixel data does not fill it exactly. So while it may seem to the programmer that more arguments are being provided than necessary, this is not the case. The programmer also specifies which color space is used and whether or not the alpha channel is present.

Currently, no less than nine color spaces are supported, including both calibrated (device independent) and device-specific versions. The code snippet in this article uses the RGB color space with no alpha, one byte per color component, and three bytes per pixel.

One of the arguments to the initializers is a pointer to a data buffer. This may be set to NULL. If so, the method will calculate the size of the buffer (based on the arguments given it) and allocate it. The buffer is thus owned by the instance and freed when the instance is released. The getBitmapDataPlanes: or bitmapData methods are used as accessors to get a pointer to the buffer that was allocated, so that it can be populated. The other approach is to allocate the memory for the buffer before initializing the representation. In that case one does not need an accessor to retrieve the pointer, since one already has it; the instance does not own the buffer, and it is the programmer's responsibility to explicitly free it.
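
A brief sketch of the first approach, letting the representation allocate and own its buffer (pixWide and pixHigh are assumed image dimensions; one byte per RGB sample, no alpha):

   NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
      initWithBitmapDataPlanes:NULL      //NULL: let the rep allocate its own buffer.
      pixelsWide:pixWide pixelsHigh:pixHigh
      bitsPerSample:8 samplesPerPixel:3
      hasAlpha:NO isPlanar:NO
      colorSpaceName:NSCalibratedRGBColorSpace
      bytesPerRow:(3 * pixWide) bitsPerPixel:24];
   unsigned char* buffer = [rep bitmapData];   //Pointer to the rep-owned memory.
   //...populate buffer here; it is freed when the rep is released.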

Abbreviated Method Quick Reference (based on online documentation)

+(NSArray*)imageRepsWithData:(NSData*)bitmapData

Creates and returns an array of initialized NSBitmapImageRep objects based on the images in bitmapData. Returns an empty array if the images cannot be interpreted.

-(id)initWithBitmapDataPlanes:(unsigned char**)planes 
      pixelsWide:(int)width pixelsHigh:(int)height
      bitsPerSample:(int)bps samplesPerPixel:(int)spp
      hasAlpha:(BOOL)alpha 
      isPlanar:(BOOL)planar 
      colorSpaceName:(NSString*)colorSpaceName 
      bytesPerRow:(int)rowBytes bitsPerPixel:(int)pixelBits

RGB data will have three or four planes (without or with alpha) and CMYK will have four or five. White or black color space (grey-scales with 1.0 == white or 1.0 == black, respectively) will have one or two. If isPlanar is NO then only the first plane of planes will be read. This effectively places the rep in meshed mode.

Colorspace Names

   NSCalibratedWhiteColorSpace
   NSCalibratedBlackColorSpace
   NSCalibratedRGBColorSpace
   NSDeviceWhiteColorSpace
   NSDeviceBlackColorSpace
   NSDeviceRGBColorSpace
   NSDeviceCMYKColorSpace
   NSNamedColorSpace
   NSCustomColorSpace

-(unsigned char*)bitmapData

Returns a pointer to the bitmap data or first plane if in planar configuration.

-(void)getBitmapDataPlanes:(unsigned char**)planes

The planes pointer should be an array of five character pointers. If the bitmap data is in planar configuration, each pointer will be initialized to point to one of the data planes. If there are fewer than five planes, the remaining pointers will be set to NULL.

-(NSData*)TIFFRepresentation

Returns a TIFF representation of the data, using the compression that is returned by getCompression:factor:. An NSTIFFException or NSBadBitmapParametersException may be raised if an error is encountered.

Logical Steps for Using an NSImage and NSBitmapImageRep

1) Define data buffer. See Figure 4.

   Example: unsigned char* dataBuffer = 
      (unsigned char*)calloc((pixWide*pixHigh*bytesPerPix), 
         sizeof(unsigned char));

2) Allocate and initialize an NSBitmapImageRep. See Figure 4.

   Example: myNSBitmapImageRep = [NSBitmapImageRep alloc];
      [myNSBitmapImageRep 
         initWithBitmapDataPlanes:&dataBuffer
         pixelsWide:pixWide
         pixelsHigh:pixHigh
         bitsPerSample:SAMPLE_SIZE   
         samplesPerPixel:SAMPLE_NUMBER hasAlpha:NO
         isPlanar:NO colorSpaceName:NSCalibratedRGBColorSpace
         bytesPerRow:(PIXEL_BYTES * pixWide)
         bitsPerPixel:(PIXEL_BYTES * 8)];

The bitsPerSample argument describes the number of bits per color component. In the example code to follow, this is eight bits or one byte. To allocate three bytes per pixel one would thus set bitsPerSample to eight and samplesPerPixel to three.

Figure 4. Relationship between NSImage, NSBitmapImageRep, and data buffer.

3) Allocate and initialize an NSImage. Figure 4 describes the relationship between the NSImage and its representations.

Example:

myNSImage = [[NSImage alloc] init];

4) Tell the NSImage instance what representations to use.

Example:

[myNSImage addRepresentation:myNSBitmapImageRep];

5) Set the size to match the size of the image in the bitmap representation.

Example:

[myNSImage setSize:viewRectangle.size];//NSRect struct

6) Flip the y-axis if the bitmap data calls for it. With setFlipped:YES, (0,0) is the upper-left corner rather than the lower-left; whether this is needed depends on the order in which the data buffer is generated (see the comments in the code snippet below).

Example:

[myNSImage setFlipped:YES];

7) Fill the buffer with data. Often in rendering, the color values will be a float or double triplet or quartet, with each element having values between 0.0 and 1.0. It is necessary to massage this raw data into the appropriate format for the data buffer. In the example code this means simply translating to a triplet of integers having values between 0 and 255.

Example:

color[i] *= 255;                          //Convert to unsigned int byte: Step 1
if (modf(color[i], &theIntPart) >= 0.5)   //Round to nearest int: Step 2
   theIntColor[i] = theIntPart + 1;       //Assign to type int var: Step 3a
else
   theIntColor[i] = theIntPart;           //Step 3b

8) Draw the image to the NSView with either the compositeToPoint:operation: or dissolveToPoint:fraction: methods. One caveat is that if execution is not in the drawRect: method of a particular NSView when this drawing is performed, then focus needs to be locked on a particular NSView or NSImage first. An example of this would be the case where one wants some initial startup images to appear in the application's NSViews to dazzle the users.

The drawRect: method is actually called during startup so one could still use it with some code to determine whether it was called during startup or called by the user. But... maybe information is needed from another object before doing the drawing... and since there is no guarantee as to which order objects inside the .nib file will be initialized, one cannot be certain other objects will exist yet when the NSView is being initialized. What to do?

Use a delegate of NSApp that automatically gets fired off after all objects get initialized upon startup. This delegate method would then lockFocus on the NSView to which it wants to draw, and have its NSImage object do a composite operation. The composite operation needs to know where to send the output. If the focus is locked, then unlock it after the drawing (unless an exception is raised).

Example:

      [myNSImage compositeToPoint:theOriginPt 
         operation:NSCompositeCopy];
      [myNSImage dissolveToPoint:theOriginPt fraction:delta];
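
Here is a hedged sketch of such a delegate method; theStartupView and theStartupImage are assumed to be outlets or instance variables of the delegate object:

- (void)applicationDidFinishLaunching:(NSNotification*)notification
{
   NSPoint origin = NSMakePoint(0.0, 0.0);

   [theStartupView lockFocus];      //Outside drawRect:, so lock focus manually.
   [theStartupImage compositeToPoint:origin operation:NSCompositeCopy];
   [theStartupView unlockFocus];
   [[theStartupView window] flushWindow];   //Needed for buffered windows.
}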

When and How to Use This Code Fragment

The following code fragment has been extracted from Physics Lab's RayTracerRenderer class. Relevant instance variables and global definitions are included with it. The initWithFrame: initializer is included but code has been removed from it that is not relevant to bitmaps and would complicate and obfuscate its purpose. Every effort has been made to provide a piece of code that can be hooked into your own project.

The drawRect: method is really the heart of the renderer. Several other support methods that it calls are included after it, listed in the order they are called. When an NSView receives a display message it eventually sends itself a drawRect: message; the programmer does not explicitly send drawRect:. What does this code do? A tiny overview of graphics techniques and terminology is in order.

Physics Lab implements a mathematical model of various types of objects and their orientation in space. So mathematically there is a virtual four-dimensional Universe that is being modeled (the objects can move through three dimensions with respect to time). So there is the problem of how to display these three-dimensional objects at a given time, on a two-dimensional surface (the computer screen). It is true that the model is a virtual one, but it closely parallels the same process found in everyday life when one makes a movie of an actual three-dimensional scene. A mapping occurs from the four-dimensional world onto the rectangular, two-dimensional film.

The first thing the app does is position the view volume at the desired location in space. The user has to specify what part of the virtual mathematical Universe is of interest for mapping onto the screen. The view volume as used here is a truncated pyramidal shape, with the flat "top" surface being the projection surface (also called the window in the view plane) for the objects contained within the volume.

Imagine a large, truncated, transparent pyramid (having a flat top surface rather than a point) with a pole sticking up out of the middle and long enough so that the end of the pole would be at the point of the pyramid if the pyramid had a point. The tip of the pole thus represents where the viewer's eye is located -- the Perspective Reference Point or PRP. Imagine the dimensions of the top surface of the pyramid are let's say 40' x 30', to make a concrete example. There are user interface controls that allow the specification of the size and location of this view volume in space. See Figure 5.

Now, you want to draw what you see from your precarious vantage point atop the pole into an NSView using the bitmaps we have been discussing. You know the NSView measures, let's say, 150 x 200 pixels, so you will make a data buffer for the NSBitmapImageRep large enough to hold ninety thousand bytes (for the RGB color model with no alpha: 150 * 200 * 3 = 90,000). So you need to look at some thirty thousand equally spaced points on that top surface, which acts as a projection surface for the objects embedded inside the transparent pyramid.

The idea is to look at each of these points on the projection surface, and record the color that is seen there. Then set the color of the corresponding pixel on the computer screen to match that color. In effect, this is the mapping that the computer performs from the virtual 3D world to the computer screen for any given time t. It is important to calculate the spacing for a uniform grid to place on the top surface. Imagine looking down from the top of the pole at each of the points on the grid. If you see an object behind that point, you note exactly what the color is. If no object, then you set the color to the background color. The values you collect are the values that will be used for setting the individual pixels: our bitmap. In a few paragraphs, that sums up the act of rendering in this app. Note that a canonical description of view volumes and the like would require much more rigor because, for example, the "pole" does not have to be in the center of the pyramid.

Figure 5. View Volume Description.

This code provides the functionality of creating the grid for the projection surface. The idea is to generate enough equally spaced grid points to collect exactly the amount of data to fill the NSView. You provide the way of generating a 2D surface that has the image data projected onto it, and this code will sample that data in a fair fashion to create and populate an NSBitmapImageRep of any given resolution -- then draw that data into an NSView.

This code may be divided into several sections. Note that there is a dependency coded into the drawRect: method. It assumes that the UIInfoFields and UIPreferencesInfoFields pointers have already been allocated and defined if a rendering operation is in process. Since drawRect: is called when the NSView is being initialized, a test must be made to see whether initialization is occurring or whether an actual rendering operation is in progress. If the pointers are still set to NULL (thanks to the initWithFrame: method) then processing of basically the entire drawRect: method is avoided. There is conditional code in this snippet for checking return codes for error conditions. A cleaner approach would be to implement Rhapsody's assertion and exception handling mechanisms; this will probably be done in the next pass through the code.

Section 1

This is simply the relevant global definitions, and instance variables from the RayTracerRenderer class. These can be placed in the custom app with the appropriate name changes.

Section 2

The variable declarations in the drawRect: method are straightforward and commented. It probably will not be necessary to modify any of these, except for changing myName to match the name of your class.

Section 3

Initialization of the method variables should not require much modification either. The erase method is called to clear the view to a preset color. It uses a simple pswraps function called PSsetgray().

Section 4

This section calculates two different coordinate systems, plus the matrices necessary to move back and forth between them. Please note that in the calcCoordinateSystems method there are messages to other methods not included in this article. There are several reasons why this is so: 1) The methods require a knowledge of linear algebra; 2) To include them would more than double the amount of code to digest; 3) They would dilute the focus of the article; 4) With a few simplifying assumptions described below, this whole section on creating coordinate systems is not even necessary.

First, a question must be answered: "What are coordinate systems and why do we ever need them?" Most programmers are familiar with a screen coordinate system. In Rhapsody, the lower-left corner of the screen has the coordinates (0,0). Each window has its own coordinate system starting with (0,0) in its lower-left corner too. An NSView naturally has its own coordinate system implemented in the same way. Thus a hierarchy of coordinate systems exists wherein an image is positioned in an NSView which is positioned in an NSWindow which is positioned in an NSApp which is positioned on the screen. Of course a given window can have numerous NSViews and there are many more factors which shall be ignored for the sake of simplicity here. All of these various coordinate systems belong to the same group--they are associated with finding a specific pixel on the computer screen. This coordinate system is called UVN within this code. For graphics work, other coordinate systems must be considered that have nothing to do with the computer screen.

This program implements a spatial mathematical model of the Universe. A coordinate system is needed to describe objects using an arbitrary distance scale of angstroms, feet, meters, kilometers, parsecs, light-years, galactic radii or what-have-you. The literature often calls this the World Coordinate System or WCS. It is the coordinate system of the physical world volume one wishes to model.

Another coordinate system is used to describe the space within the view volume (which is positioned in space within WCS coordinates) as it faces the origin of the WCS. The view volume has its own (0,0,0) origin point for associating the relative position of every point within it. This coordinate system is called UVN1 within the code. Instead of X,Y,Z axes, there are U,V,N axes. Yet another coordinate system is used within the view volume if it is rotated so that it no longer faces the WCS origin. This system is called UVN2 within the code.

So, it is necessary to map a point described in WCS coordinates, to UVN1, then to UVN2, and finally to screen coordinates (UVN). It probably would be more intuitive to rename them and move from WCS to UVN2 to UVN1 and finally to UVN, right? This change may be implemented in the future. Fortunately some simplifications can be made so that it is only necessary to map from WCS straight to UVN screen coordinates, which should make this all easier to understand.

Here are the simplifying assumptions. Refer back to Figure 5. This view volume can be of any height, width, and depth. That is not a problem. The problem is if one wants to position it anywhere, and with any aspect or angle relative to the WCS axes. If one wants to do that, then one must implement multiple coordinate systems. On the other hand, much can be done with a fixed view volume. By fixed, it is meant that the coordinate system of the view volume exactly aligns with the World Coordinate System (which describes the physical Universe being modeled). The Viewer's Reference Point or VRP is the origin of the view volume coordinate system. It thus must equal (0,0,0) in WCS. This simplification requires the view volume to be centered at (0,0,0) in the physical Universe being modeled. It cannot be located at any other position. The objects must be located in the view volume in order to be displayed -- so they must be modeled near the origin.

Further, the window in the view plane (top surface of the pyramid) is parallel to the plane created by the X-Y axes. The simplest approach is to further have the "eye" or Perspective Reference Point (PRP) located on the positive Z axis, and have the VRP located at the center of the view volume. For mapping purposes the +Z axis can be thought of as coming "out" of the monitor, and -Z "going into" the monitor. In a standard way, +X goes to the right and +Y goes upward. This preserves a right-handed coordinate system. The Z coordinate for the window thus becomes frontN and the Z coordinate for the back clipping plane (a negative Z value) becomes rearN -- two instance variables in the RayTracerRenderer class.

The scale must be the same in the screen coordinate system and WCS.

To drive all this home, a simple example is in order. Imagine that the units of distance are meters. The window in the view plane is two meters wide and two meters high. It is located at N = 1, and it is parallel to the plane formed by the U-V axes. WCS is in meters and UVN is in meters. Their axes exactly align with one another. WCS exactly equals UVN. The PRP is located at N = 2 and the back clipping plane is located at N = -1. The origin or VRP of the view volume is located at (0,0,0). Any objects that fall within the truncated pyramid formed by this view volume will be visible. The view volume cannot be rotated or scaled or translated away from its origin point. Life is good.

With these simplifications it is possible to do all calculations in one coordinate system (WCS). No translations, scalings, and rotations are required to move from one to another. Actually this is technically not quite true for a ray tracer as one generally still must convert the cast ray to a "generic object" coordinate system to see if it intersects with the graphics primitives. But, as far as this code snippet is concerned, no changes in coordinate systems would be required. The "generic object" coordinate system is found in the RayTracerShader instance that is messaged by drawRect: to calculate the actual pixel color -- and that code is not included here. If another type of shader is used, that coordinate system would not be needed.

Section 5

This is the meaty part. The nested for-loops perform the mapping from the window in the view plane to a mathematical viewport, which has a one-to-one correspondence with each pixel. Some simple math is performed to derive deltaU and deltaV. These are the increments that, for a window of the given dimensions, will generate the same number of points as there are pixels in the NSView image. The actual for-loops use row and col as indices. These refer to the NSView image and, as can be seen from Figure 6, use a different coordinate system than that of the window. A cast ray is created, originating at the "eye" or PRP and passing through a point on the window defined by uPixel and vPixel.

Figure 6. Mapping from window in view plane to screen.

This ray is passed to the shader. Many types of shaders are possible. Physics Lab uses a recursive ray tracing algorithm (the nested for-loops are actually part of it). For this article, the message to the shader is a black box that simply returns the appropriate color for a given pixel. Note that the NEXTSTEP 3.3 development environment contains an entire suite of powerful 3D rendering classes in 3DKit. It was not used here for a couple of reasons, mainly because the version current at the time of development did not allow one to hook in a custom shader, but also for portability reasons.

Once the color value is returned, it is converted from the range 0.0 - 1.0 to the range 0 - 255 and rounded to the nearest integer, because the image data uses a single byte for each color component or sample. The data buffer is fully populated by the time the nested for-loops complete, and then a message to compositeToPoint:operation: in the NSImage completes the rendering process. Since all of this is happening inside the NSView's own drawRect: method, it is not necessary to lock focus. Code is also included (commented out) for rendering the image using pswraps; this may be useful for performance comparison purposes.

Code Snippet:

///SECTION 1: STRUCTURES & INSTANCE VARS///
typedef struct _GL_Ray
{
   double start[3];
   double dir[3];
   double normalizedDir[3];
   double mediumRefractivity;   //Index of refraction for the
                                //medium the ray is traveling
                                //through. Useful for
                                //Constructive Solid Geometry
                                //or anytime there are compound
                                //object boundaries.
                                //If == 0, then a vacuum.
   double length;
} GL_Ray;
typedef struct _GL_UIInfo
{
   double RAField;       //Right Ascension from user interface.
   double DField;        //Declination from user interface.
   double VVRAField;     //View Volume Right Ascension from UI
   double VVDField;      //View Volume Declination from UI
   double CTField;       //Current Time from UI
   double TBIField;      //Time Between Images    from UI
   double FITField;      //First Image Time from UI
   double DFCField;      //Distance From Center from UI
   int VPWidthField;     //View Plane Width from UI
   int VPHeightField;    //View Plane Height from UI
   int VVDepthField;     //View Volume Depth from UI
   
   id OList; //Object array.
   id LList; //Light array.
} GL_UIInfo;
typedef struct _GL_UIPreferencesInfo
{
   double ambientField[3];   //Ambient light RGB array.
   int   PRPDistanceField;   //Distance from view plane.
   
   //Diffuse coefficients
   double attenuationFactorC1Field;   //Light-source 
                                      //attenuation factor.
   double attenuationFactorC2Field;   //Light-source 
                                      //attenuation factor.
   double attenuationFactorC3Field;   //Light-source 
                                      //attenuation factor.
   
   //Global brightness controls
   double ambientBrightnessField;   //These settings scale 
                                    //final color.
   double diffuseBrightnessField;   //Brightness for all objs.
                                    //objects in the scene.
   double specularBrightnessField;  //(range 0 - 1).
   
   int maxRayTreeDepthField; //How recursive you want to be?
   int statusMode;   //TERSE, VERBOSE, or DEBUGGING?
   
} GL_UIPreferencesInfo;

//Global #defines
#define GOOD 1             //Status return codes (an assumption here; 
#define BAD  0             //they are defined elsewhere in Physics Lab).
#define GL_SAMPLE_SIZE 8   //Number of bits in a sample (one 
                           //color component).
#define GL_SAMPLE_NUMBER 3 //Number of samples per pixel.
#define GL_PIXEL_BYTES 3   //May be different from 
                           //GL_SAMPLE_NUMBER *
                           //GL_SAMPLE_SIZE / 8 if 
                           //GL_SAMPLE_SIZE is non-integer
                           //multiple of a byte and data 
                           //aligned along word boundary.

///INSTANCE VARIABLES FOR RENDERER (USED IN THIS CODE FRAGMENT)//
double PRP[3]; //Projection Reference Point (eye).
double VRP[3]; //In XYZ (WCS) coords.
double UVN1[3][3],UVN2[3][3];   //Axes defined in XYZ, 
                                //UVN1 coords, respectively.

double inverseUVN1[3][3];   //Inverse matrices
double inverseUVN2[3][3];   //for moving betw coord systems.

double VUP1[3],VUP2[3];      //Defines "which way is up" in
                             //containing coord 
                             //systems. (XYZ, UVN1 coords,
                             //respectively)

GL_UIInfo* UIInfoFields;      //Instance of structure for 
                              //containing UI data from main 
                              //window and orientation window.

GL_UIPreferencesInfo* UIPreferencesInfoFields; //Struct 
                                               //containing info
                                               //from preferences panel.

int frontN, rearN; //Z coords (if view volume fixed) or N 
                   //coordinates in UVN screen coordinate system, of
                   //where the front and rear clipping planes of the 
                   //view volume are located. Note front clipping 
                   //plane contains view window.

int pixelsWide, pixelsHigh;    //Size of the rendered image in pixels. Assumed
                               //here to match the NSView bounds (set elsewhere,
                               //e.g. by the getUIInfo method mentioned below).

id talkToMathUtilities;        //Helper objects messaged below (assumed to be
id talkToShader;               //set up elsewhere in the renderer).

int aWindow[4], viewport[4];   //LL and UR corners for view 
                               //window and view port.

///END OF INSTANCE VARIABLES///

///END OF SECTION 1///


//The drawRect: method is called indirectly. The application code sends a display msg 
//to an NSView and the NSView eventually sends itself a drawRect: msg, executing this
//code. In essence, "rays" are fired through the points in space that map to the pixels on 
//the screen. This code determines the mappings and creates the rays. It then sends the 
//rays to a "black box" that returns the color of any object (if any) that was "struck" by 
//the ray as it traces a path through its mathematical Universe. 

////////////////////////////////////////////////////////////////////////////

-(void)drawRect:(NSRect)r
{

///////////////////////SECTION 2: DECLARATIONS//////////////////////////////

//Used for status msgs
   char myName[] = "[rayTracerRenderer drawRect]";
   int status = GOOD;

   int i, row, col, theByteOffset;   //Various indexes.

   double color[3];      //The raw shader color data ranging 
                              //from 0.0 to 1.0.
   double theIntPart;   //Used in intermediate step for 
                              //converting data from float to int.
   int theIntColor[3];   //Integer value of color returned 
                              //from shader.

   double aGray;         //Contains color value in PSWraps code 
                           //(commented out).

   //View window variables.
   double deltaU, deltaV, uPixel, vPixel, targetPt[3]; 

   GL_Ray newRay;   //Ray from PRP to (uPixel, vPixel,
                        //nFront)
   
   unsigned char* bitmapData = NULL;   //Buffer for image data   
   
   NSPoint theOriginPt;   //Used to tell NSImage where in 
                                  //NSView to begin 
                                  //compositing (contains lower-left 
                                  //point of the image in NSView
                                  //coordinate system)
   
   NSRect theRect, viewRect;   //Rectangle structures.

   NSImage* theImage = nil;                //Note: objects are pointers.
   NSBitmapImageRep* theBitmapRep = nil;   //OUR BITMAP!

//////////END OF SECTION 2: DECLARATIONS//////////////


//////////SECTION 3: INITIALIZE///////////////////////
   //Get view bounds
   viewRect = [self bounds];

   //Allocate memory for buffer, based on size of NSView bounds rectangle
   bitmapData = (unsigned char*)calloc(viewRect.size.width *
      viewRect.size.height * GL_PIXEL_BYTES, sizeof(unsigned char));

   if (bitmapData==NULL)
      status = BAD;

   //Initialize originPt
   theOriginPt.x = 0.0;
   theOriginPt.y = 0.0;
   
   //Check and see if render button has been pressed or if this method
   //called simply by initialization display msg at startup time
   if (status==GOOD)
   {
      //These will have been initialized by time execution gets here
      //if Render button has been pressed. Both SHOULD be NULL if it is 
      //the init call, but using OR as it is an abort condition if EITHER
      //is NULL.
      if ((UIInfoFields==NULL) ||
         (UIPreferencesInfoFields==NULL)) 
      {                      
         [self erase];   //If called during init, clear NSView
         status = BAD;   //but don't do any other processing in 
                           //this method.
      }
   }
   else
      [self erase];   //If status bad, still clear the View; 
                        //don't do any
                        //other processing in this method though.
//////////////END OF SECTION 3: INITIALIZE//////////////////////


//////////////SECTION 4: GENERATE COORDINATE SYSTEMS////////////

//NOTE: This section is not necessary if you make the following assumptions:
// 1) Your view volume has an origin that is at (0,0,0) in the World 
//  Coordinate System (X,Y,Z coordinates). 
// 2) The axes of your view volume are parallel to the X,Y,Z axes.
// 3) The scale of your view volume is equivalent to that of the X,Y,Z 
//  coordinate system.
// 4) The view plane is parallel to the X-Y plane and offset in positive Z dir
// Put another way: Your U,V,N screen coord system is exactly the same as the 
// X,Y,Z World Coordinate System.
   if (status==GOOD)
      status = [self calcCoordinateSystems];      

//////////END OF SECTION 4: GENERATE COORDINATE SYSTEMS////////


/////////////////SECTION 5: RAY CASTING////////////////////////

/*******PERFORM RAY CASTING IN WCS (X,Y,Z)**********************/
   if (status==GOOD)
   {
      //Convert eye from second uvn to xyz. PRP DFC provided by pref panel
      //Uses UVN2, UVN1, and VRP instance variables.
      //PRP = Distance From Center (of view plane) plus half of view 
      //volume depth. 
      //Not necessary for fixed view volume.
      status = [self convertUVN2toXYZ:PRP];
   }
   
   if (status==GOOD)
   {
      //Calc viewport iteration factor from view window to viewport
      //Uses aWindow[], and viewport[] instance variables which are
      //set by getUIInfo. Note that window size and viewport size 
      //forced by getUI method to be of even size (thus always has 
      //exact center). A 200 x 100 window has coords (-100,-50,100,50)
      //and a 50 x 50 viewport has coords (0,0,49,49). 
      deltaU = ((float)aWindow[2] - (float)aWindow[0]) / 
         ((float)pixelsWide - 1); 
      deltaV = ((float)aWindow[3] - (float)aWindow[1]) / 
         ((float)pixelsHigh - 1);   
   }

   if (status==GOOD)
   {
      vPixel = aWindow[1]; //Window bottom.

      //From bottom to top.
      for (row = 0; row <= (pixelsHigh - 1); row++)
      {
         uPixel = aWindow[0]; //Window left.
         //From left to right.   
         for (col = 0; col <= (pixelsWide - 1); col++) 
         {
            //Construct ray from eye through (u,v,frontN)
            //By first making targetPt array in UVN2 coords.
            targetPt[0] = uPixel;
            targetPt[1] = vPixel;
            targetPt[2] = frontN;

            //Convert targetPt to XYZ coords
            //Uses UVN2, UVN1, and VRP instance variables.
            //Not necessary if your UVN==XYZ
            status = [self convertUVN2toXYZ:targetPt];
            if (status != GOOD)
               break;

            //Calc length and normalizedDir elements of ray struct
            //Returns nil if length of ray is zero (cannot normalize).
            //This would only happen if PRP distance from view plane
            //is zero. NOTE: PRP DFC field validation should not 
            //allow value of zero or neg. (PRP has to be in front of
            //view plane).
            if ([talkToMathUtilities defineRay:&newRay 
               withStart:PRP andTargetPt:targetPt]==nil)
            {
               status = BAD;
               break;
            }
            
            //TraceRay has to return status since recursively
            //used for intra-object communication too.
            //This is a Black Box for the purposes of this article.
            //Suffice that for the given ray, an appropriate 
            //pixel color is returned. Implement any algorithm you 
            //desire here. The basic idea is to extend this 
            //mathematical ray and find what objects if any it 
            //intersects within the view volume. If no intersection 
            //then return ambient light color as the background. 
            //If intersection then at point of intersection of 
            //nearest object determine the color and brightness 
            //based on the object and all light sources in the scene 
            //(diffuse, specular, plus recursive reflective and 
            //refractive rays). Easy as that.
            status = [talkToShader traceRay:newRay atDepth:1 
               returnsColor:color];
            if (status!=GOOD)
               break;   
                   
            for (i = 0; i < 3; i++)
            {
               //Clean up and insure an invariant on domain of data
               //(in case of bugs in your shader)

               //Clamp color to max of 1.0
               if (color[i] > 1.0)
                  color[i] = 1.0; 
               //and min of 0.0
               if (color[i] < 0.0)
                  color[i] = 0.0;
               
               //COMMENT THIS OUT if want to use PSWraps code
               color[i] *= 255; //Convert to unsigned int byte
               if (modf(color[i],&theIntPart) >= 0.5)
                  theIntColor[i] = theIntPart + 1;
               else
                  theIntColor[i] = theIntPart;
               //TO HERE

            }
            
/*            //USE THIS CODE if want to see pswraps version
            //PS monochrome
            aGray = (color[0] + color[1] + color[2]) / 3.0;   
            PSsetgray(aGray); //Normalized total
            PScompositerect(col,row,1,1,NSCompositeCopy); 
            //TO HERE
*/             
            //Use this code if want to see NSImage version
            //Calculate starting byte index in data array
            theByteOffset = GL_PIXEL_BYTES * ((row * 
               (int)viewRect.size.width) + col);
            for (i = 0; i < 3; i++)
               bitmapData[theByteOffset + i] = theIntColor[i];
            //TO HERE
            
            uPixel += deltaU;
         }//End of inner-for
         vPixel += deltaV;
         if (status!=GOOD)
            break;
      }//End of outer-for   
   
      //COMMENT OUT THIS CODE if want to use PSWraps version
      if (theImage!=nil)   
      {   
         [theImage release]; //Note releasing NSImage
         //releases managed reps.   
         theImage = nil;
      }
      
   
      theImage = [[NSImage alloc] 
         initWithSize:viewRect.size];   //No retain
      if (theImage==nil)
         status = BAD;
      else
      {
         [theImage setFlipped:YES];   //Flip coords so (0,0) is 
                        //upper left 
                        //instead of lower left corner.
                        //(May not need this depending on order
                        //in which your data is generated).
      }

      //Initialize NSBitMapImageRep
      theBitmapRep = [NSBitmapImageRep alloc]; //No retain
      if (theBitmapRep==nil)
         status = BAD;
      else
      {
         //Note ampersand
         [theBitmapRep initWithBitmapDataPlanes:&bitmapData
            pixelsWide:viewRect.size.width
            pixelsHigh:viewRect.size.height 
            bitsPerSample:GL_SAMPLE_SIZE      
            samplesPerPixel:GL_SAMPLE_NUMBER hasAlpha:NO
            isPlanar:NO 
            colorSpaceName:NSCalibratedRGBColorSpace
            bytesPerRow:(viewRect.size.width * GL_PIXEL_BYTES)
            bitsPerPixel:(GL_PIXEL_BYTES * 8)];
         [theImage addRepresentation:theBitmapRep] ;
      }
   
   
      //Composite the NSImage to the NSView
      [theImage compositeToPoint:theOriginPt 
         operation:NSCompositeCopy];
   
      //TO HERE

   }   
   
   if (status != GOOD)
      NSLog(@"%s: rendering aborted", myName);

   return;
}
////////////////END OF SECTION 5: RAY CASTING/////////////////

-(void)erase //Clears the view
{
   PSsetgray(NSWhite);
   NSRectFill([self bounds]);
   return;
}


/////////////////////////////////////////////////////////////

//The calcCoordinateSystems method is another black box here. If some simplifying 
//assumptions are made as described in the article, this entire section is not needed. 
//It calls functions that are not included in this article. 

/////////////////////////////////////////////////////////////
-(int)calcCoordinateSystems
{
   int status = GOOD;
   
   /*CREATE SECOND VIEWER REFERENCE COORD FROM WORLD COORD*/
   if (status==GOOD)
   {
      //Calculate VRP based on UI specification for right ascension,
      //declination, and distance from center
      status = [self calcVRPusingRAandDandDFC]; //In WCS
   }
   
   if (status==GOOD)
   {
      //Get U, V, and N axes for unrotated viewing volume
      //Status should always be GOOD as meth terminates app on error
      //Calculates uNormal,vNormal,nNormal
      status = [self calcUVN1usingVRP]; 
   }
   
   if (status==GOOD)
   {
      //Get U, V, and N axes for ROTATED viewing volume.
      //Calcs uNormal, vNormal, nNormal   
      status = [self calcUVN2byRotUVN1usingVVRAandVVD];
   }
   /********************************************************************/
   
   return status;
}

Summary

This article has mostly focused on one of the four NSImageRep subclasses, plus NSView and NSImage. These classes have a tremendous number of uses and possible configurations that go beyond the scope of this article. The performance figures will vary from app to app, of course, because it really depends on where the app is spending its time. In the test application a large percentage of the time is spent in shading calculations, yet rendering was still roughly twice as fast as when using pswraps to draw individual pixels. This test was done with one sphere object and one point light. Adding objects to the scene reduced this performance gain, as more and more time is consumed computationally by calculating intersections and the like.

It is likely that if one measured only the time used for drawing the pixels to the screen (ignoring the time spent creating the pixel data), the performance differential would be much greater. I suspect the performance increase is partly obscured here by the relatively large amount of time spent generating the data. Here is an example of how this could happen. Suppose that ten seconds are required to generate the pixel data for the test rendering. Now imagine that while using pswraps a rendering takes a total of twenty seconds, and while using bitmaps the same rendering takes a total of eleven seconds. The test would suggest a speed increase of roughly 2X, when in reality the drawing itself sped up by more like 10X (it took one second instead of ten to draw the image data to the screen). The pswraps code is included in the snippet (commented out) for those who wish to run formal comparison tests using a profiler.
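
For a quick informal measurement, short of a full profiler run, one could bracket just the drawing code with a coarse timer such as NSDate; this is a sketch, not part of the Physics Lab code:

   NSDate* start = [NSDate date];
   //...perform the compositing (bitmap) or pswraps drawing here...
   NSLog(@"Drawing took %f seconds", -[start timeIntervalSinceNow]);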

The Physics Lab software is an ongoing project aimed at visualization of field phenomena such as gravity, electromagnetism, and subatomic forces. More information about this product can be obtained from the web site at: http://www.gj.net/prv.


Richard Warner is an independent consultant living in Colorado. He worked for over eight years with the USDA as first a computer specialist then a computer scientist. His small company, Perceptual Research Ventures, has published several technical articles in Radio-Electronics magazine, plus designed and marketed the Synergy Card expansion card for PCs and compatibles. Current projects include the Physics Lab viz application. He may be contacted at: rwarner@prv.com.

 
