

Volume Number: 24 (2008)
Issue Number: 03
Column Tag: Programming

What's in Your Target

Unit testing and analysis coverage

by Aaron Montgomery with Dave Dribin, contributing editor

Introduction

If you are building projects with Xcode, you are already using targets in your project. A target collects together the information about how to build a library or application. If you are working with more complicated projects, you may have one target that builds a library and a second target that builds an application that depends on that library. This article describes Xcode targets that help with auxiliary tasks. Using an Xcode target to produce documentation has been discussed in MacTech (see the references at the end of the article). In this article, we present a target that runs unit tests using the CPlusTest framework for Carbon applications (there is also the SenTestingKit framework for Cocoa applications, but we will not cover it here). We will then add a shell script that allows us to use the Linux Coverage Tool to analyze how much of our code we are executing. The inspiration for this article was the November 3, 2005 entry in Chris Liscio's log that discussed how to add gcov analysis to unit testing (see the references). This article assumes you are working with Xcode 3 and building for a Mac OS X 10.5 target. I have done similar projects with Xcode 2.2 and 2.3 on Mac OS X 10.4 and will point out differences for those configurations as we go along.

We start with a simple application called SuperAdd that implements a "highly optimized" adding routine. The application started as a basic Carbon Application project and we will assume that the reader already has the skills required to create a Carbon application with Xcode.

Unit Tests

Before I discuss how to add a testing target, a few words are in order about what unit tests can do, and (more importantly) what they cannot do. Unit tests are designed to call your functions with inputs that you specify and then verify that the functions produce the correct output. Unit tests do not debug your code. They may help you determine which section of code is problematic, but they cannot tell you how to fix the problem.

Deciding which tests to write is important, but do not let it paralyze you. First consider which functions should be tested and try to establish the exact requirements of each function. Then you can write some tests that confirm that your function meets these requirements. Since it may be prohibitive to test every possible input, you will need to be judicious about which inputs you use to test your function. The Apple documentation provides some guidelines. As you continue to work on the main application, you will discover cases where the function fails to meet your needs, either because the original requirements were not exactly correct or because the function was improperly coded. Each time this happens, you can add a test. Thinking about how you will test your functions may also affect how you define them. A function called solely for its side effects will be tougher to test than one that produces an output. Similarly, monolithic functions with many tasks will be more difficult to test than smaller functions with a single clear task, since you will need to test the monolith with a larger variety of inputs. Finally, the CPlusTest framework does not support the testing of user-generated events. I will discuss a (naïve) way to handle this for smaller projects in the section on code coverage below. There are commercial systems for testing user interfaces, but they are beyond the scope of this article.
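
To make the point about side effects concrete, consider the following sketch (the names are hypothetical and not part of SuperAdd). The first routine can only be checked by inspecting a global variable, while the second can be checked directly with CPTAssert.

// Hypothetical example of a hard-to-test routine: the result is only
// visible as a side effect on a global variable.
static long gRunningTotal = 0;
void AccumulateIntoGlobal(long inValue)
{
   gRunningTotal += inValue;
}

// Easier to test: the result is the return value, so a test can simply
// write CPTAssert(Accumulate(2, 3) == 5).
long Accumulate(long inTotal, long inValue)
{
   return inTotal + inValue;
}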

Target Settings

These instructions come (mostly) from the Apple documentation for Unit Testing with the CPlusTest framework. Start by selecting New Target... from the Project menu. Select Unit Test Bundle from the Carbon section. Choose a name (I chose Unit Tests) and a project (SuperAdd). Voila, a unit testing target. Now you need to make some adjustments to the project configuration so the target will work.

At this point you have to make a decision about whether you want to do unit testing with the Release configuration, the Debug configuration, or a new configuration. The advantage of unit testing the Release configuration is that you will be testing the shipping code. The disadvantage is that you will need to change some of the build settings to use the unit tests and the coverage analysis. These changes may be inappropriate for the shipping product. The disadvantage with testing the Debug configuration is that you are not actually testing your shipping code. You will also not be able to use Zero Link during these builds and this may be important to your development cycle. You could create a new configuration for unit testing (with or without coverage analysis). In larger products, this might be a more appropriate choice. However, for this demonstration, we will go ahead and execute unit tests and coverage analysis with the Debug configuration.

Go to the Targets group and open the information inspector for the Unit Tests target. In the General tab, add a Direct Dependency of the SuperAdd application. This will build the application prior to testing it. In the Build tab, you will need to adjust a number of settings. Make sure that Configuration is set appropriately (in the case of this example, we are setting this up for the Debug Configuration). In the Linking collection, you will need to set the Bundle Loader to your executable. This will allow you to access functions and variables in the original application from your test code. The location for this example is

$(BUILT_PRODUCTS_DIR)/$(PROJECT).app/Contents/MacOS/$(PROJECT)

In the Unit Testing collection, you need to set the Test Host (the code that your test code will be injected into). In our case, this is the same as the Bundle Loader and so we can use $(BUNDLE_LOADER) as the value here. These settings will not affect the SuperAdd application, only the testing code. I have also used the same prefix header for the unit tests as I used for the executable. This prefix header declares a global variable (gInitialized) that is used in both sets of code. The SuperAdd code sets this variable to true when it is finished with its initialization routine. The Unit Tests code will not start running until this variable has been set to true. Using a common prefix header allows both sets of code to see this variable.
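
The exact contents of the prefix header are not reproduced here, but a minimal sketch of the idea looks like the following (the variable name matches the description above; the rest is an assumption about how the project is laid out).

// Shared prefix header (sketch): visible to both the application and the test bundle.
extern bool gInitialized;

// At the end of the application's initialization routine (sketch):
//    gInitialized = true;   // signals CTestRunner that the tests may run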

If you are building with Xcode 3, you can skip to the next section, entitled Source Code. If you are building using Xcode 2.3, you will need to make some other changes to the targets. In the Unit Tests target, you will want to add the flag -fno-cxa-atexit to the Linker's Other Flags in the Linking collection. This is to work around a bug introduced in Xcode 2.3 and 2.4 but fixed in Xcode 3. Now go to the Targets group and open the information inspector for the SuperAdd target. In the Build tab, you will need to adjust two settings. In this case, you are actually setting the build settings for the SuperAdd application. You will probably only want to change these settings in the Debug configuration. In the Linking collection, you need to turn off Zero Link. In the Code Generation collection, you need to turn off Symbols Hidden by Default. I could not find the Symbols Hidden by Default setting mentioned in the Apple documentation. If it is turned on, your Unit Tests bundle will not be able to see the variables and functions you would want to use and you will receive linking errors.

Source Code

Now you need to write the code that runs the tests and the code that implements the tests. Apple supplies a RunTestsInTimer class with the CPlusTest framework documentation that is used to run the tests. I have adjusted that code to create a CTestRunner class. When a CTestRunner is created, it installs an event loop timer. When the timer fires, the CTestRunner checks whether the application has been initialized. If it has, the tests are run; otherwise, the CTestRunner waits until the timer fires again.
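
The constructor is not reproduced in this article, so the following is only a sketch of how the timer might be installed. It assumes the member names used in the RunTests listing below and a hypothetical static TimerProc member that forwards to RunTests.

Sketch of a CTestRunner constructor

CTestRunner::CTestRunner(void)
{
   // install a repeating one-second timer on the main event loop
   myTimerUPP = NewEventLoopTimerUPP(CTestRunner::TimerProc);
   InstallEventLoopTimer(GetMainEventLoop(),
      kEventDurationSecond,   // first firing after one second
      kEventDurationSecond,   // then repeat once per second
      myTimerUPP,
      this,                   // passed back to TimerProc
      &myTimerRef);
}

// Hypothetical forwarding routine assumed above.
void CTestRunner::TimerProc(EventLoopTimerRef /*inTimer*/, void* inUserData)
{
   static_cast<CTestRunner*>(inUserData)->RunTests();
}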

RunTests code in CTestRunner.cpp

void CTestRunner::RunTests(void)
{
   //gInitialized prevents premature running of tests
   if (gInitialized)
   {
      //prevent a second timer firing while we're doing the tests
      {
         RemoveEventLoopTimer(myTimerRef);
         myTimerRef = NULL;
         DisposeEventLoopTimerUPP(myTimerUPP);
         myTimerUPP = NULL;
      }
      
      //run the tests
      {
         TestRun run;
         TestLog log(std::cerr);
         run.addObserver(&log);
         TestSuite& allTests = TestSuite::allTests();
         allTests.run(run);
         std::cerr << "Ran " << run.runCount() << " tests, "
            << run.failureCount() << " failed." << std::endl;
      }
      
      //either quit the application
      //QuitApplicationEventLoop();
      //or show User Interface test instructions
      ShowCoverageWindow();
   }
}

The one significant change is the call to ShowCoverageWindow instead of QuitApplicationEventLoop. Since ShowCoverageWindow does not use the CPlusTest framework's testing macros and classes, but exists solely to obtain complete code coverage, I will discuss it in the section on code coverage below.

I create a testing class for each C module or C++ class used in the main project and use a standardized naming convention: the unit test class associated with the module foobar is called UTFoobar. I also organize the unit tests in a source tree underneath a Tests folder that mirrors the source tree used for the application. In this case, we need to test the superadd module, so we create a class called UTSuperadd. I have also created a module named UTUI that is designed to test the user interface. Like ShowCoverageWindow above, it focuses on code coverage and will be discussed later.

The UTSuperadd class is used to test the functions defined in superadd. The UTSuperadd class is a subclass of TestCase (a part of the CPlusTest framework) and contains a number of tests. The class declaration is given below.

UTSuperadd declaration in UTSuperadd.h

class UTSuperadd : public TestCase
{
public:
   //! This method constructs a UTSuperadd.
   UTSuperadd(TestInvocation* inInvocation);
   //! This method destructs a UTSuperadd.
   virtual ~UTSuperadd(void);
   
   //! This method tests superadd's ability to add two negatives.
   void TestSuperAddNegNeg(void);
   //! This method tests superadd's ability to add a negative and a zero.
   void TestSuperAddNegZer(void);
   //
   //   similar tests omitted
   //
};

There are two choices when writing multiple tests. You could create a single test method that executes all the tests, or you can create a number of smaller methods, each of which executes one test. The advantage of the single monolith is that there are fewer tests to register; however, testing will stop at the first failed test. With a number of smaller methods, you will get a log of which tests failed and which tests passed. Since this process is supposed to be automated, I prefer to run a lot of small tests in a single batch rather than running until one test fails. Patterns in which tests fail can also provide hints about how to debug the code.

The code below demonstrates a simple test to verify that superadd(-1, -1) is correct. The definition of the method defines the test, and the next line instantiates an object of type UTSuperadd and registers the test with the CPlusTest framework. You can use the macro CPTAssert to test assertions. If the argument of the macro is false, an error will appear in the Build Results window.

UTSuperadd::TestSuperAddNegNeg

// define the method
void UTSuperadd::TestSuperAddNegNeg(void)
{
   CPTAssert(superadd(-1, -1) == -1 + -1);
}
// register the test
UTSuperadd SuperAddNegNeg(
   TEST_INVOCATION(UTSuperadd, TestSuperAddNegNeg));

One issue that does not appear in this example is the memory and resource allocation needed by your tests. It may seem natural to make these allocations in a constructor, but that can cause problems: the test objects are static, so you have no control over when their constructors run. Instead, allocations should occur in the virtual function setUp and deallocations should occur in the virtual function tearDown. These functions are called immediately before and after each test is run, so you know that they will run after the application has been initialized and before the unit testing has ended.
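
A minimal sketch of that pattern for UTSuperadd might look like the following; the myBuffer member is hypothetical and only illustrates where the allocation and deallocation belong.

Sketch of setUp and tearDown

// called immediately before each test method runs
void UTSuperadd::setUp(void)
{
   myBuffer = new char[1024];   // hypothetical per-test allocation
}

// called immediately after each test method finishes
void UTSuperadd::tearDown(void)
{
   delete [] myBuffer;
   myBuffer = NULL;
}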

Running the tests

When you build the Unit Tests target, the application will be built (if necessary) and then the Unit Tests target will be built. As part of the build process of the Unit Tests target, the application will be launched and the tests run. There is no need to choose Build and Run as the tests are run as part of the build process. You can see the Build Results and the Build Transcripts corresponding to running the tests in Figures 1 and 2.


Figure 1: Build Results


Figure 2: Build Transcript

Failed tests will show up as errors in the Build Results pane. The Build Transcript lists the number of tests run and the number of tests that failed. Assuming your application did not crash, you will also get a note like "Passed tests for architecture 'i386'." This simply means that the application exited normally; it does not indicate whether the individual tests passed. Additional information about which tests ran and whether they passed or failed will also show up in the Build Transcript pane.

One thing you need to be careful about is that the tests can appear to have run even if there was some error in building the application or the unit test bundle; in that case, an old application or test bundle from a previous build is being run. You should always check the build log to make sure that this did not happen. For important milestone testing, cleaning all targets before running the tests might be a good policy so that you can ensure that the tests were run on the most recent build.

Coverage Testing

The goal of coverage testing is to execute each statement in the source at least once. Like unit testing, a successful coverage test does not mean a bug-free program: SuperAdd passes the coverage testing with a phenomenal 100% coverage, but still contains a number of bugs.
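
As a (hypothetical) illustration of why full coverage is not the same as correctness, the function below has every line executed by a single passing test, yet it is wrong for almost all inputs.

// A single test such as CPTAssert(BrokenAdd(2, 2) == 4) executes every line
// of this function and passes, but the function is wrong whenever a + b != a * b.
int BrokenAdd(int a, int b)
{
   return a * b;   // 2 * 2 happens to equal 2 + 2
}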

The coverage tool provided with gcc is called gcov. You can find information about this tool in the GCC documentation (a link is provided in the references). Once you have set up the project to use gcov (steps I will present later in this article), you will generate three new types of files. Files with the suffix gcno are created when the application is built. They contain the information needed to link blocks of executable code in the binary with lines in the source files. Files with the suffix gcda are created when the application is run. They contain information about which blocks of code were executed. Files with the suffix gcov are created when you run gcov. These text files contain an annotated version of your source code where the annotations indicate how often each line of your source was executed. We will not use the gcov files directly, but will use the Linux Test Project's coverage tools to create a collection of interlinked HTML files with the same information. The lcov tool (a Perl script) collects the data from gcov and creates an lcov.info file, and the genhtml tool uses this file to generate interlinked HTML files with the coverage information.

One important thing to remember is that gcov counts the number of times each line of code was executed. If you are trying to verify that you are executing every statement, your code layout should contain one statement per line. Although formatting style is often a matter of personal preference or company policy, some formats are more amenable to coverage testing than others. For example, in the first conditional statement below, we cannot tell from the results whether x was ever incremented; we only know that the equality was tested. The second layout allows us to determine whether x was incremented.

Conditional statements

// here we cannot tell if x++ was executed
if (x == y) x++;
// here we can tell if x++ was executed
if (x == y)
   x++;

In addition to possibly adjusting your coding style, trying to obtain 100% coverage may require refactoring your code. If you are finding it difficult to reach some section of code buried inside a larger function, you may decide to move that code into a new function that you can test directly, as in the sketch below. Whatever you do, don't let the quest for 100% code coverage lead you to write poor code. The final goal is a well-written program; code coverage is one way to help you get there, not the goal itself.
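
For example, if an error path is buried deep inside a larger routine and is difficult to trigger from the running application, you might factor it out into its own function so that a unit test can call it directly. The names below are hypothetical and only illustrate the idea.

Refactoring for coverage (sketch)

#include <stdlib.h>

const size_t kDocumentSize = 4096;   // hypothetical

// The recovery code now lives in its own function, so a unit test can
// exercise it directly instead of trying to force an allocation failure.
void HandleAllocationFailure(void)
{
   // ... recovery code ...
}

void ProcessDocument(void)
{
   void* storage = malloc(kDocumentSize);
   if (storage == NULL)
   {
      HandleAllocationFailure();
      return;
   }
   // ... normal processing ...
   free(storage);
}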

Getting lcov

The Xcode installer will install gcov. You can obtain lcov at the website listed in the references. The online documentation for lcov is out of date; the man pages, however, appear to be current. You will want to place these scripts somewhere convenient. One possibility is your shell's executable path; another is to package them with the project. In this example, I have created a Tools folder as part of the project and added the scripts to this folder (so downloading the project will provide you with the scripts).

The biggest problem with the lcov script found online is that it is based on an older version of gcov. To reset the coverage testing process, the script attempts to delete all the old coverage data files. The script deletes files with the extension da; however, gcov now produces files with the extension gcda. To fix the lcov script, open it in a text editor and then find and replace all occurrences of .da with .gcda. If you download the lcov provided with the project, this has already been done for you.

Target Settings

Again, we need to decide which build configuration we will want to use for coverage testing. If you are testing code coverage while running unit tests, this will be the same configuration you used to build the application that is tested with the unit tests. For this example, we will be adjusting the Debug configuration.

Open the information inspector for the SuperAdd target (not the Unit Tests target). In the Code Generation collection, turn on Instrument Program Flow and Generate Test Coverage Files (these options create the gcno and gcda files). In the Linking collection, add -lgcov to the Other Linker Flags (this links in the gcov library). Notice that you do not need to adjust any settings for the Unit Tests target; you are not measuring coverage of the code in the unit tests themselves.

Shell Script

The unit tests are run in a Run Script phase of the Unit Tests target. Go to the Targets pane and disclose the phases for the Unit Tests target. Replace the Run Script phase script with the following code.

Run Script Phase for "Unit Tests" Info

source ${PROJECT_DIR}/Tools/lcov.sh

The shell script that is actually executed is shown below. I have used a prefix of MONSTERWORKS in the shell script to prevent clashes with other shell environment variables; in the listing below I have abbreviated this to MW. Unfortunately, even with this abbreviation, the script contains some very long lines, so the version below breaks them: the character ¬ along with any following white space should be removed (or read the script included with the project).

lcov.sh

# the name of the application
MW_APP_NAME=SuperAdd
# the target that builds the executable
MW_TARGET_NAME=${MW_APP_NAME}
# the configuration in which we do unit testing/coverage analysis
MW_CONFIGURATION=Debug
# path to the lcov tools
MW_LCOV_PATH=${PROJECT_DIR}/Tools
# where the object files for the application will be found
MW_OBJ_DIR=${OBJROOT}/¬
   ${MW_APP_NAME}.build/${CONFIGURATION}/¬
   ${MW_TARGET_NAME}.build/Objects-normal/${NATIVE_ARCH}
# we only execute the coverage test if we are using the 'Debug' configuration
if [ "${CONFIGURATION}" = "${MW_CONFIGURATION}" ]; then
   # clean out the old data
   ${MW_LCOV_PATH}/lcov ¬
      --directory ${MW_OBJ_DIR} --zerocounters
   #remove the old report
   pushd ${OBJROOT}/${CONFIGURATION}
      if [ -e lcov ]; then
         rm -r lcov/*
      fi
   popd
      
   # run the unit tests
   "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
   pushd ${OBJROOT}/${CONFIGURATION}
      # create the coverage directory
      if [ ! -e lcov ]; then
         mkdir lcov
      fi
      #analyze the coverage data
      ${MW_LCOV_PATH}/lcov ¬
         --directory ${MW_OBJ_DIR} ¬
         --capture --output-file lcov/lcov.info
      
      # create the html pages
      ${MW_LCOV_PATH}/genhtml ¬
         --output-directory lcov lcov/lcov.info
      # open the coverage analysis
      open lcov/index.html
   popd
   
   # clean up
   ${MW_LCOV_PATH}/lcov ¬
      --directory ${MW_OBJ_DIR} --zerocounters
fi

Although it appears long and complicated, the steps are fairly simple. If we aren't using the correct configuration, we simply skip the script. Otherwise, we start by removing any of the coverage results from the previous run of the script. Be careful with the recursive rm command and confirm that you really are removing the files from the correct directory. After this, we run the unit tests. Next we run lcov to generate the coverage results and genhtml to produce the HTML pages. We finish by opening up the HTML pages and cleaning up after ourselves.

Now when you build the Unit Tests target in the Debug configuration, the application will be built (if necessary) and launched, the unit tests will run, and the application will quit. Then lcov and genhtml are executed and the resulting report is opened, so that you see a window like the one shown in Figure 3.


Figure 3: Coverage Overview

An inline function in one of the headers causes the Headers folder to show up in the coverage analysis, but we are primarily interested in the Sources folder. Following that link and then the link to the results for main.cpp leads to the page shown in Figure 4.


Figure 4: Missed Lines

Blue lines were executed and orange lines were not. If a line is uncolored, then it does not contain executable code (statements that span multiple lines have the last line highlighted). In this case, it is the window event handler that is not being called. This isn't surprising, since we never interact with any windows in the program.

Testing the User Interface

Automated testing of the user interface is beyond the ability of the CPlusTest framework. However, we can add some supervised user interface testing to the project. The code in UTUI works for simple user interfaces. It opens a utility window that leads the user through the steps they should take to exercise the code. One step for SuperAdd is closing the window, and the utility window for this step is shown in Figure 5.


Figure 5: User Interface Testing

To achieve 100% code coverage of SuperAdd, you should comment/uncomment the lines in CTestRunner::RunTests to invoke ShowCoverageWindow instead of QuitApplicationEventLoop, build the Unit Tests target, switch to the application, and follow all of the instructions in the utility window.

Conclusion and References

I think I've run out of space, but hopefully you will be able to implement some of these ideas in projects of your own. The following is a list of references that have been mentioned in the article.

Documentation:

You can find a MacTech article about how to set up a target to use doxygen to document your code at:

http://www.mactech.com/articles/mactech/Vol.20/20.03/Documentingyourcode/index.html

Unit Testing:

You can find information about unit testing with Xcode at:

http://developer.apple.com/documentation/DeveloperTools/Conceptual/UnitTesting/UnitTesting.html

There is also a tutorial on using unit testing and coverage analysis at Chris Liscio's Boo-urns Log at:

http://www.supermegaultragroovy.com/Software%20Development/xcode_code_coverage_howto

GNU Documentation:

You can find documentation about gcov along with the rest of the gcc tools at:

http://gcc.gnu.org/onlinedocs/

Linux Coverage Tool:

The Linux coverage tools can be found at:

http://ltp.sourceforge.net/coverage/lcov.php


Aaron Montgomery teaches mathematics at Central Washington University. He also enjoys hiking, mountain biking, and alpine skiing. You can reach him at eeyore@monsterworks.com.

 
