
What's in Your Target

Volume Number: 24 (2008)
Issue Number: 03
Column Tag: Programming


Unit testing and analysis coverage

by Aaron Montgomery with Dave Dribin, contributing editor


If you are building projects with Xcode, you are already using targets. A target collects the information about how to build a library or application. More complicated projects may have one target that builds a library and a second target that builds an application that depends on that library. This article describes Xcode targets that help with auxiliary tasks. Using an Xcode target to produce documentation has been discussed in MacTech (see the references at the end of the article). In this article, we present a target that runs unit tests using the CPlusTest framework for Carbon applications (there is also the SenTestingKit framework for Cocoa applications, but we will not cover it here). We will then add a shell script that lets us use the Linux Test Project's coverage tools to analyze how much of our code we are executing. The inspiration for this article was the November 3, 2005 entry in Chris Liscio's log that discussed how to add gcov analysis to unit testing (see the references). This article assumes you are working with Xcode 3 and building for a Mac OS X 10.5 target. I have done similar projects with Xcode 2.2 and 2.3 on Mac OS X 10.4 and will point out differences for those configurations as we go along.

We start with a simple application called SuperAdd that implements a "highly optimized" adding routine. The application started as a basic Carbon Application project and we will assume that the reader already has the skills required to create a Carbon application with Xcode.

Unit Tests

Before I discuss how to add a testing target, a few words are in order about what unit tests can do, and (more importantly) what they cannot do. Unit tests are designed to call your functions with inputs that you specify and then verify that the function produced the correct output. Unit tests do not debug your code. They may help you determine which section of code is problematic, but they cannot tell you how to fix the problem.

Deciding which tests to write is important, but do not let it paralyze you. First consider which functions should be tested and try to establish the exact requirements of each function. Then you can write some tests that confirm that your function meets these requirements. Since it may be prohibitive to test every possible input, you will need to be judicious about which inputs you use to test your function. The Apple documentation provides some guidelines. As you continue to work on the main application, you will discover cases where the function fails to meet your needs, either because the original requirements are not exactly correct or because the function was improperly coded. Each time this happens, you can add a test. Thinking about how you will test your functions may also affect how you define your functions. A function called solely for its side effects will be tougher to test than one that produces an output. Similarly, monolithic functions with many tasks will be more difficult to test than smaller functions with a single clear task, since you will need to test the monolith with a larger variety of inputs. Finally, the CPlusTest framework does not support the testing of user-generated events. I will discuss a (naive) way to handle this for smaller projects in the section on code coverage below. There are commercial systems for testing user interfaces, but they are beyond the scope of this article.

Target Settings

These instructions come (mostly) from the Apple documentation for Unit Testing with the CPlusTest framework. Start by selecting New Target... from the Project menu. Select Unit Test Bundle from the Carbon section. Choose a name (I chose Unit Tests) and a project (SuperAdd). Voila, a unit testing target. Now you need to make some adjustments to the project configuration so the target will work.

At this point you have to decide whether to do unit testing with the Release configuration, the Debug configuration, or a new configuration. The advantage of unit testing the Release configuration is that you will be testing the shipping code. The disadvantage is that you will need to change some build settings to use the unit tests and the coverage analysis, and these changes may be inappropriate for the shipping product. The disadvantage of testing the Debug configuration is that you are not actually testing your shipping code. You will also not be able to use ZeroLink during these builds, which may matter to your development cycle. You could instead create a new configuration for unit testing (with or without coverage analysis); in larger products, this might be the more appropriate choice. For this demonstration, however, we will execute unit tests and coverage analysis with the Debug configuration.

Go to the Targets group and open the information inspector for the Unit Tests target. In the General tab, add a Direct Dependency of the SuperAdd application. This will build the application prior to testing it. In the Build tab, you will need to adjust a number of settings. Make sure that Configuration is set appropriately (in the case of this example, we are setting this up for the Debug Configuration). In the Linking collection, you will need to set the Bundle Loader to your executable. This will allow you to access functions and variables in the original application from your test code. The location for this example is
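For a Carbon application built into the default products directory, the Bundle Loader value typically looks like the following. This is an illustrative value based on the application name used in this article; the exact path depends on your project:

```
BUNDLE_LOADER = $(BUILT_PRODUCTS_DIR)/SuperAdd.app/Contents/MacOS/SuperAdd
```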


In the Unit Testing collection, you need to set the Test Host (the code that your test code will be injected into). In our case, this is the same as the Bundle Loader and so we can use $(BUNDLE_LOADER) as the value here. These settings will not affect the SuperAdd application, only the testing code. I have also used the same prefix header for the unit tests as I used for the executable. This prefix header declares a global variable (gInitialized) that is used in both sets of code. The SuperAdd code sets this variable to true when it is finished with its initialization routine. The Unit Tests code will not start running until this variable has been set to true. Using a common prefix header allows both sets of code to see this variable.
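The relevant part of the shared prefix header is just a declaration. The exact form here is an assumption; the prefix header in the project is the authority:

```
// Shared prefix header: visible to both SuperAdd and Unit Tests.
// The variable is defined, and set to true after initialization,
// in the SuperAdd code.
extern bool gInitialized;
```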

If you are building with Xcode 3, you can skip to the next section, entitled Source Code. If you are building with Xcode 2.3, you will need to make some other changes to the targets. In the Unit Tests target, you will want to add the flag -fno-cxa-atexit to the Other Flags in the Linking collection. This works around a bug introduced in Xcode 2.3, still present in 2.4, and fixed in Xcode 3. Now go to the Targets group and open the information inspector for the SuperAdd target. In the Build tab, you will need to adjust two settings. In this case, you are actually changing the build settings for the SuperAdd application, and you will probably only want to change them in the Debug configuration. In the Linking collection, you need to turn off ZeroLink. In the Code Generation collection, you need to turn off Symbols Hidden by Default. I could not find the Symbols Hidden by Default setting mentioned in the Apple documentation. If it is turned on, your Unit Tests bundle will not be able to see the variables and functions you want to use, and you will receive linking errors.

Source Code

Now you need to write the code that runs the tests and the code that implements the tests. Apple supplies a RunTestsInTimer class with the CPlusTest framework documentation that is used to run the tests. I have adjusted the code to create a CTestRunner class. When a CTestRunner is created, it will create an event loop timer. When the timer fires, the CTestRunner checks if the application is initialized. If the application is initialized, it will run the tests, otherwise it will wait until the timer fires again.

RunTests code in CTestRunner.cpp

void CTestRunner::RunTests(void)
{
   //gInitialized prevents premature running of tests
   if (gInitialized) {
      //prevent a second timer firing while we're doing the tests
      RemoveEventLoopTimer(myTimerRef);
      myTimerRef = NULL;
      DisposeEventLoopTimerUPP(myTimerUPP);
      myTimerUPP = NULL;
      //run the tests
      TestRun run;
      TestLog log(std::cerr);
      run.addObserver(&log);
      TestSuite& allTests = TestSuite::allTests();
      allTests.run(run);
      std::cerr << "Ran " << run.runCount() << " tests, "
         << run.failureCount() << " failed." << std::endl;
      //either quit the application
      QuitApplicationEventLoop();
      //or show User Interface test instructions
      //ShowCoverageWindow();
   }
}

The one significant change is the call to ShowCoverageWindow instead of QuitApplicationEventLoop. Since ShowCoverageWindow does not use the CPlusTest framework's testing macros and classes, but exists solely to obtain complete code coverage, I will discuss it in the section on code coverage below.

I create a testing class for each C module or C++ class used in the main project and use a standardized naming convention: the unit test class associated with the module foobar is named UTFoobar. I also organize the unit tests in a source tree underneath the folder Tests that mirrors the source tree used for the application. In this case, we have to test the superadd module, so we create a class called UTSuperadd. I have also created a module named UTUI that is designed to test the user interface. Like ShowCoverageWindow above, it focuses on code coverage and will be discussed later.

The UTSuperadd class is used to test the functions defined in superadd. The UTSuperadd class is a subclass of TestCase (a part of the CPlusTest framework) and contains a number of tests. The class declaration is given below.

UTSuperadd declaration in UTSuperadd.h

class UTSuperadd : public TestCase {
public:
   //! This method constructs a UTSuperadd.
   UTSuperadd(TestInvocation* inInvocation);
   //! This method destroys a UTSuperadd.
   virtual ~UTSuperadd(void);
   //! This method tests superadd's ability to add two negatives.
   void TestSuperAddNegNeg(void);
   //! This method tests superadd's ability to add a negative and a zero.
   void TestSuperAddNegZer(void);
   //   similar tests omitted
};

There are two choices when running multiple tests. You could create a single test method that executes all the tests, or you can create a number of smaller methods, each of which executes one test. The advantage of the single monolith is that there are fewer tests to register. However, testing will stop at the first failed test. With a number of smaller methods, you will get a log of which tests failed and which tests passed. Since this process is supposed to be automated, I prefer to run a lot of tests in a single batch rather than running until one test fails. Patterns among the failing tests can also hint at how to debug the code.

The code below demonstrates a simple test to verify that superadd(-1, -1) is correct. The definition of the method defines the test, the next line instantiates an object of type UTSuperadd and registers the test with the CPlusTest framework. You can use the macro CPTAssert to test assertions. If the input to the macro is false, an error will appear in the build results window.


// define the method
void UTSuperadd::TestSuperAddNegNeg(void)
{
   CPTAssert(superadd(-1, -1) == -1 + -1);
}
// register the test
UTSuperadd SuperAddNegNeg(
   TEST_INVOCATION(UTSuperadd, TestSuperAddNegNeg));

One issue that does not appear in this example is the issue of memory and resource allocation necessary for your tests. It may seem appropriate to make these allocations in a constructor, but that can cause problems since you cannot control exactly when the constructor will be executed (as the objects are static and hence you have no control over when they are created). Instead, allocations should occur in the virtual function setUp and deallocations should occur in the virtual function tearDown. These functions will be called immediately before and after each test is run. As a result, you know that they will be run after the application has been initialized and before the unit testing has ended.

Running the tests

When you build the Unit Tests target, the application will be built (if necessary) and then the Unit Tests target will be built. As part of the build process of the Unit Tests target, the application will be launched and the tests run. There is no need to choose Build and Run as the tests are run as part of the build process. You can see the Build Results and the Build Transcripts corresponding to running the tests in Figures 1 and 2.

Figure 1: Build Results

Figure 2: Build Transcript

Failed tests will show up as errors in the Build Results warnings pane. The Build Transcript lists the number of tests run and the number of tests that failed. Assuming your application did not crash, you will also get a note like "Passed tests for architecture 'i386'." This simply means that the application exited normally, it does not reflect whether individual tests were passed. Additional information about which tests ran and whether they passed or failed will also show up in the Build Transcript pane.

One thing you need to be careful about is that the tests will appear to have run even if there was some error in building the application or the unit test bundle. What happens is that an old application or test bundle from a previous build is run instead. You should always check the build log to make sure that this did not happen. For important milestone testing, cleaning all targets before running the tests might be a good policy so that you can ensure that the tests were run on the most recent build.

Coverage Testing

The goal of coverage testing is to execute each command in the source at least once. Like unit testing, a successful coverage test does not mean a bug-free program: SuperAdd passes the coverage testing with a phenomenal 100% coverage, but still contains a number of bugs.

The coverage tool provided with gcc is called gcov. You can find information about this tool in the GCC documentation (a link is provided in the references). Once you have set up the project to use gcov (steps I will present later in this article), you will generate three new types of files. Files with the suffix gcno are created when the application is built. They contain the information needed to link blocks of executable code in the binary with lines in the source files. Files with the suffix gcda are created when the application is run. They contain information about which blocks of code were executed. Files with the suffix gcov are created when you run gcov. These text files contain an annotated version of your source code where the annotations indicate how often each line of your source was executed. We will not use the gcov files directly, but will use the Linux Test Project's coverage tools to create a collection of interlinked HTML files with the same information. The lcov tool (a Perl script) collects the data from gcov and creates a tracefile, and the genhtml tool uses this tracefile to generate the interlinked HTML files with the coverage information.

One important thing to remember is that gcov counts the number of times a line of code was executed. If you are trying to verify that you are executing every instruction, your code layout should contain one instruction per line. Although formatting style is often personal preference or company policy, some formats are more amenable to coverage testing than others. For example, in the first conditional statement below, we cannot tell from the results if x was ever incremented, we just know that the equality was tested. The second layout allows us to determine if x was incremented.

Conditional statements

// here we cannot tell if x++ was executed
if (x == y) x++;

// here we can tell if x++ was executed
if (x == y)
   x++;

In addition to possibly adjusting your coding style, trying to obtain 100% coverage may require refactoring your code. If you are finding it difficult to reach some section of code buried inside a larger function, you may decide to write a new function that executes that code. Then you can test this function directly. Whatever you do, don't let the quest for 100% code coverage lead you to poor code. The final goal is a well-written program; code coverage is one way to help you get there, not the goal itself.

Getting lcov

The Xcode installer will install gcov. You can obtain lcov at the website listed in the references. The online documentation for lcov is out of date, however the man pages appear to be up to date. You will want to place these scripts somewhere convenient. One possibility is in your shell's executable path and another is to package them with the project. In this example, I have created a Tools folder as part of the project and added the scripts to this folder (so downloading the project will provide you with the scripts).

The biggest problem with the lcov script found online is that it is based on an older version of gcov. To reset the coverage testing process, the script attempts to delete all the old coverage data files. The script deletes files with the extension da; however, gcov now produces files with the extension gcda. To fix the lcov script, open it in a text editor and then find and replace all occurrences of .da with .gcda. If you download the lcov provided with the project, this has already been done for you.
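The same fix can be scripted with sed instead of a text editor. The snippet below demonstrates the substitution on a throwaway copy; the file names are illustrative, and the BSD sed shipped with Mac OS X needs a backup suffix attached to -i:

```shell
# demonstrate the .da -> .gcda substitution on a sample line
printf 'unlink("$graph.da");\n' > lcov_demo
sed -i.bak 's/\.da/.gcda/g' lcov_demo
cat lcov_demo   # the line now references $graph.gcda
```

Running the same command against the downloaded lcov script (instead of lcov_demo) performs the whole edit in one step.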

Target Settings

Again, we need to decide which build configuration we will want to use for coverage testing. If you are testing code coverage while running unit tests, this will be the same configuration you used to build the application that is tested with the unit tests. For this example, we will be adjusting the Debug configuration.

Open the information inspector for the SuperAdd target (not the Unit Tests target). In the Code Generation collection, turn on Instrument Program Flow and Generate Test Coverage Files (these options create the gcno and gcda files). In the Linking collection, add -lgcov to the Other Linker Flags (this option links in the gcov library). Notice that you do not need to adjust any settings for the Unit Tests target. You are not testing coverage of the code in the unit tests.

Shell Script

The unit tests are run in a Run Script phase of the Unit Tests target. Go to the Targets pane and disclose the phases for the Unit Tests target. Replace the Run Script phase script with the following code.

Run Script Phase for "Unit Tests" Info

source ${PROJECT_DIR}/Tools/

The shell script that is actually executed is shown below. I have used a prefix of MONSTERWORKS in the shell script to prevent clashes with other shell environment variables; in the listing below, I abbreviate this to MW. Long commands are broken across lines with backslash continuations. The values assigned at the top, and the tracefile name lcov/lcov.info, are representative of this example project; adjust them to match your own layout (or read the script included with the project).

# the name of the application
MW_APP_NAME="SuperAdd"
# the target that builds the executable
MW_TARGET_NAME="SuperAdd"
# the configuration in which we do unit testing/coverage analysis
MW_CONFIG="Debug"
# path to the lcov tools
MW_LCOV_PATH="${PROJECT_DIR}/Tools"
# where the object files for the application will be found
MW_OBJ_DIR="${PROJECT_TEMP_DIR}/${MW_CONFIG}/${MW_TARGET_NAME}.build/Objects-normal"
# we only execute the coverage test if we are using the 'Debug' configuration
if [ "${CONFIGURATION}" = "${MW_CONFIG}" ]; then
   # clean out the old data
   ${MW_LCOV_PATH}/lcov \
      --directory ${MW_OBJ_DIR} --zerocounters
   # remove the old report
   if [ -e lcov ]; then
      rm -r lcov/*
   fi
   # run the unit tests
   "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
   # create the coverage directory
   if [ ! -e lcov ]; then
      mkdir lcov
   fi
   # analyze the coverage data
   ${MW_LCOV_PATH}/lcov \
      --directory ${MW_OBJ_DIR} \
      --capture --output-file lcov/lcov.info
   # create the html pages
   ${MW_LCOV_PATH}/genhtml \
      --output-directory lcov lcov/lcov.info
   # open the coverage analysis
   open lcov/index.html
   # clean up
   ${MW_LCOV_PATH}/lcov \
      --directory ${MW_OBJ_DIR} --zerocounters
fi

Although it appears long and complicated, the steps are fairly simple. If we aren't using the correct configuration, we simply skip the script. Otherwise, we start by removing any of the coverage results from the previous run of the script. Be careful with the recursive rm command and confirm that you really are removing the files from the correct directory. After this, we run the unit tests. Next we run lcov to generate the coverage results and genhtml to produce the HTML pages. We finish by opening up the HTML pages and cleaning up after ourselves.

Now when you build the Unit Tests target in the Debug configuration, the application will be built (if necessary) and launched, the unit tests will run, and the application will quit. Then lcov and genhtml are executed and the results are opened, so that you see a window like the one shown in Figure 3.

Figure 3: Coverage Overview

There is an inline function in Headers that causes it to show up in the coverage analysis, but we are primarily interested in the Sources folder. Following that link and then the link to the results from main.cpp leads to a page shown in Figure 4.

Figure 4: Missed Lines

Blue lines were executed and orange lines were not. If the line is uncolored, then it does not contain executable code (commands that span multiple lines have the last line highlighted). In this case, it is the window event handler that is not being called. This isn't surprising since we never interact with any windows in the program.

Testing the User Interface

Automated testing of the User Interface is beyond the ability of the CPlusTest framework. However, we can interject some supervised user interface testing with the project. The code in UTUI works for simple user interfaces. It opens a utility window that leads the user through the steps they should take to exercise the code. One step for SuperAdd is closing the window and the utility window for this step is shown in Figure 5.

Figure 5: User Interface Testing

To achieve 100% code coverage of SuperAdd, you should comment/uncomment the lines in CTestRunner::RunTests to invoke ShowCoverageWindow instead of QuitApplicationEventLoop, build the Unit Tests target, switch to the application, and follow all of the instructions in the utility window.

Conclusion and References

I think I've run out of space, but hopefully you will be able to implement some of these ideas in projects of your own. The following is a list of references that have been mentioned in the article.


You can find a MacTech article about how to set up a target to use doxygen to document your code at:

Unit Testing:

You can find information about unit testing with Xcode at:

There is also a tutorial on using unit testing and coverage analysis at Chris Liscio's Boo-urns Log at:

GNU Documentation:

You can find documentation about gcov along with the rest of the gcc tools at:

Linux Coverage Tool:

The Linux coverage tools can be found at:

Aaron Montgomery teaches mathematics at Central Washington University. He also enjoys hiking, mountain biking, and alpine skiing. You can reach him at

