
Taking Advantage of The Intel Core Duo Processor-Based iMac

Volume Number: 22 (2006)
Issue Number: 7
Column Tag: Performance Optimization


How to make your applications run faster

by Ganesh Rao and Ron Wayne Green

Introduction

This is the first of a three-part series addressing the most effective techniques for optimizing applications for the Intel(R) Core(TM) Duo processor-based Macs. Part one introduces the key aspects of the Core Duo processor and exposes the architectural features for which tuning matters most. It then describes at length a data-driven performance methodology that uses the software development tools available on a Mac to highlight tuning and optimization opportunities for a variety of applications. Intel Core Duo processors feature two execution cores, and each core is capable of vector processing of data, a capability referred to as Intel(R) Digital Media Boost, which extends Single Instruction Multiple Data (SIMD) technology. The second part of this series outlines how to take advantage of SIMD by enabling vectorization in the Intel compiler. The final part provides the next level of optimization by taking advantage of both execution cores in addition to SIMD. We will cover auto-parallelization, where the compiler can render simple loops parallel. Finally, we will cover OpenMP, a powerful set of user-specified directives embedded in source code that tell the compiler to thread the application. You will appreciate how easily you can thread applications while maintaining fine-grained control of threads.

In this article, we address advanced and innovative software optimization techniques supported by industry-leading compilers. These techniques are used in the field every day to get better performance. Key topics are illustrated with C++ and Fortran code snippets.

Intel Core Duo processor

There is a rumor going around that Apple Macs now use an Intel processor, and a very happy Intel processor at that! All humor aside, we know that the MacTech community is gaining a very sophisticated understanding of the details of the Intel Core Duo processor. In this section we call out the processor features that, based on our experience, are most likely to increase the performance of your application. The Intel Core Duo processor includes two execution cores in a single processor; please see Figure 1. Each of the execution cores supports Single Instruction Multiple Data (SIMD) processing, which performs multiple computations in parallel with a single instruction. Please see Figure 2 for a diagrammatic representation of SIMD.



Figure 1: Intel(R) Core(TM) Duo processor architecture



Figure 2: SIMD performs the same operation on multiple data

Applications that are most likely to benefit from SIMD are those that can be characterized as 'loopy': programs that spend a significant amount of time processing integers and/or floating-point numbers in a loop. An example is a matrix-multiply operation. Intel Streaming SIMD Extensions (SSE) and the AIM Alliance AltiVec* instructions are example implementations of SIMD. In part 2 of this 3-part series, we will share our best practices for taking advantage of the SIMD processing capability of your processor.

SIMD extracts the best performance from a single core. Taking this to the next level, one needs to keep both cores busy to get maximal performance from an application. The best way to take advantage of both execution cores is to thread your application. We will share some of our best-known methods for threading applications in the third part of the series, and wrap up by highlighting innovative compiler technologies.

Drawing the baseline

The start of any performance optimization activity should be a clear definition of the performance baseline. The unit of the baseline could be transactions per second or, more simply, the run time of the application. Our experience is that we set ourselves up for failure if we do not have a clear, reproducible understanding of the baseline. A reproducible baseline also means clearly defining your benchmark application with a workload that is representative of anticipated usage. It may be worthwhile at this stage to consider whether you can peel out the part of the application you wish to examine and wrap a main() function around it. This technique allows you to observe the behavior of the section of the application of most interest, and you can then use the 'time' utility to measure the time spent by the program. In most production applications, however, it is difficult to completely separate the kernel whose performance we wish to observe and improve. In these cases, it may be easier to insert timers in your code as shown below:

Example:

/* Sample timing with the standard C clock() API */
#include <stdio.h>
#include <time.h>

int main(void)
{
   clock_t start, finish;
   double duration;

   start = clock();
   // CODE TO BE MEASURED HERE
   //
   finish = clock();
   duration = (double)(finish - start)/CLOCKS_PER_SEC;
   printf("\n%2.3f seconds\n", duration);
   return 0;
}

While it is perfectly fine to use the clock() API for applications and sections of code that run for a sufficient duration, the resolution of the clock is not fine enough for measuring a small, fast-running section of code.

An alternative is to use the rdtsc (Read Time-Stamp Counter) instruction. The rdtsc instruction returns the number of CPU clocks elapsed since the last reset, which allows significantly higher resolution than the clock() API. The Intel compilers implement a convenient intrinsic that makes it easy to read the time-stamp counter.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
   uint64_t start, stop, elapsed;

#ifdef __INTEL_COMPILER
   // Start the counter
   start = _rdtsc();
#endif

   // Code to be measured here

   ...

#ifdef __INTEL_COMPILER
   // Stop the counter
   stop = _rdtsc();
#endif

   // Calculate the elapsed cycles
   elapsed = stop - start;
   printf("Processor cycles = %" PRIu64 "\n", elapsed);
   return 0;
}

As of this writing, rdtsc may in some cases report a wrong time-stamp counter value. The technique described above also does not work well if your thread switches context between the two cores, since each core maintains its own counter.

Another preferred alternative is to use the OS-supported mach_absolute_time API abstraction.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main(void)
{
    uint64_t start, stop, elapsed;

    // Start the clock.
    start = mach_absolute_time();

    // Code to be measured here

    ...

    //
    // Stop the clock.
    stop = mach_absolute_time();

    // Calculate the run time, in Mach absolute time units.
    // (mach_timebase_info() converts these units to nanoseconds.)
    elapsed = stop - start;
    printf("Elapsed time units = %" PRIu64 "\n", elapsed);
    return 0;
}

In our measurements, mach_absolute_time and rdtsc provided answers that were close, with small deviations. We need to clarify that while it may be comforting to think we are measuring with clock-tick accuracy, the measurements come with considerable variance. Specifically, you cannot measure the latency of a single instruction, or even a small bundle of instructions, using either rdtsc or mach_absolute_time. In many cases, it is to the programmer's benefit to set up benchmarks with a sufficient runtime between the start and stop timers; a sufficient runtime is at a minimum on the order of tens or hundreds of seconds.

Hotspots in the code

Once we have a baseline, a powerful alternative to hand-peeling code and inserting timers is to run a profiler to identify the hotspots in your code. Shark is a powerful tool to help you achieve this. We will not go into too much detail about using Shark in this article, since it is covered extensively elsewhere, and Shark can do much more than what we call out here. At a high level, Shark produces a time profile based on sampling your code at fixed time intervals. Depending on your application, you may see a profile that is relatively flat, meaning no particular area of your code is exercised more than others, or you may see clear peaks, meaning your program exercises a smaller portion of the code more extensively. Shark can also group the time profile by thread, allowing you to see the profile of your code for each individual thread.

As a quick guide, start Shark from the hard disk at "/Developer/Applications/Performance Tools/CHUD". Figure 3 shows the start of a Shark session.



Figure 3: Shark Info window

Don't hit Shark's "start" button yet. First, start the application you need to profile; then hit the "start" button in Shark. Once started, Shark automatically stops after 30 seconds, or you can hit "stop" yourself. Note that it is a good idea to take Shark snapshots over slightly extended periods to get repeatable results. Also, make sure you have stopped other applications so they do not pollute the gathered profile. Depending on your application, you may choose to start sampling after the application has "warmed up", progressing beyond startup initializations and initial file I/O. If you are experienced with your application and its runtime behavior, it is relatively easy to know the hotspots in your code and where they occur during a typical run. Thus, a good technique is to monitor your application's log output, determine when the hotspot begins, start Shark, and gather a profile over a sufficient length of time.



Figure 4: Shark Time Profile

Note that at this stage it may still be to your advantage to insert timers with print statements, as we saw in the previous section, around the areas of code that interest you.

Using the techniques highlighted above, we can gain insight into the operating characteristics of a program and understand where we can make a difference. We generally think first of performance improvements to the serial portion of the code, but we should also consider threading the code and the performance improvements threading can bring. We can make a back-of-the-envelope estimate of how much serial improvements can speed up the overall application using Amdahl's law, as illustrated below.

Let us say that the hotspot, the section of serial code we are optimizing, takes up fraction x of the total program run time. Then a speedup by factor y on this section of the code should theoretically improve overall performance by a factor of 1/((1-x) + x/y). As a limiting condition, the theoretical maximum speedup possible is 1/(1-x); this limit would be reached if the section of code under consideration took zero time to run. As an example, if the section we are focused on takes 50% of the total run time (x = 0.5) and we double its speed (y = 2), we can expect an overall speedup of 1/(0.5 + 0.5/2) = 1/0.75 = 1.33, a 33% improvement in overall performance. As a theoretical maximum, when speedup y tends to infinity, we can get a 2x performance gain for the whole application with x = 0.5.

Once we determine where we can make a difference, and how much of a difference we can make, we can look at ways and means of making improvements. Please note that while this article looks at serial improvements, a future article will look at estimating and planning for parallel improvements in detail.

One other related note before we end this section. As part of optimization, compilers can completely eliminate chunks of code they determine will not affect the outcome of the final program, a transformation referred to as dead-code elimination. While this is a very good thing for real applications, you need to be careful to ensure that the compiler does not throw away the performance kernel you have extracted into a snippet program for examination. Typically, an output statement that uses the result is all that is required to ensure the compiler does not eliminate the small section of code.

Compilers

This may sound like a cliche, but the first and foremost tool at your disposal for making a performance difference should be your compiler. In addition to the GNU Compiler (gcc), we will discuss the Intel(R) C++ compiler in the following sections. Both compilers integrate into Apple's Xcode integrated development environment, and are binary and source compatible with each other. Fortran developers can use the Intel(R) Fortran Compiler for Mac OS or one of several GNU options, including g77, gfortran, and G95. While the GNU compiler is invoked with the 'gcc' command, the Intel compilers are invoked with 'icc' for C/C++ and 'ifort' for Fortran. While the examples that follow use the Intel C/C++ compiler, the same options apply to the Intel Fortran compiler (ifort).

Generally speaking, newer versions of the compiler optimize for systems running newer processors. You can verify the version of the compiler by using the -v flag.

$ icc -v
Version 9.1
$ gcc -v
Using built-in specs.
Target: i686-apple-darwin8
Configured with: /private/var/tmp/gcc/gcc-5250.obj~12/src/configure --disable-checking 
-enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ 
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 
--build=powerpc-apple-darwin8 --with-arch=pentium-m --with-tune=prescott --program-prefix= 
--host=i686-apple-darwin8 --target=i686-apple-darwin8
Thread model: posix
gcc version 4.0.1 (Apple Computer, Inc. build 5250)
  

Here is a very brief rundown of the general optimization options available with the compilers. -O0 (gcc -O0 or icc -O0) means no optimization is turned on. While -O0 can be helpful for debugging applications, your application will run at significantly sub-optimal speed at this option level.

-O1 and -O2 are higher levels of optimization. -O1 usually makes optimization tradeoffs that result in shorter compile times compared to -O2.

-O3 is the highest level of optimization; it makes aggressive optimization decisions that require a judgment call between the size of the generated code and the expected speed of the resulting application.

We should note here that even with the best optimization options, compilers can still use your help. As an example, consider an often-overlooked performance hit: denormals. Denormalized IEEE floating-point values in your code can trigger exceptions that result in severe runtime penalties, because operations on denormal operands may require the hardware and the OS to intervene. When your application frequently uses very small numbers, you should consider taking advantage of the flush-to-zero (FTZ for short) feature. FTZ allows the CPU to take denormal values in its registers and convert them to zero, a valid IEEE representation. FTZ is the default when using SIMD.

Consider the following example where denormals are deliberately triggered for illustration. Here, we look at the timing between gcc and icc for the following example:

#include <stdio.h>

int main(void)
{
        long int i;
        double coefficient = .9;
        double data = 3e-308;

        /* Repeatedly scaling data down drives it into the denormal range. */
        for (i = 0; i < 99999999; i++)
        {
                data *= coefficient;
        }
        printf("%f\t %lx\n", data, *(unsigned long *)&data);
        return 0;
}
$ g++ -O3 denormal.cpp -o gden
$ time ./gden
0.000000         5
   real    0m13.462s
user    0m12.676s
sys     0m0.041s
$ icc denormal.cpp -o iden
denormal.cpp(8) : (col. 9) remark: LOOP WAS VECTORIZED.
$ time ./iden
0.000000         0
real    0m0.178s
user    0m0.138s
sys     0m0.006s

Notice that since the loop is fairly simple, the Intel compiler was able to vectorize it and therefore use SIMD. Because flush-to-zero is the default when using the SIMD registers, the runtime improvement is dramatic. We will dive into SIMD and auto-vectorization in more detail in the next installment of this series.

Next installment

Now that we have had a chance to go through the introductions, in the next installment we will see how to pack a punch into your optimizations without going through the tedious process of hand-assembling instructions or even writing intrinsics. We will accomplish this by taking advantage of the auto-vectorization feature. And yes, if you have AltiVec code or SSE instructions that you intend to migrate to take advantage of auto-vectorization, then the next installment is a must-read for you!

In the meantime, hopefully you will get the chance to visit with some members of the Intel Software Development Products team at WWDC.


Both authors are members of the Intel Compiler team. Ganesh Rao has been with Intel for over nine years and currently helps optimize applications to take advantage of the latest Intel processors using the Intel Compilers.

Ron Wayne Green has been involved in Fortran and high-performance computing applications development and support for over twenty years, and currently assists with Fortran and high-performance computing issues.

 

All contents are Copyright 1984-2011 by Xplain Corporation. All rights reserved.