
Taking Advantage of The Intel Core Duo Processor-Based iMac

Volume Number: 22 (2006)
Issue Number: 7
Column Tag: Performance Optimization

Taking Advantage of The Intel Core Duo Processor-Based iMac

How to make your applications run faster

by Ganesh Rao and Ron Wayne Green

Introduction

This is the first of a three-part series addressing the most effective techniques for optimizing applications for Intel(R) Core(TM) Duo processor-based Macs. Part one introduces the key aspects of the Core Duo processor and exposes the architectural features for which tuning is most important. We then describe at length a data-driven performance methodology that uses the software development tools available on the Mac to highlight tuning and optimization opportunities for a variety of applications. Intel Core Duo processors feature two execution cores, and each core is capable of vector processing of data; this capability, referred to as Intel(R) Digital Media Boost, extends Single Instruction Multiple Data (SIMD) technology. Part two of the series outlines how to take advantage of SIMD by enabling vectorization in the Intel Compiler. The final part takes optimization to the next level by exploiting both execution cores in addition to SIMD. We will cover auto-parallelization, in which the compiler renders simple loops parallel, and finally OpenMP, a set of powerful user-specified directives embedded in source code that, almost auto-magically, tell the compiler to thread the application. You will love how easily you can thread applications while maintaining fine-grained control of the threads.

In this article, we address advanced and innovative software optimization techniques supported by industry-leading compilers. These optimization techniques are used in the field every day to get better performance. Key topics are illustrated with C++ and Fortran code snippets.

Intel Core Duo processor

There is a rumor going around that Apple Macs now use an Intel processor, and a very happy Intel processor at that! All humor aside, we know that the MacTech community is gaining a very sophisticated understanding of the details of the Intel Core Duo processor. In this section, we call out the processor features that, based on our experience, can be leveraged to extract the most application performance. The Intel Core Duo processor includes two execution cores in a single processor; please see Figure 1. Each execution core supports Single Instruction Multiple Data (SIMD) processing, which performs multiple computations in parallel with a single instruction; please see Figure 2 for a diagrammatic representation of SIMD.



Figure 1: Intel(R) Core(TM) Duo processor architecture



Figure 2: SIMD performs the same operation on multiple data

Applications most likely to benefit from SIMD are those that can be characterized as 'loopy': programs that spend a significant amount of time processing integers and/or floating-point numbers in loops. A matrix-multiply operation is a classic example. Intel Streaming SIMD Extensions (SSE) and the AIM Alliance AltiVec* instructions are example implementations of SIMD. In part two of this series, we will share our best practices for taking advantage of the SIMD processing capability of the processor.
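
To make this concrete, here is a minimal sketch of the kind of 'loopy' kernel that lends itself to SIMD; the function saxpy and its arguments are our own illustration rather than code from any particular application:

/* A hypothetical 'loopy' kernel: every iteration applies the same
   multiply-add to independent array elements, so a SIMD-capable
   compiler can process several elements per instruction. */
void saxpy(int n, float a, const float *x, const float *y, float *z)
{
    for (int i = 0; i < n; i++)
        z[i] = a * x[i] + y[i];    /* same operation, multiple data */
}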

SIMD extracts the best performance from a single core. Taking this to the next level, it is clear that one must keep both cores busy to get maximal performance from an application, and the most effective way of taking advantage of both execution cores is to thread your application. We will share some of our best-known methods for threading applications in the third part of the series, and we will wrap up the series by highlighting innovative compiler technologies.
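
As a preview of that discussion, the sketch below shows how a single OpenMP directive can split the loop from the previous example across both cores. This is our own illustration, and it assumes the loop iterations are independent; compile with the Intel compiler's -openmp option:

#include <omp.h>

/* Hypothetical example: the same kernel, threaded with OpenMP.
   The parallel for directive divides the iterations between cores. */
void saxpy_threaded(int n, float a, const float *x, const float *y, float *z)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        z[i] = a * x[i] + y[i];
}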

Drawing the baseline

The start of any performance optimization activity should be a clear definition of the performance baseline. The unit of the baseline could be transactions per second or, more simply, the run time of the application. Our experience is that we set ourselves up for failure if we do not have a clear, reproducible understanding of the baseline. Having a reproducible baseline also means clearly defining your benchmark application, with a workload that is representative of anticipated usage. It may be worthwhile at this stage to consider whether you can peel out the part of the application you wish to examine and wrap a main() function around it. This technique allows you to observe the behavior of the section of the application of most interest, and you can then use the 'time' utility to measure the time spent by the program. In most production applications, it is difficult to completely separate the kernel whose performance we wish to observe and improve. In these cases, it may be easier to insert timers in your code, as shown below:

Example:

/* Sample timing using the C standard library clock() API. */
#include <stdio.h>
#include <time.h>

int main(void)
{
   clock_t start, finish;
   double duration;

   start = clock();
   // CODE TO BE MEASURED HERE
   //
   finish = clock();

   /* clock() counts processor ticks; CLOCKS_PER_SEC converts to seconds. */
   duration = (double)(finish - start)/CLOCKS_PER_SEC;
   printf("\n%2.3f seconds\n", duration);
   return 0;
}

While it is perfectly fine to use the clock() API for applications and sections of code that run for a sufficient duration, its resolution is not fine enough for measuring a small, fast-running section of code.

An alternative is to use the rdtsc instruction (Read Time-Stamp Counter), which returns the number of CPU clocks elapsed since the last reboot. This allows significantly higher resolution than the clock() API. Intel compilers implement a convenient intrinsic that makes it easy to read the counter.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t start;
    uint64_t stop;
    uint64_t elapsed;

#if __INTEL_COMPILER
    // Start the counter.
    start = _rdtsc();
#endif

    // CODE TO BE MEASURED HERE
    // ...

#if __INTEL_COMPILER
    // Stop the counter.
    stop = _rdtsc();
#endif

    // Calculate the elapsed clocks.
    elapsed = stop - start;
    printf("Processor cycles = %llu\n", (unsigned long long)elapsed);
    return 0;
}

As of this writing, rdtsc may in some cases report a wrong time-stamp counter value. The technique described above also does not work well if your thread switches context between the two cores, since each core maintains its own counter.

The preferred alternative is to use the OS-supported mach_absolute_time() API abstraction.

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void)
{
    uint64_t        start;
    uint64_t        stop;
    uint64_t        elapsed;

    // Start the clock.
    start = mach_absolute_time();

    // CODE TO BE MEASURED HERE
    // ...

    // Stop the clock.
    stop = mach_absolute_time();

    // Calculate the run time in Mach absolute time units.
    elapsed = stop - start;
    printf("Mach absolute time units = %llu\n", (unsigned long long)elapsed);
    return 0;
}
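
Note that mach_absolute_time() returns Mach absolute time units rather than seconds. If you want wall-clock units, the mach_timebase_info() API supplies the conversion ratio; the following is a minimal sketch of the conversion:

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void)
{
    // The kernel supplies a numer/denom pair that converts
    // Mach absolute time units to nanoseconds.
    mach_timebase_info_data_t info;
    mach_timebase_info(&info);

    uint64_t start = mach_absolute_time();
    // CODE TO BE MEASURED HERE
    // ...
    uint64_t elapsed = mach_absolute_time() - start;

    uint64_t nanos = elapsed * info.numer / info.denom;
    printf("Elapsed time = %llu ns\n", (unsigned long long)nanos);
    return 0;
}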

In the measurements we made, mach_absolute_time and rdtsc provided answers that were close, though with small deviations. We should clarify that while it may be comforting to think we are measuring with the accuracy of clock ticks, the measurements come bundled with a lot of variance. Specifically, you cannot measure the latency of a single instruction, or even a small bundle of instructions, using either rdtsc or mach_absolute_time. In many cases it benefits the programmer to set up benchmarks with a sufficient runtime between the start and stop of the timer; a sufficient runtime may be, at a minimum, on the order of tens or hundreds of seconds.
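
A simple way to reach a sufficient runtime is to execute the kernel many times between the timer calls and report the average. This sketch assumes the kernel behaves the same on every repetition; the REPS value is ours and should be tuned to your code:

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

enum { REPS = 1000 };   /* hypothetical; choose so total runtime is long enough */

int main(void)
{
    uint64_t start = mach_absolute_time();
    for (int r = 0; r < REPS; r++) {
        // CODE TO BE MEASURED HERE
    }
    uint64_t elapsed = mach_absolute_time() - start;

    printf("Average per repetition = %llu units\n",
           (unsigned long long)(elapsed / REPS));
    return 0;
}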

Hotspots in the code

Once we have a baseline, a powerful alternative to hand-peeling code and inserting timers is to run a profiler to identify the hotspots in your code. Shark is a powerful tool for this. We will not go into much detail about using Shark in this article, since it is covered extensively elsewhere, and Shark can do far more than what we call out here. At a high level, Shark produces a time profile by sampling your code at fixed time intervals. Depending on your application, you may see a profile that is relatively flat, meaning no particular area of your code is exercised more than others, or you may see clear peaks, meaning your program exercises a smaller portion of the code more intensively. Shark can also group the time profile by thread, allowing you to see the profile of your code for each individual thread.

As a quick guide, start Shark from the hard disk at "/Developer/Applications/Performance Tools/CHUD". Figure 3 shows the start of a Shark session.



Figure 3: Shark Info window

Don't hit Shark's "Start" button yet. First, start the application you need to profile, then hit "Start" in Shark. Once started, Shark stops automatically after 30 seconds, or you can hit "Stop" yourself. Note that it is a good idea to take Shark snapshots over slightly extended periods to get repeatable results. Also, make sure you have stopped other applications so they do not pollute the gathered profile. Depending on your application, you may choose to start sampling after the application has "warmed up", that is, progressed beyond startup initialization and initial file I/O. If you are experienced with your application and its runtime behavior, it is relatively easy to know the hotspots in your code and where they occur during a typical run. A sound technique, then, is to monitor your application's log output, determine when the hotspot begins, start Shark, and gather a profile over a sufficient length of time.



Figure 4: Shark Time Profile

Note that at this stage it may still be to your advantage to insert timers with print statements, as shown in the previous section, around the areas of code that are of interest to you.

Using the techniques highlighted above, we can gain insight into the operating characteristics of our programs and understand where we can make a difference. We generally think first of improving the serial portion of the code, but we should also consider threading the code and the performance improvements threading can bring. Using Amdahl's law, we can make a back-of-the-envelope estimate of how much serial improvements can speed up the overall application, as illustrated below.

Let us say that the hotspot, or the section of serial code we are optimizing, takes up fraction x of the total program run time. Then speeding up this section by a factor y should theoretically improve overall performance by a factor of 1/((1 - x) + x/y). As a limiting condition, the theoretical maximum speedup is 1/(1 - x), which would occur if the section we are considering took zero time to run. As an example, if the section we are focused on takes 50% of the total run time (x = 0.5) and we double its speed (y = 2), we can expect an overall speedup of 1/(0.5 + 0.5/2) = 1/0.75 = 1.33, or a 33% improvement in overall performance. At the theoretical maximum, as y tends to infinity with x = 0.5, we get a 2x performance gain for the whole application.
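
A few lines of C reproduce this arithmetic; the function below is purely illustrative:

#include <stdio.h>

/* Amdahl's law: overall speedup when fraction x of the runtime
   is made y times faster. */
static double amdahl(double x, double y)
{
    return 1.0 / ((1.0 - x) + x / y);
}

int main(void)
{
    printf("x = 0.5, y = 2: %.2fx overall\n", amdahl(0.5, 2.0));  /* 1.33x */
    printf("x = 0.5, y -> infinity: %.2fx overall\n", 1.0 / (1.0 - 0.5));
    return 0;
}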

Once we determine where we can make a difference, and how much of a difference we can make, we can then look at ways and means of making the improvements. Note that while this article looks at serial improvements, a future article will look at estimating and planning for parallel improvements in detail.

One other related note before we end this section. As part of optimization, compilers can completely eliminate chunks of code that they determine will not affect the outcome of the final program; this is referred to as dead-code elimination. While this is a very good thing for real applications, you need to be careful that the compiler does not throw away the performance kernel you have extracted into a snippet program for examination. Typically, an output statement that consumes the result is all that is required to keep the compiler from eliminating the small section of code.
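
For instance, in a hypothetical snippet program like the one below, the optimizer could legally discard the entire loop if its result were never used; the final printf is what keeps the measurement honest:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* The extracted kernel being timed. */
    for (long i = 0; i < 100000000L; i++)
        sum += i * 0.5;

    /* Without this output, the compiler may conclude that 'sum' is
       unused and eliminate the loop, ruining the measurement. */
    printf("checksum = %f\n", sum);
    return 0;
}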

Compilers

This may sound like a cliché, but perhaps the first and foremost tool at your disposal for making a performance difference is your compiler. In addition to the GNU compiler (gcc), we will be discussing the Intel(R) C++ Compiler in the following sections. Both compilers integrate into Apple's Xcode integrated development environment, and they are binary and source compatible. Fortran developers can use the Intel(R) Fortran Compiler for Mac OS or one of several GNU options, including g77, gfortran, and G95. The GNU compiler is invoked with the 'gcc' command; the Intel compilers are invoked with the 'icc' command for C/C++ and the 'ifort' command for Fortran. While the examples that follow use the Intel C/C++ compiler, the same options apply to the Intel Fortran compiler (ifort).

Generally speaking, newer versions of a compiler optimize better for newer processors. You can verify the version of your compiler with the -v flag.

$ icc -v
Version 9.1
$ gcc -v
Using built-in specs.
Target: i686-apple-darwin8
Configured with: /private/var/tmp/gcc/gcc-5250.obj~12/src/configure --disable-checking 
-enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ 
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 
--build=powerpc-apple-darwin8 --with-arch=pentium-m --with-tune=prescott --program-prefix= 
--host=i686-apple-darwin8 --target=i686-apple-darwin8
Thread model: posix
gcc version 4.0.1 (Apple Computer, Inc. build 5250)
  

Here is a very brief rundown of the general optimization options available with the compilers. -O0 (gcc -O0 or icc -O0) turns optimization off. While -O0 may be helpful for debugging applications, your application will run at a significantly sub-optimal speed at this option level.

-O1 and -O2 are higher levels of optimization. -O1 usually makes optimization tradeoffs that result in a smaller compile time compared to -O2.

-O3 is the highest level of optimization. It makes aggressive optimization decisions that require a judgment call between the size of the generated code and the expected speed of the application.
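
In practice, a quick way to see what these levels buy you is to build and time the same benchmark at each level; the source file name here is hypothetical:

$ icc -O0 kernel.c -o kernel_O0 && time ./kernel_O0
$ icc -O2 kernel.c -o kernel_O2 && time ./kernel_O2
$ icc -O3 kernel.c -o kernel_O3 && time ./kernel_O3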

We should note that even with the best optimization options, compilers can still use your help. As an example, consider an often-overlooked performance hit: denormals. Denormalized IEEE floating-point values in your code can trigger exceptions that result in severe runtime penalties, because operations with denormal operands may require the hardware and the OS to intervene. When your application frequently works with very small numbers, you should consider taking advantage of the flush-to-zero (FTZ for short) feature. FTZ allows the CPU to take denormal values in registers within the CPU and convert them to zero, a valid IEEE representation. FTZ is the default when using SIMD.

Consider the following example, where denormals are deliberately triggered for illustration, and compare the timings between gcc and icc:

#include <stdio.h>

int main(void)
{
        long int i;
        double coefficient = .9;
        double data = 3e-308;   /* starts near the bottom of the double range */

        /* Repeated multiplication drives 'data' through the denormal range. */
        for (i = 0; i < 99999999; i++)
        {
                data *= coefficient;
        }
        printf("%f\t %lx\n", data, *(unsigned long *)&data);
        return 0;
}
$ g++ -O3 denormal.cpp -o gden
$ time ./gden
0.000000         5
real    0m13.462s
user    0m12.676s
sys     0m0.041s
$ icc denormal.cpp -o iden
denormal.cpp(8) : (col. 9) remark: LOOP WAS VECTORIZED.
$ time ./iden
0.000000         0
real    0m0.178s
user    0m0.138s
sys     0m0.006s

Notice that because the loop was fairly simple, the Intel compiler was able to vectorize it and therefore use SIMD. And because flush-to-zero is the default when using SIMD registers, the runtime improvement is dramatic. We will dive into SIMD and auto-vectorization in more detail in the next installment of this series.

Next installment

Now that we have had a chance to go through the introductions, in the next installment we will see how to pack a punch into your optimizations without going through the tedious process of hand-assembling instructions, or even writing intrinsics. We will accomplish this by taking advantage of the auto-vectorization feature. And yes, if you have AltiVec code or SSE instructions that you intend to migrate to take advantage of auto-vectorization, then the next installment is a must-read for you!

In the meantime, hopefully you will get the chance to visit with some members of the Intel Software Development Products team at WWDC.


Both authors are members of the Intel Compiler team. Ganesh Rao has been with Intel for over nine years and currently helps optimize applications to take advantage of the latest Intel processors using the Intel Compilers.

Ron Wayne Green has been involved in Fortran and high-performance computing applications development and support for over twenty years, and currently assists with Fortran and high-performance computing issues.

 
