
Parallel Programming With the Intel Core Duo Processor

Volume Number: 22 (2006)
Issue Number: 9
Column Tag: Performance Optimization

Parallel Programming With the Intel Core Duo Processor

Extracting maximum performance from your applications

by Ron Wayne Green

Introduction

This is the third and final part of a three-part series that addresses the most effective techniques for optimizing applications for the Intel(R) Core(TM) Duo processor-based Macintosh. Part one of this series introduced key aspects of the Intel Core Duo processor and exposed the architectural features for which tuning is most important. Also presented in that first article was a data-driven performance methodology using the software development tools available on an Intel-based Macintosh to highlight tuning and optimization opportunities. The second article introduced the Intel(R) Digital Media Boost technology of the Intel Core Duo processor, its capabilities, and how a programmer can exploit this computing power. We wrap up the series with the most interesting and creative aspect of programming the Intel(R) Core Duo processor - taking advantage of both of its execution cores.

The Need for Parallelization

Multi-processor workstations have been available for many years. However, until recently, systems with more than one execution core have been beyond the price range of the average home user. The Intel(R) Core(TM) Duo processor is helping to change this. Now parallel processing capability is within the reach of every user. The question that naturally comes up is "Now that I have two cores, what can I do with them?" The short answer to this question is that you either:

    1. Run more programs simultaneously (multi-tasking)

    2. Parallelize your applications (multi-threading or parallel programming)

In this article we will take a look at these two methods of taking advantage of the additional processing power provided by a dual-core processor. We will emphasize parallel programming models, as these are perhaps of most interest to software developers who care about high-performance, scalable applications.

Multi-tasking Parallelism

Multi-tasking parallelism is a model where two or more processes are created and run simultaneously by the OS. The processes are created either programmatically, via fork()/exec() function calls, or often by simply running multiple instances of the same program at the same time. Processes created by either method are scheduled on the two cores by the OS.
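As a concrete illustration, the sketch below shows the programmatic route: the parent process fork()s one child per job and exec()s the same worker program on different input files. The worker program name ("./worker") and the input file names are hypothetical placeholders.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *inputs[] = { "job1.in", "job2.in" };  /* hypothetical input files */
    int i;

    for (i = 0; i < 2; i++) {
        pid_t pid = fork();
        if (pid == 0) {                               /* child: becomes one job */
            execl("./worker", "worker", inputs[i], (char *)NULL);
            perror("execl");                          /* reached only if exec fails */
            _exit(1);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }
    while (wait(NULL) > 0)                            /* parent: wait for all jobs */
        ;
    return 0;
}

The OS is then free to schedule the two child processes simultaneously, one on each core.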

This model of parallelism works well when the user is performing parametric studies, Monte Carlo simulations, statistical studies, or any workload where a large number of job runs is used to characterize a solution space. Typically these types of problems use tens to thousands of input files with variations on the characteristic parameters. The same executable program is run as a separate job for each file. The goal with this model is throughput, or processing as many runs in a given wall clock time as possible.

Where this model is applicable: This model of parallelism benefits from multiple cores by treating each core as an execution unit for a separate task (job). The extra processing core in the Intel(R) Core(TM) Duo processor gives the system twice the throughput capability of a single-core system. This model should be straightforward and familiar.

Parallelization via Multi-threading

The remainder of this article focuses on multi-threaded programming techniques. These techniques enable a single program to create multiple threads of execution. Threads are scheduled across the available execution cores, providing increased application performance.

Parallelization via auto-parallelizing compilers

Auto-parallelizing compilers attempt to analyze programs and automatically generate parallel code. As far as ease of use for the programmer, there is nothing easier than taking an existing serial code and allowing an auto-parallelizing compiler to generate parallel code. There is an old saying that applies here: If something seems too good to be true, it probably is. Creating an auto-parallelizing compiler that produces efficient code in all cases for a wide variety of applications has proven to be a very difficult task. Thus, auto-parallelization remains a hot research topic and the perfect auto-parallelizing compiler is yet to be produced. However, there are some cases where auto-parallelization is perfectly suited.

Let us examine the auto-parallelization capabilities of the Intel Fortran and C++ Compilers for Mac OS. Both of these compilers support auto-parallelization. In addition, these compilers have the capability to produce a parallelization report. Auto-parallelizing compilers are most effective on small, tight, uncomplicated loop structures, and there are some applications where the majority of time is spent in exactly such a loop. Consider the code example shown below, which we save in a file named "heat.f90".

      1   program heat
      2
      3   real, dimension(500,500), target :: plate
      4   real, dimension(498,498)         :: new_inside
      5   real, pointer, dimension(:,:)  :: n, e, s, w, inside
      6   real    :: diff
      7   integer :: i,j, niter
      8
      9   ! Set up initial conditions
     10   plate = 0
     11   plate(1:500,1) = 1.0  ! boundary values
     12   plate(1,1:500) = (/ ( 0.01*j, j = 500, 1, -1 ) /)
     13
     14   !  Point to parts of the plate
     15   inside => plate(2:499,2:499)
     16   n => plate(1:498,2:499)
     17   s => plate(3:500,2:499)
     18   e => plate(2:499,3:500)
     19   w => plate(2:499,1:498)
     20
     21   write(*,*) "starting iterations"
     22   do
     23     new_inside = (n + s + e + w)/4.0
     24     diff = maxval(abs(new_inside-inside))
     25     inside = new_inside
     26     if (diff < 1.0e-4) then
     27       exit  !...exits the do loop, not the program
     28     end if
     29   end do
     30   write(*,*) "done"
     31   end program heat

This example is a highly simplified 2-D heat transfer problem. In this example we simulate a 2-D plate of material with some heat applied to two edges. For simplicity we assume that this initial boundary condition is maintained. The purpose of this example is to simulate the heat transfer over simulated time from the edges of the plate into and across the plate until steady state is reached. The general method to simulate the heat transfer is to divide the plate into cells. In our example, we divide the plate into 500x500 cells represented by the array elements in plate. Focusing on a single cell, we assume that the heat energy in a cell at time T+1 is the average of the heat energy at time T in the four neighboring cells in the north, south, east and west directions. The formula for this is:

new = ( old(north) + old(south) + old(east) + old(west) ) / 4

or

new_energy( i,j ) = ( old( i-1,j ) + old( i+1,j ) + old( i,j+1 ) + old( i,j-1 ) )/4.0

We can visualize this as shown in Figure 1 below.



Figure 1. Energy for cell (i,j) at T+1

This example may take some consideration if you are not familiar with modern Fortran 90 array syntax and pointers. Let's examine this code. On line 3 we declare a 500x500 element array named plate. In Fortran 90, pointers can only point to variables with the TARGET attribute as we see in this declaration. The variable plate will hold the current, or time T, energy values. On line 4 we declare a second array named new_inside that will be used to store the new values inside the plate, the time T+1 values. The inside of the plate excludes the boundary cells which are assumed to be fixed energy and will not change over simulated time.

Keeping in mind that Fortran 90 pointers can point to array sections, we set up a pointer named inside that points to the non-boundary elements of plate. These are rows 2 through 499 and columns 2 through 499. This is done on line 15 in the code example and can be visualized by the diagram below. Keep in mind that pointer variable inside is pointing to a subset of the same memory as plate.



Figure 2. Pointer "inside" of "plate"

Now, visualize taking this inside section of plate and shifting it 1 cell north. This is what the pointer variable n (north) is set to with the statement on line 16. We do a similar initialization for the variables s (south), e (east), and w (west) on lines 17, 18, and 19 respectively. This can be visualized as shown in Figure 3.



Figure 3. Pointers for north, south, east, and west

Looking at the example code, line 22 starts a do loop that simulates the time steps of the simulation. Note that the exit criterion for the simulation is handled in the conditional on lines 26-28. The bulk of the work then is contained in the statement:

new_inside = (n + s + e + w)/4.0

This seemingly simple statement does an element-by-element addition of the cells pointed to by n, s, e, and w. After some study you will see that it is equivalent to the following double-nested loop:

do i=1,498
  do j=1,498
    new_inside(i,j) = ( n(i,j)+s(i,j)+e(i,j)+w(i,j) )/4.0
  end do
end do

The next interesting statement, line 24, takes advantage of the fact that Fortran 90 intrinsic functions abs and maxval can operate on array operands. The statement:

 diff = maxval(abs(new_inside-inside))

will find the absolute values of the differences between the new energies ( new_inside ) and the old energies ( inside ) for all cells. The result of the abs function is an array of shape (1:498, 1:498). This array is then the argument to the MAXVAL intrinsic, which returns the scalar maximum of these differences. We store this in the scalar variable diff, which is then used to determine when the difference between the old and new energies has dropped below a threshold at which we can assume the plate has reached a steady state. Again, this same functionality could be implemented with a double-nested loop. Obviously this example ignores many thermodynamic principles such as realistic time steps and thermal conductivity constants. Still, it is representative of many iterative approximation methods.

To test the efficiency of the auto-parallelization capabilities of the Intel Fortran Compiler for Mac OS, we first need to do a baseline run. For this example we use -O2 which you will recall is the default optimization level when -O is not specified.

$ ifort -o heat -O2 heat.f90
$ time ./heat
 starting iterations
 done
real    0m17.614s
user    0m17.555s
sys     0m0.025s

To auto-parallelize an application with the Intel Fortran and C++ compilers for Mac OS, we use the -parallel compiler option. This causes the compiler to look for parallelization opportunities throughout an entire source file.

$ ifort -o heat -O2 -parallel heat.f90
heat.f90(23) : (col. 2) remark: LOOP WAS AUTO-PARALLELIZED.
heat.f90(24) : (col. 9) remark: LOOP WAS AUTO-PARALLELIZED.

Note the remarks for lines 23 and 24. These indicate that the Fortran 90 array syntax was recognized and parallelized. We now run this parallelized version of the code.

$ time ./heat
 starting iterations
 done
real    0m11.422s
user    0m22.284s
sys     0m0.193s

The wall clock time is shown on the line labeled "real". Keep in mind that the "user" time reflects the time spent on BOTH cores added together. Before parallelization the real time was 17.6 seconds. After parallelization this was reduced to 11.4 seconds. When analyzing the improvement, a value called Speedup is used. Speedup is calculated as:

Speedup = ( Time sequential ) / (Time parallel)

For our example this gives: Speedup = 17.6 / 11.4 = 1.54, a 54% improvement. Is this acceptable? Keep in mind that we used a crude wall-clock measure to gauge our performance, which included application startup and teardown overhead. Also, when examples run for as short a time as this one, there is not enough work in the parallel regions to amortize the overhead costs of thread creation, dispatch, join, and teardown. We also have to look at the return on investment - how much effort did this take? We got a 54% speedup out of a simple compile and link with zero changes to the source code. Quite a good return on investment by any measure.

From this example, the primary limitation of auto-parallelization may have become apparent. Auto-parallelization is applied to an entire source file. There is no method to control which loops or array-syntax equivalents are parallelized. In some cases a loop may be parallelized but may not have enough work to amortize the cost of the parallelization overhead. Thus, it is not uncommon to find cases where an auto-parallelized application actually slows down relative to its sequential counterpart. To help determine where this is occurring, the Intel Fortran and C++ Compilers provide a parallelization report capability. This functionality is enabled with the -par-report compiler option (a usage example follows the option summary below):

    Syntax: -par-report[n]

    Arguments: n is the level of diagnostic messages to display.

Possible values are:

    0. No diagnostic messages are displayed

    1. Diagnostic messages are displayed indicating loops successfully auto-parallelized

    2. Diagnostic messages are displayed indicating loops successfully and unsuccessfully auto-parallelized

    3. The same diagnostic messages are displayed as specified by 2 plus additional information about any proven or assumed dependencies inhibiting auto-parallelization (reasons for not parallelizing).
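For example, to see both the loops that were parallelized and the reasons any others were not, we could recompile the heat example at the most verbose report level (the exact diagnostic text varies by compiler version, so no output is reproduced here):

$ ifort -o heat -O2 -parallel -par-report3 heat.f90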

Where this model is applicable: Auto-parallelization is most effective for codes with very simple, tight looping structures that are easily identifiable by the compiler. Keep in mind that a profile of your code should confirm that the majority of run time is spent in these tight loops. Auto-parallelization is performed at the source file scope. Thus, it is most effective for modular code. Auto-parallelization should be applied selectively, since in some cases it has been shown to negatively affect runtime performance.

Parallelization via parallel libraries

There is a class of applications that spend the majority of their runtime in common function calls. "Common" in this context refers to a function designed to solve widespread or well-known mathematical operations or data processing tasks. Examples include matrix operations and manipulations, fast Fourier transforms, image processing, video and audio encoding, encryption, signal processing, and compression/decompression. And I cannot repeat the following advice often enough: profile your application and know where it is spending time. Then examine these program hot spots and see whether each one is performing something very common. I am constantly amazed at how many applications contain hand-coded matrix routines such as matrix-matrix multiplication, eigenvalue calculation, matrix inversion, and so on. Also commonly found in user applications are user-written 2-D or 3-D fast Fourier transforms. The performance of user-written functions varies wildly, but in almost all cases it is substandard when compared to professionally written and tuned libraries from vendors.

It is also common for applications to rely on freely distributed source libraries such as BLAS, LAPACK, Atlas, and FFTW, to name a few. These library implementations are available for download in source form and are then compiled by the user and linked into an application. While these libraries often outperform user written code, they are typically not tuned for a particular target platform.

In last month's article we introduced the Intel(R) Math Kernel Library for Mac OS (Intel(R) MKL). We mentioned that Intel MKL provides functions for linear algebra (BLAS, LAPACK, sparse solvers), fast Fourier transforms (FFTs), vector math, and vector statistical functions. In addition to being highly vectorized, many of the Intel MKL routines are internally threaded. Thus, a sequential application can call an Intel MKL routine and that routine will automatically spawn threads and split the work across the available cores.
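As a minimal sketch of this model, the following program calls the double-precision matrix-matrix multiply (dgemm) through the CBLAS interface that Intel MKL provides. The header name and the link line are assumptions that depend on your particular MKL installation; consult the MKL documentation for the exact compile and link flags.

#include <stdio.h>
#include <stdlib.h>
#include <mkl_cblas.h>   /* assumed MKL CBLAS header; check your MKL installation */

int main(void) {
    int n = 1000, i;
    double *a = malloc(n * n * sizeof(double));
    double *b = malloc(n * n * sizeof(double));
    double *c = malloc(n * n * sizeof(double));
    for (i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    /* C = 1.0*A*B + 0.0*C; the library may split this work across both cores. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, a, n, b, n, 0.0, c, n);

    printf("c[0] = %f\n", c[0]);   /* 2000.0 for these inputs */
    free(a); free(b); free(c);
    return 0;
}

Note that the calling program is entirely sequential; any threading happens inside the library call.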

Where this model is applicable: This model applies to codes that spend the majority of their runtime in common library routines. In this model the programmer does not need to worry about manually parallelizing their application, they simply rely on the threaded parallel library function to leverage the available cores. Again, this model works best when the bulk of the application runtime is consumed in common library routines.

Parallelization via OpenMP

OpenMP is an application program interface (API) that provides a means for specifying parallelism in C, C++, or Fortran. OpenMP is a multi-platform, vendor-independent specification for a set of compiler directives, library routines, and environment variables used to specify shared-memory parallelism. It is widely available, flexible, and relatively easy to learn. OpenMP is supported by an independent organization that can be found at http://www.openmp.org. In addition, there are several books and many online tutorials and examples that can be used to learn OpenMP programming. A full description of OpenMP is far beyond the scope of this article. Instead we examine an example OpenMP application, describe how to build and run it, and examine performance on a dual-core system.

OpenMP is not a computer language. Rather, it works with an existing OpenMP capable Fortran or C++ compiler. The Intel Fortran and C++ Compilers for Mac OS support the OpenMP version 2.5 API specification. OpenMP allows a programmer to specify parallelism at the loop level. This is done by adding pragmas (C/C++) or compiler directives (Fortran) to existing source code. The advantage of this design is that non-OpenMP compilers simply ignore the pragmas or compiler directives. Thus only one source needs to be maintained for both OpenMP and sequential targets.
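A small sketch illustrates this single-source property. An OpenMP-capable compiler defines the standard _OPENMP macro and honors the directives, while any other compiler ignores both and still produces a correct serial program:

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void) {
#ifdef _OPENMP
    printf("OpenMP enabled, up to %d threads\n", omp_get_max_threads());
#else
    printf("compiled without OpenMP support\n");
#endif
    return 0;
}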

Let us begin our exploration of OpenMP by examining a serial application:

      1 #include <stdio.h>
      2 int main() {
      3  int n, i;
      4  double sum, pi, x, h;
      5
      6  n = 1000000000;
      7  h = 1.0 / n;
      8  sum = 0.0;
      9  printf("starting \n");
     10  for ( i=1 ; i<n ; i++ ){
     11    x = h * ( i-0.5);
     12    sum = sum + (4.0/(1.0+x*x));
     13  }
     14  pi = h*sum;
     15  printf("pi is %f \n",pi);
     16 }

Without going into a lot of detail, this program calculates an estimate of the value of pi by integrating the function f(x) = 4.0 / ( 1 + x^2 ) over the interval 0 <= x <= 1. The loop uses the midpoint rule to approximate the integral.
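For reference, the identity the loop approximates is:

\int_0^1 \frac{4}{1 + x^2}\,dx = 4 \arctan x \Big|_0^1 = 4 \cdot \frac{\pi}{4} = \pi

Each iteration adds the value of 4/(1 + x^2) at the midpoint of one of the n subintervals, and the final multiplication by h scales the sum by the subinterval width.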

Again, we generate a baseline run at -O2, this time focusing on the wall clock or "real" time:

icc -o pi -O2 pi.c
time ./pi
starting
pi is 3.141593
real    0m15.669s

Now we examine an OpenMP version of this same code:

      1 #include <stdio.h>
      2
      3 int main() {
      4  int n, i;
      5  double sum, pi, x, h ;
      6  n = 1000000000;
      7  h = 1.0 / n;
      8  sum = 0.0;
      9  printf("starting \n");
     10 #pragma omp parallel for private(i,x) shared(h) reduction(+:sum)
     11  for ( i=1 ; i<n ; i++ ){
     12    x = h * ( i-0.5);
     13    sum = sum + (4.0/(1.0+x*x));
     14  }
     15  pi = h*sum;
     16  printf("pi is %f \n",pi);
     17 }

The only difference between this version of the program and the serial version is the OpenMP pragma at line 10. This pragma tells the compiler that the C for loop that follows should be parallelized. This defines what is known as a parallel region. The iterations of this for loop are split equally amongst the available threads. For a dual-core processor, two threads are created by default: the first thread processes roughly the first half of the loop iterations (i=1 up to about n/2), and the second thread processes the remaining iterations (up through n-1).

The private, shared, and reduction clauses control the sharing of variables within the loop. The private clause specifies variables that are created and owned by each thread and are not shared. The shared clause specifies that the variable h holds a single value that is shared by the two threads. Finally, the reduction clause specifies that each thread gets a private copy of the variable sum. On exit from the parallel region the separate copies are combined with the "+" operation.
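To make the reduction clause concrete, the sketch below spells out by hand roughly what it amounts to: each thread accumulates its own partial sum, and the partials are combined at the end of the parallel region. This is an illustration only, not the code the compiler actually generates.

#include <stdio.h>

int main(void) {
    int n = 1000000000, i;
    double h = 1.0 / n, sum = 0.0;

    #pragma omp parallel private(i)
    {
        double x, my_sum = 0.0;            /* per-thread partial sum */
        #pragma omp for
        for ( i=1 ; i<n ; i++ ){
            x = h * ( i-0.5);
            my_sum = my_sum + (4.0/(1.0+x*x));
        }
        #pragma omp critical               /* combine partials one thread at a time */
        sum = sum + my_sum;
    }
    printf("pi is %f \n", h*sum);
    return 0;
}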

To compile an OpenMP program using the Intel C/C++ or Fortran Compilers for Mac OS, the -openmp compiler option is used:

icc -o pi_omp -O2 -openmp pi_omp.c
pi_omp.c(10) : (col. 1) remark: OpenMP DEFINED LOOP WAS PARALLELIZED.

By default, when we run this program we will create one thread for each core in the processor. For an Intel(R) Core(TM) Duo processor, this will be 2 threads. For an Intel(R) Core(TM) Solo processor, this will be 1 thread. Thus, on an Intel(R) Core(TM) Duo processor based system we get the following timing:

$ time ./pi_omp
starting
pi is 3.141593
real    0m8.421s

Calculating the speedup ( 15.669sec serial vs. 8.421sec parallel) we get:

    Speedup = 15.669 / 8.421 = 1.860 or approximately 86% speedup

OpenMP also allows a programmer to run an executable and force a non-default number of threads. This is done by setting the OMP_NUM_THREADS environment variable. For example, we can run this same executable on one core by setting OMP_NUM_THREADS=1 and simply rerunning the program (no recompilation required):

export OMP_NUM_THREADS=1
time ./pi_omp
starting
pi is 3.141593
real    0m16.651s

Collecting our results for a two-core system we see:

  Pi serial:                      15.669 (seconds)
  OpenMP PI run with 1 thread:    16.651 (seconds)
  OpenMP PI run with 2 threads:    8.421 (seconds)

Note that the 1-thread OpenMP case has about a 1 second overhead compared to the non-OpenMP version of the same code. This is overhead associated with initial process startup and parallel region overhead even when we do not use more than one thread. Keep in mind that this 1 second overhead would be considered "noise" in a real-world, longer-running application.

Where this model is applicable: OpenMP provides parallelization of looping constructs (for, DO, etc). This model is applicable to user codes where the majority of runtime is consumed inside loops. Unlike auto-parallelization, these looping structures can be large and/or complex. This makes OpenMP applicable to a large number of applications. Because of this, OpenMP is considered by many to be the method of choice for portable, flexible and easy parallel programming.

Parallelization via explicit threading

POSIX(R) Threads programming, more commonly known as pThreads programming, uses explicit API calls to create and manage user threads. This is the lowest-level parallel programming method presented in this article. To understand this model, we take the PI example above and present a pThreads implementation:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <pthread.h>
#define THREADS 2
#define N 1000000000
typedef struct {
        int id;
} parm;
double sum[THREADS], h;
void *Calc_Partials(void *arg) //thread function to calculate partial sum
{
  int my_thrd;           //this thread's number ( 0 through THREADS-1 )
  int my_iters;          //number of iterations for each thread
  int my_start, my_end;  //iteration start and end for each thread
  int i;
  double  x, local_sum = 0.0;  //this thread's partial sum must start at zero
  parm *p=(parm *)arg;
  my_iters = N / THREADS;
  my_thrd = p->id;
  my_start = (my_iters * my_thrd) + 1;
  my_end = my_iters * (my_thrd+1);
  for ( i=my_start ; i < my_end ; i++ ){
   x = h * ( i-0.5);
   local_sum = local_sum + (4.0/(1.0+x*x));
  }
  sum[my_thrd] = local_sum;  //store local partial to global partial
  return ( NULL );
}
int main(int argc, char* argv[]) {
        int i;
        pthread_t *threads;
        pthread_attr_t pthread_custom_attr;
        parm *p;
        h = 1.0 / N;
        threads=(pthread_t *)malloc(THREADS*sizeof(*threads));
        pthread_attr_init(&pthread_custom_attr);
        p=(parm *)malloc(sizeof(parm)*THREADS);
        /* Start up thread */
        for (i=0; i<THREADS; i++) {
                p[i].id=i;
                pthread_create(
                   &threads[i],
                   &pthread_custom_attr,
                   Calc_Partials,
                   (void *)(p+i));
        }
        /* Synchronize the completion of each thread. */
        for (i=0; i<THREADS; i++) pthread_join(threads[i],NULL);
        printf( "pi is %f\n", (sum[0]+sum[1])*h);
        free(p);
        free(threads);
        return 0;
}

Running this example, we use 2 threads, each of which is scheduled on one of the execution cores of the Intel(R) Core(TM) Duo processor.

time ./pi_pthreads
pi is 3.141593
real    0m8.123s

which is very similar to the OpenMP time of 8.421 seconds for the same problem.

There is obviously a lot of detail that could be discussed in this example. One immediate observation is that the pThreads version of the PI program is considerably more complex than the OpenMP version. In pThreads programming, the threads are dispatched via the pthread_create() function call. This call creates a thread and dispatches it to a user-written function, which in this example is named Calc_Partials(). You will recall that OpenMP is a method to parallelize loops; pThreads programming, by contrast, provides a coarser-grained level of parallelism than OpenMP.

Where this model is applicable: This model is most useful for applications that do not necessarily have loop structures to parallelize. Certain applications do not have clearly defined hot spots or hot loops. Rather, these applications may have a deep and complex call tree, spend a lot of time in I/O, or have a need for background processing. Or perhaps there is a need for non-symmetric parallelism: one thread may run function A while the second thread performs something completely different in function B. In these ways pThreads programming provides the greatest flexibility of all the methods presented, at the cost of increased program complexity.
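The minimal sketch below shows this kind of non-symmetric, functional parallelism: one thread runs one function while a second thread runs something completely different. The two function bodies here are hypothetical placeholders.

#include <stdio.h>
#include <pthread.h>

void *background_io(void *arg)  { printf("thread A: handling I/O\n");      return NULL; }
void *heavy_compute(void *arg)  { printf("thread B: crunching numbers\n"); return NULL; }

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, background_io, NULL);
    pthread_create(&b, NULL, heavy_compute, NULL);
    pthread_join(a, NULL);   /* wait for both threads to complete */
    pthread_join(b, NULL);
    return 0;
}

On an Intel Core Duo system the two threads can run simultaneously, one on each core.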

Summary

The Intel(R) Core(TM) Duo processor provides parallel processing capability at a very affordable cost. This is spawning demand for applications that can utilize the extra processing capability. Developers are responding by taking a look at their applications and determining how best to parallelize their code. This article presented an introduction to parallelization methods. There is no one model that is universally applicable to all applications. It is hoped this overview will allow you to analyze your application and determine which method will work best for it.

Further Reading

Examples shown in this article are available for download. Please visit the MacTech source code ftp and select this month's issue, 22.09:

http://www.mactech.com/editorial/filearchives.html

There are many outstanding books on parallel programming techniques, OpenMP, and POSIX(R) threads programming. The following list provides a starting point but is by no means an exhaustive list.

A good starting point for understanding the Intel(R) Core(TM) Duo processor and parallel programming models is Multi-Core Programming, Shameem Akhter and Jason Roberts, Intel Press, ISBN 0-9764832-4-6.

OpenMP tutorials and examples are readily available online at www.openmp.org. In addition, this website has lists of several very thorough books on OpenMP.

POSIX(R) Threads programming information can be found in several books. A good starting point is Programming with POSIX(R) Threads, David R. Butenhof, Addison-Wesley Professional, ISBN 0201633922.


Ron Wayne Green is a member of the Intel Compiler team. He has been involved in Fortran and high-performance computing applications development and support for over twenty years and contributes his expertise to technical computing issues.

 