
Paralation
Volume Number: 8
Issue Number: 7
Column Tag: Lisp Listener

The Paralation Model

A simple model consisting of only one data structure and three operators

By Paul Snively, Sunnyvale, California

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

We’re rapidly approaching perhaps the most interesting time in all of the existence of personal computing. The winds of change are upon us, and this much is clear: we programmers are going to have to adopt new programming paradigms if we expect to survive into the future.

In terms of the hardware that we use, old classifications and distinctions are being either blurred or obliterated on a daily basis. The line between high-end PC and low-end workstation is becoming artificial, a matter of marketspeak only, as time marches on. The machine that this magazine is devoted to has always had the capability that is probably the second most often used to designate a workstation as opposed to a PC: networking. The most-often used designator, “performance,” is something that can be achieved principally through changing processors. Nearly everyone in the industry has heard about the architectures vying for attention (Motorola’s 88000 family, MIPS, Sun’s SPARC, and IBM’s RS/6000, to name only the ones that come immediately to mind). In addition, Apple has been investigating RISC for some time.

It only stands to reason that another revolution will ensue as the average power of even the most affordable machines goes from the order of single-digit MIPS (with the “high end” Macintosh IIfx being around 8.5, if memory serves me correctly) to more on the order of low three-digit MIPS, say 120 on the “low” end of the spectrum. If the new revolution doesn’t begin (if all we do when we go from 8.5 MIPS to 120 MIPS is recalculate our spreadsheets faster), we’ll have only ourselves to blame. Regardless, something that should be fairly apparent in all of this is that the chips are approaching their limits. I’m not referring to limits of the state of the art; those change daily. Rather, I’m referring to the pace at which the capabilities of our silicon foundries are approaching the limits of the laws of physics. We can only go so far with smaller/faster/cheaper until Einstein, Bohr, and others step in to muddy the waters. Mechanical engineering gives way to quantum mechanics, as it were.

This may sound pretty dismal, but there is an obvious solution to the problem: start building machines with more than one processor. If one blazing processor is good, wouldn’t more be better? The answer, of course, is “yes, depending upon the nature of the problem being solved,” and that opens a can of worms all its own. If you have a parallel architecture, the way that you must go about creating software for it changes accordingly, and most of us are not accustomed to programming for parallelism. God knows I’m not. And it’s not as though there were a consistent mental model to apply to the notion of parallel programming, either. For one thing, most of the effort that I’m familiar with regarding parallel programming has been done in order to adapt an already largely outmoded programming methodology, “structured programming,” the holy grail of the ’70s, to parallelism. Research to integrate what I’ll call the holy grail of the ’80s, “object-oriented programming” (which seems better suited to programming in the large than structured programming alone), with parallel programming still seems to be largely open, with few real-world implementations available.

Parallelism has its own language, of course, with terms like SIMD, MIMD, Shared Memory, coarse- and fine-grained, “massively,” data flow, and the like, and has systems with names like Transputer, Sequent, Butterfly, Hypercube, Cray, and Connection Machine. In terms of generally-applicable software to allow the implementation of parallel systems, the ones that I’ve heard the most about are Linda (as a model and environment) and Occam (as a language). Unfortunately for me, I could never wrap my brain around either of them, although I’d like to give Linda another shot some day.

In the meantime, there is another model that I think deserves a lot of attention, because it gives you explicit control over the costs of both computation (work done by the processors) and communication (how much wire a signal must traverse, and how many intermediate stops it must make, to get from one processor to another). It’s architecture independent: it doesn’t care whether it’s running on a MIMD, SIMD, Shared Memory, or other kind of machine. Most importantly to me, it’s simple. The whole model consists of one, count ’em, one data structure and three, count ’em, three operators. The model is also base-language independent: there are at least two implementations in Lisp, and an experimental implementation in C.

The model is called the Paralation Model, and was developed by Gary Sabot, a senior research scientist at Thinking Machines, Inc., creators of the Connection Machine, a massively-parallel computer capable of supporting up to 65,536 processors. It’s described in the book entitled The Paralation Model, published by The MIT Press, ISBN 0-262-19277-2.

The principal data structure of the Paralation Model is the field. A field is simply a collection of data that can be referred to by an ordinal index. Related fields make up a paralation, which is short for “parallel relation.” A paralation always includes an index field. A paralation consists of sites, which are similar to the elements of an array; the nth site of a paralation contains the nth element of each of the paralation’s fields, and the index field at that site has the value n. When you create a paralation, you create its index field. Additional fields can then be added to the paralation, and each of them contains a pointer to the index field.

Figure 1

Elements of fields in a given paralation that have a particular index are guaranteed to be near each other; that is, the 173rd elements of two fields in the same paralation are near each other, as are the 2nd elements, and so on. This means that intra-site communication is guaranteed to be cheap. Inter-site communication, on the other hand, can be arbitrarily cheap or costly. By default, paralations are “shapeless”: there’s no information regarding the nearness of sites in one paralation to the sites in any other paralation. You can, however, specify the shape of paralations in order to ensure the least costly inter-site communication possible, which may have a dramatic effect on system performance.

When you create a paralation, you automagically create its index field, and in fact, that is what is returned, as the example below shows.

A word about the examples is in order here: first, I’m a Lisp hacker, so the examples will all be in Paralation Lisp. A secondary reason for this is that, as of the edition of The Paralation Model that I own, the Paralation C compiler had not been completed, and all of the examples in the book are in Paralation Lisp. Perhaps the most important motivation of all, however, is that the book includes a complete “Tiny Paralation Lisp” simulator that will work in any [Steele84] Common Lisp implementation. In case you’re wondering how the simulator can be “complete” and “tiny” at the same time, the answer is that it completely implements the model (remember, it’s only one data structure and three operators), but it doesn’t go miles out of its way to do type checking or error handling, or to generate efficient code on a serial machine. Neither does it support shaped paralations, which isn’t really a loss on a serial machine. There is a much better simulator for Common Lisp that you can order from The MIT Press; an order form is included with the book.
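With that said, here’s roughly what creating a paralation looks like (a sketch; the exact printed representation may vary between implementations):

(make-paralation 5)
 => #F(0 1 2 3 4)    ; the index field of the new five-site paralation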

Common Lisp aficionados will realize that the #F notation used to represent fields in Paralation Lisp resembles the #A syntax that Common Lisp uses to denote an array. Also like an array, a paralation is made to be a certain length, and all fields in the paralation are of the same length. As the Common Lisp programmer would expect upon seeing the #F representation, fields can be given to the reader as literals, but each field read causes a new paralation to be constructed, so:
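(The comparison below is a sketch of the point being made; the book’s exact example may differ, and the particular values are only illustrative.)

;; two reads of the same literal construct two separate paralations
(eq #F(1 2 3) #F(1 2 3))
 => NIL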

because the two objects are not the same object; they are two paralations that happen to consist of fields of the same length containing the same values. This is the only gripe that I have about Paralation Lisp: you cannot tell simply from looking at its printed representation what paralation a field belongs to.

That pretty much covers paralations and their fields, so let’s move on to the Paralation Model operators. They are elementwise evaluation, move, and match.

Elementwise evaluation is exactly what it sounds like: given a field, some evaluation is performed on each of the elements of the field. The result is a new field in the same paralation. The evaluations are done in parallel; correct paralation programs cannot rely on any order of initiation or completion of the evaluation among any of the elements of the field. The only synchronization guarantee that elementwise evaluation makes is that the operator will not terminate until the evaluations of all of the elements have terminated.

Here’s a completely trivial example of elementwise evaluation:
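(The listing below is a sketch rather than the book’s verbatim code; the variable name and values follow the description in the next paragraph.)

(setq *foo* (make-paralation 10))   ; *foo* gets the new index field
 => #F(0 1 2 3 4 5 6 7 8 9)

(elwise (*foo*)
  (+ *foo* 1))                      ; add one to each index element, in parallel
 => #F(1 2 3 4 5 6 7 8 9 10)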

This example obviously isn’t terribly useful. Its sole function is to create a new paralation with ten sites, assign the resulting index field to the global variable *foo*, and then add a new field to the paralation whose elements are the integers one through ten rather than zero through nine, by adding one to each index field element in parallel. (If you were to run this on an implementation with at least ten processors, this would happen very quickly indeed, ignoring communication costs for now.)

There are a few subtleties to the semantics of elwise. One is that elwise can operate on more than one field at a time, which is why the first parameter to elwise is a list. Another is that references to the field name within the body refer to an individual element rather than to the entire field. Still another subtlety is that elwise can support temporary binding in similar fashion to Common Lisp’s let. It’s fairly common to temporarily create a paralation and then want to apply elwise to the resulting index field without having to store it in some variable. An example of rewriting the above code would look something like this:
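(Something along these lines, with elwise’s binding form modeled directly on let; the exact binding syntax may differ slightly from the book’s listing.)

(elwise ((foo (make-paralation 10)))   ; foo is bound, site by site, to the new index field
  (+ foo 1))
 => #F(1 2 3 4 5 6 7 8 9 10)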

There aren’t any a priori restrictions on what can be done within the body of an elwise, although there are limitations as to what can be done correctly in the context of elwise’s parallel operation. For example, side effects in the body are perfectly fine:
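(Again a sketch; the book’s exact listing may differ.)

(elwise (*foo*)
  (setf *foo* (+ *foo* 1)))   ; destructively bumps each element of the index field itself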

results in a paralation with an index field suffering from an off-by-one bug (moral: there’s nothing to prevent you from hoisting yourself by your own petard). Note that the fields returned by the last two examples are not the same: the first is a new field added to the paralation; the second is the (incorrectly-formed) index field of the paralation.

One of the remaining Paralation Model operators is the move operator. You can think of it as a parallel assignment: the idea is to move values from one field to another in parallel. Of course, this implies that there must be some way to specify which elements of the source field go into which elements of the destination field. This mechanism is called a mapping, and mappings are created by the third and final Paralation Model operator, match.

Match is fairly straightforward: it takes two fields and returns a mapping from one to the other. Any elements not present in both fields are represented as NIL in the mapping. The other elements are handled as follows: the elements in one field are labeled with the index of their first occurrence in the field, and the elements in the other field that match are given that same label. For example:
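(The two fields below are hypothetical stand-ins, shown in the argument order that choose uses later in this article; the book’s example, and Figure 2, use their own values, but the same situations come up.)

(match #F(3 8 3 2)    ; the "to" field
       #F(8 5 8 9))   ; the "from" field

Here both 8s in the “from” field are labeled 1, the index of the first 8 in the “to” field, while the 5 and the 9 match nothing and so get NIL. Sites 0, 2, and 3 of the “to” field are never the target of any label at all.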

A picture being worth a thousand words, although I hate drawing:

Figure 2

(At least this is one possible representation of a mapping, and in fact it’s the one used by the Tiny Paralation Lisp system.)

If you’re looking closely at this, remembering that the principal purpose of match is to provide a mechanism for the move operator to use to get the elements of one field to another, you’ll perhaps notice that a couple of situations can arise. One is that one or more elements in the “to” field might not get filled at all. In the figure above, this is represented by a field element with no arrowhead pointing to it, although you should keep in mind that duplicate elements in the “to” field also have no arrows pointing to them; they simply take on the value of the first occurrence in the field. Another situation that can occur is that there might be more than one value to be placed in the same element in the “to” field. This condition is represented by a field element having more than one arrowhead pointing to it. The example above suffers from both of these problems.

The move operator allows you to resolve both of these problems by specifying a default value for the cases where no value to be moved in exists, and a combiner function to handle the cases where there’s more than one value to be moved in. Oftentimes you might wish to take a field and combine all of its elements; for example, you may want the sum of a field of numbers. It’s trivial to write a library function, vref, in terms of the Paralation Model primitives, that allows this conglomeration of elements in a field:
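(The definition below is a sketch along the lines the next two paragraphs describe; the article only names <-’s :by keyword explicitly, so the :with keyword used here for the combiner is an assumption.)

(defun vref (field combiner)
  (when (plusp (length field))                  ; an empty field has nothing to combine
    (elt (<- field
             :by (match (make-paralation 1)     ; "to" field: #F(0), a one-site paralation
                        (elwise (field) 0))     ; "from" field: all zeros, so everything maps to site 0
             :with combiner)                    ; assumption: :with names the combining function
         0)))                                   ; the result has one element; extract it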

The function may not be too easy to understand at first. It checks to see if the length of the field is positive. If so, it uses the move operator (“<-” in Paralation Lisp) with a mapping of all of the elements of the field to the same place, passing in the combiner function. The result is a field of one element, so Common Lisp’s elt is used with a zero parameter to extract the result.

The secret is in the match form. It creates a “from” field consisting of zeros using elwise. These zeros will all map to the same place if the “to” field is simply the field #F(0), and the easiest way to get that is by making a paralation of length one.
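So, assuming the sketch of vref above, summing a field of numbers is a one-liner:

(vref #F(3 1 4 1 5) #'+)   ; a hypothetical field of numbers
 => 14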

A good example of the applicability of these data structures and operators to programming tasks that are normally done sequentially is the Sieve of Eratosthenes, the de facto standard algorithm for finding prime numbers. You can find the serial version of the algorithm in nearly any entry-level computer science text. Most of us are so accustomed to thinking of the Sieve in serial terms that we probably don’t even realize that the algorithm can be expressed in a parallel fashion, and remarkably simply in the Paralation Model:

(1)

(defun find-primes (n)
  (let* ((sieve (make-paralation n))            ; index field: #F(0 1 ... n-1)
         (candidate-p (elwise (sieve)
                        (if (> sieve 1) t nil))) ; integers greater than one start as candidates
         (prime-p (elwise (sieve) nil)))        ; nothing is known to be prime yet
    (do ((next-prime 2 (position t candidate-p))) ; first remaining candidate, or NIL when none are left
        ((null next-prime)
         (<- sieve :by (choose prime-p)))       ; result: the indices that were marked prime
      (setf (elt prime-p next-prime) t)         ; next-prime is known to be prime
      (elwise (sieve candidate-p)               ; knock out its multiples, in parallel
        (when (and candidate-p (zerop (mod sieve next-prime)))
          (setf candidate-p nil))))))

The function find-primes takes one parameter, n, which is the exclusive upper bound on the integers to be examined; it finds every prime less than n. First it creates the Sieve by making a paralation of size n. Next it creates a field that consists of booleans indicating whether the integer at that index is potentially prime or not (at this point, “potentially prime” is defined as being an integer greater than one). Finally, a field of all NILs is created, meaning that initially no values are known to be prime.

Figure 3

A do loop is then entered. This loop binds next-prime to the appropriate value: 2 on the first pass, and on each later pass the position of the first remaining T in candidate-p. The loop is defined to terminate when next-prime is null, and the result is the evaluation of (<- sieve :by (choose prime-p)). Choose is another library function that is easily defined in terms of the primitives:

(2)

(defun choose (field)
  (let ((count 0)
        (from-field (elwise (field) nil)))       ; start with a field of NILs in the same paralation
    (dotimes (i (length field))
      (when (elt field i)                        ; for every non-NIL element...
        (setf (elt from-field i) count)          ; ...label it with a running count
        (incf count)))
    (match (make-paralation count) from-field))) ; a mapping from the labeled sites into a new paralation of length count

Choose is pretty straightforward; it binds count to zero and from-field to a field of NILs, then loops over the length of the incoming field, checking the ith element of the incoming field and, if it is non-NIL, setting the ith element of from-field to the running count of non-NIL elements seen so far and incrementing that count. It then returns a mapping of from-field to a field of a length sufficient to hold only the non-NIL elements.

The value of the do loop in find-primes, then, will be the result of moving sieve (the index field) by the mapping returned by (choose prime-p). The body of the loop is concerned with seeing that prime-p is a field of NILs and Ts, with the indices of the Ts being prime numbers. It is always the case that next-prime is in fact the index of a prime number, so find-primes sets the appropriate element of prime-p to T. It then elwises across sieve and candidate-p, checking first to see if the current element is even a candidate and, if so, whether it’s evenly divisible by next-prime. If it is, then the element is removed as a candidate. Then the loop goes again. The interactions among the fields, especially prime-p and candidate-p, are fairly subtle; don’t feel discouraged if it’s not clear even after several readings why this function works. Hint: you can think of there being two loops (do and elwise), with prime-p being modified by the do loop and candidate-p being modified by the inner elwise.

The point of the algorithm is that finding prime numbers basically consists of checking several numbers against one number to see whether they divide evenly or not, and elwise fits that bill very nicely. The loop then advances to the next prime so that the next parallel modulo comparison can occur, and so on.

This is all subtle enough that some more pictures are called for, so let’s meander through an invocation of (find-primes 5). To see the paralation initially created by the let* form, please see Figure 3 above. Keeping in mind that elwise operates in parallel, here’s what happens as we evaluate our example.
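Traced by hand from the let* bindings in find-primes, the three fields start out like this for n = 5:

sieve:        #F(0 1 2 3 4)
candidate-p:  #F(NIL NIL T T T)
prime-p:      #F(NIL NIL NIL NIL NIL)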

The do binds next-prime to 2, and evaluates (setf (elt prime-p next-prime) t), giving:

Figure 4
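That is, only prime-p has changed:

sieve:        #F(0 1 2 3 4)
candidate-p:  #F(NIL NIL T T T)
prime-p:      #F(NIL NIL T NIL NIL)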

Now we get to the real meat: the

(3)

(elwise (sieve candidate-p)
  (when (and candidate-p (zerop (mod sieve next-prime)))
    (setf candidate-p nil)))

form. Keeping in mind that references to a field name within an elwise actually refer to the element, we see that this parallel evaluation will do nothing for element zero, since candidate-p is NIL, so the state doesn’t change; see Figure 4 above. Likewise for element one; candidate-p is also NIL. Things change, however, when we get to element two; candidate-p is T, so (zerop (mod sieve next-prime)) is evaluated. Two modulo two is, of course, zero, so the result of the zerop is T, which causes the (setf candidate-p nil) to be evaluated (see Figure 5). This may seem puzzling at first (after all, two is trivially a prime number, and might therefore seem like it should remain a candidate), but if you refer to Figure 4 above again, you’ll see that this has already been accounted for in the prime-p field.

Figure 5
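In other words, candidate-p now reads #F(NIL NIL NIL T T); sieve and prime-p are untouched.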

Element three is next, and candidate-p is T, so we evaluate (zerop (mod sieve next-prime)) again. This time, three modulo two is one, not zero, so the zerop returns NIL, and the T value of candidate-p is left alone.

Finally we reach element four, and candidate-p is T for it as well, so we evaluate (zerop (mod sieve next-prime)) yet again. Unfortunately for candidate-p, four modulo two is zero, so the zerop returns T, and candidate-p goes NIL:

Figure 6
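So at the end of this first elwise pass, the fields stand at:

sieve:        #F(0 1 2 3 4)
candidate-p:  #F(NIL NIL NIL T NIL)
prime-p:      #F(NIL NIL T NIL NIL)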

For the n = 5 case, that’s the end of the elwise. On parallel hardware, each element would presumably have been dealt with at the same time, meaning that the elements of candidate-p would get set very quickly. So finally we go through the do loop again. The binding of next-prime becomes (position t candidate-p), and if you refer to Figure 6 above, you’ll see that at this point, the position of the next T in candidate-p is element three. The (setf (elt prime-p next-prime) t) gets evaluated again, which gives us:

Figure 7
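That is, prime-p now reads #F(NIL NIL T T NIL), while candidate-p is still #F(NIL NIL NIL T NIL).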

The elwise form is evaluated all over again, this time with next-prime bound to three. Everything gets left alone until element three is examined, at which point candidate-p is T and (zerop (mod sieve next-prime)) is also T, so candidate-p goes NIL again:

Figure 8
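At this point candidate-p is all NILs, #F(NIL NIL NIL NIL NIL), while prime-p remains #F(NIL NIL T T NIL).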

Element four is examined, but candidate-p was set NIL by the elwise that was done when next-prime was bound to two, so no change is made, and the elwise ends. The next time through the do loop, however, (position t candidate-p) is NIL, which is the terminating condition for the loop. The result form, (<- sieve :by (choose prime-p)), should be fairly self-explanatory, but in case it’s not, here we go.

I defined choose a few pages back. First it binds count to zero. Next, it adds a new field, from-field, to whatever paralation the parameter is in:

Figure 9
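In our example, prime-p is the parameter, so from-field starts out as #F(NIL NIL NIL NIL NIL), with count at 0.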

Next is a dotimes loop for the length of the parameter. It searches for non-NIL elements and, when it finds them, it stores the value of count into that element of from-field and increments count. In our case, the result of this is:

Figure 10
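With prime-p being #F(NIL NIL T T NIL), the loop leaves from-field as #F(NIL NIL 0 1 NIL), with count at 2.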

Finally, choose returns (match (make-paralation count) from-field). At this point, count is bound to 2, so the result of that is a new paralation with an index field of #F(0 1). The result of the match, then, is the mapping:

Figure 11
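Roughly speaking, the mapping says that site 2 of from-field corresponds to site 0 of the new length-two paralation, site 3 corresponds to site 1, and every other site corresponds to nothing at all.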

As you might expect, the <- operator applied to the sieve field with this mapping results in a field of #F(2 3), which is in fact a field containing all of the prime numbers from zero to four.

I believe that that pretty much covers what I wanted to say about the Paralation Model. It’s small, it’s clean, and it’s powerful. I hope that you’ve enjoyed your glance at it and that it’s given you food for thought as to the future of computing using architectures other than the traditional Von Neumann approach. The book goes into much more detail, of course, and comes highly recommended for those readers who might be curious about an elegant model for parallel computation. A special word of thanks is due here to Gary Sabot, creator of the Paralation Model, who kindly reviewed this article and convinced me that the figures were a necessary evil.

 
