
PowerPC Architecture

Volume Number: 10
Issue Number: 8
Column Tag: PowerPC Essentials

Understanding the PowerPC Architecture

Speak like a native in only two easy lessons!

By Bill Karsh, BillKarsh@aol.com

In this article (part one of two) we'll look briefly at the hardware architecture of the MPC601 processor, and then discuss the user programming model. Next month, we'll summarize its assembly language syntax in a condensed and easily digestible form for quick reference. In a sense, the article is a compressed and intelligently filtered user manual.

I spent considerable time deciding what depth and scope to cover. Numerous articles have already appeared on the 601, not to mention Motorola's own, indispensable, "PowerPC 601 User's Manual," which is excellently written, thorough, and systematic, but needlessly overwhelming due to massive redundancy. Far more material is covered there than non-system-level programmers need. That great work clued me in on what not to do. Of course this article must be accurate, but it must also be simple. Here, we will not cover very much about floating-point operations, nor the Mac's specific run-time model (except where needed for clarification). Instead, so you can come away with a feeling of mastery of some useful area, we will focus on the integer and branch processing common to most Mac programs. I want the amount to be just right for a sitting or two, and not a week of intensive study. The section on 'The Machine' has been included for completeness, and to provide background. It is not strictly necessary for learning assembly, which starts with the 'User Programming Model' section later in this article.

Sounding Off

It has been said many times before, but cannot be repeated often enough. I want you to be reading this material for the right reasons. First, I too am a die-hard 68K assembly programmer. Perhaps 60% of my personal toolbox is still in assembly. Not long ago, I saw red when I learned that neither CodeWarrior nor Think C would support inline PPC assembly, and that industry pundits routinely babbled nonsense about how we humans could never hope to do as good a job of optimization as a compiler. How can that be, I asked, when just yesterday the quality of their 68K code generation was lousy: not recognizing predecrement addressing, reloading registers far too often, not making use of implicit CCR updates, and so on. Suddenly these guys know it all? Hah!

Well, it's not easy to adjust, but I'm working very hard to put my skepticism on hold and understand that something is fundamentally different about RISC (reduced instruction set computer) systems. We'll look in more detail at how it works in a minute, but briefly, RISC represents a partnership between hardware and compiler designers to share the job of optimization. In fact, the burden has shifted dramatically toward the compiler. The machine is designed to simplify timing and dependency analysis so that instructions can be reordered, interleaved and scheduled to maximum advantage by the compiler. Now, I'm not going to forget about optimization hereafter. Rather, I'm going to yell and complain and poke sharp sticks at the responsible parties if they don't eventually get it together and make good on this promise. Nevertheless, I will give it time. The kinds of things that have to be managed to get superior performance are numerous and complex, but more amenable to analysis than they were with CISC (complex instruction set computer) systems. It looks exactly like a job to be automated - it looks exactly like what a compiler should do.

I no longer endorse assembly language programming. It is not and should not be your personal responsibility - it doesn't make sense anymore. That is the wrong way to use the material here. There are many things you can do, as always, to dramatically improve performance. Learn to profile your code to find out where it's worth bothering in the first place. Then consider: better algorithms, more efficient data structures, better organization of information, better database keys, moving indices instead of data, better default settings, presorting things at startup, using idle time, using locally buffered I/O, using asynchronous I/O, doing work offscreen, updating smaller areas... Retest, to make sure you've changed the right things. Also, remember that speed is in the eye of the beholder. Make use of progress bars, watch cursors and other busy box gizmos to amuse and delight. These are all better ways. They are stable, portable, maintainable and reusable.

The other reasons one formerly needed assembly were things like setting up access to global variables, and gluing things together. The people at Apple are not blind. They have done a lot to design the run-time model for PPC so that these problems are eased if not eliminated. There ought not to be a genuine reason to muck around from now on. Today we have to support multiple platforms and an explosion of new technologies just to reach acceptability, let alone competitiveness. I want you to be happy and successful. I urge you to apply your time, money and effort where it will make the biggest difference. Now more than ever, do the right thing.

The Right Reason

Debugging remains the real reason you must have familiarity with the machine and assembly language. Traditional places where low level debugging has proved very useful are non-application code such as INITs and code resources, spotting the consequences of nil pointers and overwritten array bounds, hunting memory leaks and many other things. These will always be with us. Even working in a high level language, bad things can happen due to inexperience with a new tool, poor documentation, or just being interrupted at some critical moment and losing one's thread. Sometimes high level language errors can themselves be quite insidious. Here is my favorite example of how it can all go wrong, admittedly stinging me more than once. Everyone knows that, theoretically, multiplying two 16-bit quantities gives a 32-bit result. Which of these actually does that?

long  c;
short a, b;

1) c = a * b;
2) c = a, c *= b;
3) c = (long)a * b;
4) c = (long)a * (long)b;
5) c = (long)(a * b);
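
To make the sting concrete, here is a minimal, compilable sketch of forms (1) and (3). A caution: the trap as described assumes 16-bit ints, as Think C had; on a compiler with 32-bit ints, integer promotion quietly rescues form (1) - exactly the sort of portability surprise that keeps debuggers employed.

#include <stdio.h>

int main(void)
{
	long  c;
	short a = 300, b = 300;  /* true product 90000 needs more than 16 bits */

	c = a * b;           /* form (1): with 16-bit ints, only the low word
	                        of the muls.w result is sign-extended into c */
	printf("%ld\n", c);  /* 24464 with 16-bit ints (90000 mod 65536) */

	c = (long)a * b;     /* form (3): operand widened first, so the full
	                        32-bit product reaches c */
	printf("%ld\n", c);  /* 90000 */

	return 0;
}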

Congratulations on recognizing that either (3) or (4) does the right thing by using muls.w (68K) to create a long that's stored in c. (1) and (5) give a "wrong" result. They use muls.w to create the product, but move a sign extension of only the low word into c. (2) works, but uses software emulation (%%mul in Think C) to do the requested long * long multiplication - not something you want in a loop. This is the kind of slip that can lead to days of reevaluating your whole algorithm, or your career as a programmer. It's most likely to be caught at a low level where you're watching instructions as well as results. I think you know how it goes. You will have to debug something stupid you did, your coworker did, the compiler did, a third-party product did, Apple did... Everyone needs debugging skills. That's life in the Big City. Now that we understand each other, let's get on with it.

The Machine

The main theme characterizing RISC computing is keeping the CPU as busy as possible so that cycles are not wasted. This is achieved in two principal ways: superscalar design and pipelining. The term superscalar refers to the CPU being a collection of semi-independent execution units operating in parallel, so that instructions can be issued to these units in parallel (and possibly out of order, as long as they are not interdependent). The figure illustrates a simplified view of the communication paths among the execution units (IU, BPU, FPU) and the supporting memory managers and cache. We'll take a quick look at the operation of each, concentrating particularly on features contributing to machine performance.

Instruction and Branch Units

Instructions march through the instruction queue (IQ) from Q7 toward Q0 as vacancies are created. New instructions are requested as soon as possible. If the cache is hit, as many as eight instructions (the whole IQ, or one cache sector) can be prefetched in one cycle. Otherwise, further bus cycles will be needed, but these normally overlap currently executing instructions. Instructions can be issued from any of the lower four elements of the IQ to either the branch processor (BPU) or floating-point unit (FPU), as long as the decode stage of the target unit is vacant. The integer unit (IU) is fed only through Q0, which doubles as the IU decode stage. Instruction fetch is normally sequential, unless the BPU decides on a change of execution path.

The BPU "owns" two registers holding branch target addresses, the link (LR) and count (CTR) registers, allowing relative independence of the BPU. The LR also gets the return address following a branch, if any. The condition register (CR) provides the information necessary to resolve conditional branches. One thing a good compiler should do is schedule the instruction that updates the CR well ahead of its dependent branch instruction to allow resolution as early as possible. The BPU can examine up to one branch instruction at a time. Unconditional branches are simply removed from the instruction stream, with fetching directed along the new path. Conditional branches have a predictor bit, indicating the more likely of branch taken/not taken. If a conditional branch is encountered, instructions continue to execute along the predicted path, but not as far as the writeback stage, where registers are updated. When the condition is resolved, if the prediction was correct, writeback is enabled and execution continues as if no branch occurred. If incorrect, the instruction unit backs up by flushing everything since the branch, and fetching a new cache sector of instructions. This process of effectively removing branches from code is termed branch folding.

This buys a great deal of speed. Recall that on the 68K, branches are among the most costly instructions. One used to employ loop unrolling to cut down on branches, making source code ugly, larger and potentially confusing. Another popular trick was to recode if-blocks as follows:

/* 1 */
/* original */
if( condition )
 x = A;
else
 x = B;

/* recoded becomes */
x = B;
if( condition )
 x = A;

Hopefully the need for such awkward constructions will be eliminated soon. I look forward to source code being an expression of ideas, abstracted away from machine dependent trickery.

IU

The integer unit does what you expect: arithmetic, logical, and bit-field operations on integers. It contains the general-purpose register file (GPR), and the integer exception register (XER). There are thirty-two GPRs, each 32 bits wide on the 601. Each handles either data manipulation or address calculation. They are dual-ported (as are the FPRs) to allow two independent accesses at once. The XER holds result flags such as carry and overflow from arithmetic operations. The IU handles address calculation for all execution units. All load and store requests (even for floating-point operands) are processed by the IU and passed from there to the memory management unit (MMU). The IU implements feed-forwarding, simultaneously making available the result of an integer execute stage to both the register writeback bus, and the execute stage of any follow-on integer instruction waiting for that result.

FPU

The floating-point unit contains the floating-point register file (FPR), and the status and control register (FPSCR). The thirty-two 64-bit registers handle either single or double precision operations. Only a subset of operations is handled in hardware; as with the 68040, the others must be emulated. Of interest, though, is the combined multiply-add instruction, which directly supports the vector and matrix algebra needed in common graphics transformations. It is also well suited to series expansions, speeding software emulation of transcendental functions. The FPSCR holds calculation result flags such as overflow, NaN, INF, etc., and environment controls, such as rounding direction. At this time, the FPU does not support feed-forwarding.
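
As a sketch of where the multiply-add pays off, consider evaluating a polynomial (the core of a series expansion) by Horner's rule. Each loop step below is exactly one multiply and one add - the pattern the combined instruction performs in one go, provided, of course, the compiler spots it:

/* Horner's rule: every iteration is r = r * x + coef[i],
   a natural fit for a fused multiply-add. */
double PolyEval(const double coef[], int degree, double x)
{
	double r = coef[degree];
	int    i;

	for (i = degree - 1; i >= 0; i--)
		r = r * x + coef[i];
	return r;
}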

MMU

The memory management unit handles the translation of logical to physical addresses, access privileges, memory protection and virtual memory paging. Performance is enhanced in this unit by the incorporation of several on-chip tables of recently used addresses, so that translation can be bypassed whenever possible. Of course load and store requests look to the cache first. Misses are queued in the memory unit for servicing. The MMU can address up to 4E9 (4 Gigabytes) of physical memory, and 4E15 (4 Petabytes) of virtual memory. Where to store all that is a separate issue (4 Petabytes = 6.2 million CD ROMs).

Memory Unit

This unit buffers data transfers between the cache and memory. It contains a two-entry read queue and a three-entry write queue. Each entry is actually capable of holding eight consecutive words (a cache sector). Writing to memory is primarily performed to make room in the cache for new entries. The least recently used (LRU) entry in the cache is moved to the write queue, where it waits its turn for use of the system bus. Reads are performed mainly to load the cache. If not interdependent, waiting reads and writes may be performed out of order, according to priority. However, special instructions are available to strictly enforce program order of reads and writes when needed.

Cache

A 32-kByte (write-back) cache is provided to minimize time waiting for off-chip accesses. In the 601 it is a unified cache, holding both instructions and data. In the future, it will likely have a Harvard architecture, keeping instructions and data separate. That would simplify and speed up cache searching, and allow concurrent data and instruction accesses. An advantage of the unified design is that those nasty programs that modify their own code are given a chance of running successfully - making the PPC look even safer sometimes than a 68040. Of course, nobody you or I know writes self-modifying code, right? The cache is subdivided into 8 pages. Each page contains 64 cache lines. Each line is subdivided into 2 sectors (or blocks) of 8 32-bit words each. The block is the smallest cacheable unit. Cache sectors are read and written in a special burst mode. To further reduce processing delays when a load or fetch misses in the cache, the specifically requested words are feed-forwarded to the waiting execution unit before the remainder of the sector read completes.
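
As a quick check that this geometry accounts for the whole cache:

8 pages × 64 lines × 2 sectors × 8 words × 4 bytes = 32,768 bytes = 32 kBytes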

Pipelining

The second major factor in keeping the CPU busy is pipelining, which refers to the scheme of dividing the processing of an instruction into several independent serial stages - similar to a factory assembly line. Each execution unit is pipelined, but has a different number of stages. More stages allow breaking a process into simpler steps. The BPU, whose task is already simple, has a combined decode/execute stage. The FPU, having the most complicated operations to perform, has two execute stages. The term superpipelined is often used to characterize pipelines having more than four or five stages. For simplicity, we focus our discussion on the IU, whose four stages typify basic pipeline design. These stages are:

  • fetch/dispatch - get the instruction from the stream into the execution unit's decoder,
  • decode - figure out what the instruction does and initiate requests for any needed operands,
  • execute - apply indicated operation,
  • writeback - update target register(s) with result.

Any stage can work on only one instruction at a time. However, the IU as a whole can process four instructions at the same time - as long as its pipeline is kept full. Looked at another way, instructions complete at the rate of one per cycle with a full pipeline - that's fast. Sometimes it doesn't work out because of poorly separated dependencies. An example might be an executing instruction needing an operand, as yet unavailable from some previous instruction. The resulting inactivity is called a stall. During a stall, the stage remains occupied by the waiting instruction, but is essentially idle. When things pick up again, that stage will again be active. Now some silly nit-picking. Looking back at the history of what happened, we see that a certain stage was doing something, then stalled, then doing something again. A stall in this context is officially called a bubble (I would suggest we call poor code with too many bubbles, foamy - you heard it here first).
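
To make the dependency problem concrete, here is a sketch of the kind of rearrangement a scheduling compiler is expected to perform on your behalf. In the first loop, every addition waits on its predecessor (and its load); in the second, two independent chains give the pipeline work to do where bubbles would otherwise appear:

/* One dependency chain: each add must wait for the previous one. */
long Sum1(const long *p, long n)
{
	long s = 0;
	long i;

	for (i = 0; i < n; i++)
		s += p[i];
	return s;
}

/* Two independent chains: adds from the two accumulators
   can be interleaved to fill pipeline bubbles. */
long Sum2(const long *p, long n)
{
	long s0 = 0, s1 = 0;
	long i;

	for (i = 0; i + 1 < n; i += 2) {
		s0 += p[i];
		s1 += p[i + 1];
	}
	if (i < n)
		s0 += p[i];  /* odd element, if any */
	return s0 + s1;
}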

There is still more to the speed story. What has not yet been discussed is perhaps the single most important design feature of RISC - that instructions are a uniform size, and wherever possible, spend a uniform amount of time in each pipeline stage. I think you can easily convince yourself that if one or more stages took longer than the others, the slow ones would be bottlenecks - the pipeline could not march along in lock-step even at its theoretical best.

The constant size of instruction words is 32 bits for all PPC implementations. This affords uniform fetch and decode times - in fact, one cycle each. Writeback, another data movement operation, is also one cycle. Remember that we are writing to GPRs and FPRs at writeback, not memory. (Writing to memory, really the cache, only occurs via explicit store instructions as we'll soon discuss).

Execute is a little trickier since there are so many different things one might like to do to the data. How can you ensure that every operation, whatever it is, takes the one cycle we seem to have established? Aha! Therein lies the power of the reduced instruction set. Most integer operations do take one cycle to execute - the principal exceptions are multiplies and divides. That's because the instructions are simpler than on typical CISC machines. In fairness, the bit-field operations owe their speediness to a TurboShift unit (I'm guessing about the name). Complex operations have to be constructed from a sequence of the available simple ones, but that's O.K., because performance benefits overall. About those multiply and divide operations - avoid them if you can. They are inherently expensive (36-cycle executes for divides), but that's nature's way as far as we know. Then again, stalling is not the end of the world either - it merely means you're not operating at absolute maximum throughput. If your task requires divisions that cannot be accommodated by right-shifting (properly a compiler responsibility), then you have license to divide away.
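
For the record, the right-shift substitution looks like this - a sketch for the unsigned case only (signed operands need a correction before the shift, which is precisely why it is properly a compiler responsibility):

/* Dividing an unsigned value by a power of two is a single-cycle
   shift instead of a 36-cycle divide. */
unsigned long Div16(unsigned long n)
{
	return n >> 4;  /* identical to n / 16 for unsigned n */
}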

We've now seen some of the mechanisms that contribute to the goal of ever-filled pipelines: fixed-time pipeline stages, parallel instruction issue to multiple pipelines, branch folding, feed-forwarding and generous use of caches, queues, buffers and large register files. I remind you, the compiler is supposed to be aware of all this, bearing the awesome responsibility of reducing dependencies by reordering instructions and making a clever mix of floating-point, integer, branch and load/store operations - while not disrupting program logic. In light of this, if your compiler is doing its job to the utmost, you should be getting headaches figuring out low level code generated with optimizers on. Conversely, you will probably want to turn off optimizations for debugging.

Note that it's important to know which execution units are involved with each instruction to do optimization effectively. This is made much more difficult considering that the 603, for example, already breaks the IU up into three new units: a smaller IU, a dedicated load/store unit and a special-purpose register access unit. As the PowerPC family matures, the design will continue to evolve. Keeping up with it will be very difficult. On the down side, it will no doubt require a certain amount of lowest-common-denominator compromising to stay compatible with as many PowerPC family members as possible.

Finally, we have seen that an individual execution unit, working at maximum throughput, can process instructions at a rate of one per cycle. In conjunction with the parallelism and other performance enhancements throughout the system, the achievable rate for application code as a whole is potentially faster than that.

It's incredibly cool to Geeks (informed folks) like you and me. So why isn't everybody rushing to buy a Power Mac? I often play dummy first-time buyer at superstores to see what salespeople will try to sell me. They're not pushing Power Macs - not even to first-time buyers. At the time of this writing, there are only fifty or so native applications. Even though the emulator can run existing Mac software at speeds fully adequate for most users, magazine reviewers and computer salespeople are insisting that the user won't be happy without the native versions. Nobody knows, when asked, when these applications can be expected. In spite of the work Apple did on compatibility, on emulator performance and on keeping it Macintosh all the way, the message is entirely garbled and misunderstood when it gets to the marketplace. To the user, it is not a machine for today with an even greater future - it is just a machine without software. We've all got to get on the stick and produce some of that anxiously awaited software, so that 1994 won't be like 1984.

User Programming Model

The foregoing material on the inner workings of the machine is, I hope, informative and interesting, but now we will get reacquainted with the CPU from the point of view of actual programmers. What one has to know for this purpose is far simpler. Again, we are focusing on integer programming. The figure illustrates the subset of the hardware with which you interact directly - your interface to the machine.

While there are many more registers than shown, these are all that are necessary for most purposes. In particular, we acknowledge, here only, the existence of the MQ (Multiply/Quotient) register, which is used by many 601 instructions to hold intermediate or extended results. The 601 is a transitional chip - the first PowerPC implementation of IBM's POWER architecture from which PowerPC is generally derived. The MQ register, and all instructions which depend upon it were retained from POWER in the 601 to give IBM developers time to make the transition to PowerPC while maintaining a high degree of compatibility with existing POWER software. These things are not part of the PowerPC definition, and will likely be dropped from subsequent implementations. We will speak of MQ no more.

The GPRs are 32 bits. Subsequent models may have 64-bit registers. Bit labeling, in general, is just the opposite of 68K conventions. The most significant bit is number 0, the least significant is 31. Low (least) is depicted as being to the right in both worlds. Left and right come into play in connection with shift operations. As ever, right shifts divide.
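
One practical consequence: a PPC manual's 'bit i' is not the 1 << i of C. A one-line sketch of the conversion, assuming 32-bit registers:

/* PPC bit 0 is the most significant bit, so the C mask for
   manual-style bit number i (0..31) is: */
#define PPC_BIT(i)  (1UL << (31 - (i)))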

The 32 general-purpose registers all function identically. Each can be used for data or address calculation. Everything interesting happens here. Interaction with memory happens only through explicit load and store operations. This is common on RISC machines. They are said to have a load/store architecture.

We'll cover more about branching later, but for now, the link register typically holds the target address of a branch (and then an optional return address). The count register is dual-purpose, holding a value to be decremented for conditional branching, similar to the 68K DBcc construction. It can also serve as an alternate branch target address.
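
In source terms, an ordinary counted loop is the natural customer for the CTR: a compiler can load the trip count into it and close the loop with a single decrement-and-branch, much as DBcc closed 68K loops. A sketch of the kind of loop that invites it:

/* A simple counted loop - the trip count can live in the CTR,
   with one decrement-and-branch closing each iteration. */
void CopyWords(long *dst, const long *src, long n)
{
	while (n-- > 0)
		*dst++ = *src++;
}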

The exception register (XER) is divided into several fields. Of interest for us are just the high 3 bits shown. The CA bit records carries out of bit 0 of a result during arithmetic operations. It is also the bit used for extended (multi-word) arithmetic. OV records arithmetic overflow, i.e., the result was too large to be represented in so many bits. SO records 'summary overflow.' It is set whenever OV is set, but it is sticky: OV reflects only the most recent operation, while SO stays set until explicitly cleared. In any case, the SO bit of the XER is the one copied to the CR to reflect overflow there.

The CR is actually a set of eight functionally identical condition records (cr0,...cr7), each 4 bits wide. Only cr0 is shown in the figure. Each can be individually targeted to hold the result of explicit compare instructions (you will rarely see any but cr0 used by your compiler). The cr0 and cr1 fields are different, however, in that when instructions other than compares are encoded for 'CR update,' cr0 is the implied target for integer operations, and cr1 is implied for floating-point (otherwise they work like the others). Bit 0 (or 4, 8, etc.) records the effect of comparing a result against zero: it is set if the result is negative (high bit is 1). Bit 1 is set if the result is positive, and bit 2 if it is exactly zero. Bit 3, again, is SO copied from the XER. These bits are enough to characterize any signed or unsigned arithmetic or logical result.

Data Types and Alignment

The table lists the intrinsic addressable data types on the 601, and how their names differ from 68K conventions.

Size (bytes)   Last 4 bits of address   601 Type Name   68K Type Name
 1             xxxx                     byte            byte
 2             xxx0                     half-word       word
 4             xx00                     word            long-word
 8             x000                     double-word     NA
16             0000                     quad-word       NA

An extensive set of bit-field operations is available as well, acting on GPRs only, not memory.

Proper data alignment is something to be conscious of and design for on all PPC implementations (it speeds up 68030 and 68040 code as well). Each type has a natural address at which it should reside. This address is an integral multiple of the type's size. As the table shows, for example, words should reside at addresses divisible by 4, and so have zeros as the last two bits of their addresses. The reason is mainly that the machine can access aligned data faster. Misaligned data may force the processor to calculate and execute extra bus cycles to reach data crossing their natural boundaries.

Aligning your data should be done in three places: global data (together with static data, which are stored as if global), local data, and structure definitions. For global data, definition is where storage is allocated, not where data are merely declared external. Achieving alignment is easy - just a matter of discipline. You would apply the same rules in any of the three areas. As an example, let's consider structures. Start with the assumption that the top of the structure is given to reside at a 4-byte boundary (xx00), such as x000. Lay out its fields such that each starts at a natural address for the field's type, as given in the table. Add padding if necessary. Consider a structure containing 2 longs, 1 short, and 1 char.

struct Misaligned {          struct Aligned {
  long  L1;                    long  L1;
  char  C;                     char  C;
  short S;                     char  reserved;  // padding
  long  L2;                    short S;
};                             long  L2;
                             };

Both structure examples start at address x000. Note that the misaligned structure's S is at address x101, and L2 is at x111. All that was needed to correct this, and get all fields onto proper boundaries was a padding byte, as indicated.

You'll make things easiest for yourself by aligning each structure as an isolated entity. Make sure each structure has aligned fields, but is also a multiple of 4 bytes in total length. This strategy ensures that alignment is preserved when an array of such structures is allocated, or if the structure becomes an embedded substructure, or is defined globally or locally. You're better off designing so that you can ignore how the structure will be used by you or others. Make them good citizens.
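
You can even have the compiler police the multiple-of-four rule for you. A sketch using the old negative-array-size trick (the typedef name is arbitrary; there was no static_assert in those days):

/* Refuses to compile (illegal array size) if struct Aligned
   is not a multiple of 4 bytes long. */
typedef char Aligned_size_check[(sizeof(struct Aligned) % 4 == 0) ? 1 : -1];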

For local data, one does the same thing, assuming that the first variable defined in any function starts at xx00.

Global data is most easily considered on a per-file basis - like the per-structure rule. Again, assume address xx00 for the top of each file.

Something has been overlooked. What about the validity of the assumption of 4-byte starting addresses? In the case of dynamically allocated memory (pointers or handles), the Mac will always hand you aligned blocks (this has been true since the 68030). For data local to a function, the stack frame is always set up so that your locals start on a 4-byte boundary (PPC run-time model specification). For globals, you have to be more careful. All of the global (and static) data for an executable module (application, INIT, etc.) are collected together (across all files) and stored as one giant block. In fact, it is loaded into memory as a heap block, so has an aligned starting address. However, to maintain alignment throughout the interior of this block, you have to be diligent about keeping the data for each file nicely aligned, and a multiple of four - similar to what was said for structures. Designing each file to have properly aligned globals when considered in isolation, ensures portability and reusability of files for other applications.

To wrap up for this month, I'll briefly mention the ordering of bytes in memory. The 601 is capable of operating in either big-endian or little-endian memory mode (register function is independent of these). I have debated with myself about how much to say on this complicated topic. The real reason to say anything at all is to assure those who are wondering, that Apple has chosen big-endian ordering for the run-time model's default mode. That means that, in memory, as you read a long for example, you find its most significant byte at the lowest address, and its least significant at the highest - just the same as on 68K machines. That's what you need to know, that there's nothing to worry about here.

If you're interested though, true little-endian storage is what's used on Intel machines. Let's compare the same document on floppies, as produced by a Mac version of some application and by its faster-selling DOS counterpart. The mapping difference is simple to explain. Strings look the same on either disk - they start at the same addresses, and the bytes are in the same order. However, all integers larger than 1 byte look like their bytes are reversed on the Intel disk - they have little-endian ordering. Basically, to transfer a floppy from one machine to the other, one has to byte-reverse all the numbers. Floating-point formats are just too dissimilar to worry about, so forget that. Now, the 601 can operate in a pseudo little-endian format. On disk, it looks neither like true big nor true little-endian. Why? Without going into too much detail, the 601 can make memory appear to the processor as true little-endian by playing with the addresses of load/stores, but without reversing any bytes. The result is a fast, simulated little-endian world, but it's not true little-endian in memory - numbers do not have reversed bytes, but their starting addresses are changed. It's not the case that the in-memory data are instantly ready for exchange with real PCs. However, this scheme helps make the 601 ready to speedily emulate a PC. Getting full data compatibility still requires moving fields and explicit byte-reversal during I/O - already slow, so less noticeable.
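
And for completeness, the explicit byte-reversal mentioned above is short work. A sketch for 32-bit quantities (16-bit halves are analogous):

/* Reverse the four bytes of a long when exchanging binary data
   with a true little-endian (Intel) file format. */
unsigned long Swap32(unsigned long v)
{
	return ((v >> 24) & 0x000000FFUL) |
	       ((v >>  8) & 0x0000FF00UL) |
	       ((v <<  8) & 0x00FF0000UL) |
	       ((v << 24) & 0xFF000000UL);
}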

What's Next?

OK, you know all about the environment and operands of the 601. Now we'll learn its lingo. Next month we'll go into the details of reading and understanding the assembly language, and get you really prepared to do some in-depth debugging.

In the meantime, no matter how deep you plan to go, you should read the PowerPC 601 User's Manual. It's an essential resource, and it's available from APDA with their Macintosh with PowerPC Starter Kit. This package is reasonably priced at $39.95. It also includes the New Inside Macintosh volume PowerPC System Software, which explains the run-time model, the Mixed Mode Manager and the Code Fragment Manager - all crucial for high level language PowerPC development, as well as really understanding what's going on in your debugger.

[The New Inside Macintosh volume PowerPC System Software is also available from the Mail Order Store - Ed stb]

 
