
Software Errors
Volume Number: 6
Issue Number: 11
Column Tag: Developer's Forum

Software Errors: Prevention and Detection

By Karl E. Wiegers, Ph.D., Fairport, NY

Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they’re done. While this approach may work all right for small, personal programs, it doesn’t cut the mustard for professional software development. Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce completed software systems that have a high probability of satisfying the customer’s needs.

There are two ways to deliver software free of errors. The first is to prevent the introduction of errors in the first place. And the second is to identify the bugs lurking in your code, seek them out, and destroy them. Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you’re building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.

In this article we’ll take a look at why the issue of software quality should be on the tip of your brain whenever you’re programming, and we’ll discuss some tried-and-true methods for building high-quality software systems. Then we’ll explore the strategies and tactics of software testing.

Why Worry About Software Quality?

The computer hobbyist doesn’t think much about software quality. We write some little programs, experiment with graphics tricks, delve into the operating system, and try to learn how the beast works. On occasion we write something useful, but mostly just for our own benefit. The “quality” of one-user, short-lived programs like these doesn’t really matter much, since they’re not for public consumption.

The professional software engineer developing commercial products or systems for use by his employer has a more serious problem. Besides the initial effort of writing the program, he has to worry about software maintenance. “Maintenance” is everything that happens to a program after you thought it was done. In the real world, software maintenance is a major issue. Industry estimates indicate that maintenance can consume up to 80 percent of a software organization’s time and energy. As a great example of the importance of software maintenance, consider how many versions of the Macintosh operating system have existed prior to version 7.0. Each new system version was built upon the previous one, probably by changing some existing code modules, throwing out obsolete modules, and splicing in new ones.

The chore of maintenance is greatly facilitated if the software being changed is well-structured, well-documented, and well-behaved. More often, however, code is written rather sloppily and is badly structured. Over time, such code becomes such a tangle of patches and fixes and kludges that eventually you’re better off re-writing the whole module, rather than trying to fix it once again. High-quality software is designed to survive a lifetime of changes.

What is Software Quality?

Roger Pressman, a noted software engineering author and consultant, defines software quality like this:

Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

A shorter definition is that a high-quality software system is one that’s delivered to the users on time, costs no more than was projected, and, most importantly, works properly. “Working properly” implies that the software must be as nearly bug-free as possible.

While these are workable definitions, they are not all-inclusive. For example, if you build a software system that conforms precisely to a set of really lousy specifications, do you have a high-quality product? Probably not. Part of our job as software developers is to help ensure that the system specs themselves are of high quality (i.e., that the specs properly address the user’s needs), as well as building an application that conforms to this spec.

A couple of other important points are implied in this definition. One is that you HAVE specifications for the programs you’re writing. Too often, we work from a fuzzy notion of what we’re trying to do. This fuzzy image becomes refined over time, but if you’ve been writing code during that time, you’ll probably find that much of it has to be changed or thrown out. Wouldn’t you rather think the problem through in detail up front, and only code it once? I’ve discovered through years of practice that this approach results in a much better product than if I just banged out the code on the fly.

Another implication is that “software” includes more than executable code. The “deliverables” from a software development project also include the written specifications, system designs, test plans, source code documentation, and user manuals. Specifications and designs might include narrative descriptions of the program requirements and structure, graphical models of the system (such as data flow diagrams), and process specs for the modules in your system.

Software quality impacts these other system deliverables just as it affects the source code. The quality of documentation is particularly important. Have you ever tried to change someone else’s code without being able to understand his mindset at the time he wrote it? Detailed documentation about the parts of the software system, the logic behind them, and how they fit together is extremely important. But erroneous documentation is worse than nothing at all, since it can lead you down a blind alley. Any time the documentation and source code don’t agree, which do you believe?

There’s a compelling economic incentive for building quality into software. The true cost of a software development project is the base cost (what you spend to build the system initially) PLUS the rework cost (what you spend to fix the errors in the system). The rework cost rarely is figured into either the time or money budgets, with the consequence that many projects cost much more to complete than expected and soak up still more money as work is done to make the system truly conform to the specifications. In too many cases, the project is delivered too late to be useful, or not at all.

Software Quality Assurance

Software quality assurance, or SQA, is the subfield of software engineering devoted to seeing that the deliverables from a development project meet acceptable standards of completeness and quality. The overall goal of SQA is to lower the cost of fixing problems by detecting errors early in the development cycle. And if your SQA efforts prevent some errors from sneaking into your code in the first place, so much the better. SQA is a watchdog function looking over the other activities involved in software development.

Here are some important SQA thoughts. First, you can’t test quality into a product; you have to build it in. Testing can only reveal the presence of defects in the product. Second, software quality assurance is not a task that’s performed at one particular stage of the development life cycle, and most emphatically not at the very end. Rather, SQA permeates the entire development process, as we’ll see shortly. Third, SQA is best performed by people not directly involved in the development effort. The responsibility of the SQA effort is to the customer, to make sure that the best possible product is delivered, rather than to the software developers or their management. SQA won’t succeed if it just tells the managers what they want to hear.

Testing is certainly a big part of SQA, but by no means the only part. Testing, of course, is the process of executing a computer program with the specific intention of finding errors in it. It’s nearly impossible to prove that a program is correct, so instead we do our best to make it fail. Unfortunately, most of us perform testing quite casually, without a real plan and without keeping any records of how the tests went.

Proper software testing requires a plan, or test script. It includes documentation, sample input datasets, and records of test results. Instead of being informal and ad hoc, good software testing is a systematic, reproducible effort with well-defined expectations. We’ll talk more about good testing strategies later on.

Now let’s look at some goals of SQA for the various stages of structured software development. No matter what software development life cycle model you follow, you’ll always have to contend with requirements analysis, system specification, system design, code implementation, testing, and maintenance, so these SQA goals are almost universally applicable. For one-man projects, much of the formality of these stated SQA goals is not needed. Instead, try to discipline yourself enough to meet the most important aspects of the goals, while still having fun writing the programs.

Requirements Analysis

• Ensure that the system requested by the customer is feasible (many large projects have a separate feasibility study phase even before gathering formal requirements).

• Ensure that the requirements specified by the customer will in fact satisfy his real needs, by recognizing requirements that are mutually incompatible, inconsistent, ambiguous, or unnecessary. Sometimes needs can be addressed in better ways than those the user is requesting.

• Give the customer a good idea of what kind of software system will actually be built to address his stated requirements. Simple prototypes often are useful for this.

• Avoid misunderstandings between developers and customers, which can lead to untold grief and hassles farther down the road.

Software Specifications

• Ensure that the specifications are consistent with the system requirements, by setting up a requirements traceability document. This document lists the various requirements, and then tracks how they are addressed in the written specs, the system design (which data flow process addresses the requirement), and the code (which function or subroutine satisfies the requirement). A sample fragment appears after this list.

• Ensure that specifications have been supplied for system flexibility, maintainability, and performance where appropriate.

• Ensure that a testing strategy has been established.

• Ensure that a realistic development schedule has been established, including scheduled reviews.

• Ensure that a formal change procedure has been devised for the system. Uncontrolled changes, resulting in many sequential (or even concurrent) versions of a system, can really contribute to quality degradation.
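
As a hypothetical fragment of such a traceability document (the requirement numbers, spec sections, and module names below are invented purely for illustration), each requirement is carried forward into the spec, the design, and the code:

Requirement                    Spec section   Design process        Code module
R-4  Validate part numbers     3.2            P2.1 Check Input      ValidatePartNumber()
R-7  Print monthly summary     3.6            P4.3 Format Report    PrintMonthlySummary()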

Design

• Ensure that standards have been established for depicting designs (such as data flow diagram models for process-oriented systems, or entity-relationship models for data-oriented systems), and that the standards are being followed.

• Ensure that changes made to the designs are properly controlled and documented.

• Ensure that coding doesn’t begin until the system design components have been approved according to agreed-upon criteria. Of course, we all do some coding before we really “should”; so long as you think of it as “prototyping”, that’s fine. Just don’t get carried away prematurely.

• Ensure that design reviews proceed as scheduled.

Coding

• Ensure that the code follows established standards of style, structure, and documentation. Even though languages like C let you be super-compact and hence super-obscure in your coding, don’t forget that a human being (maybe even you) may have to work with that code again some day. Clarity of code is usually preferred over conciseness (see the short sketch after this list).

• Ensure that the code is being properly tested and integrated, and that revisions made in coded modules are properly identified.

• See that code writing is following the stated schedule. It probably won’t be; the customer is entitled to know this and to know the impact on delivery time.

• Ensure that code reviews are being held as scheduled.
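
To make the clarity-over-conciseness point concrete, here’s a minimal sketch in C; the record type, the ERR code, and both function names are invented for the example. The two versions behave identically, but only one of them can be reviewed and maintained at a glance:

typedef struct { int status; } rec_t;   /* hypothetical record type */
#define ERR 1                           /* hypothetical error status code */

/* Terse version: perfectly legal C, but the reader has to decode it. */
int n_err(rec_t *r, int n)
{
    int c = 0, i;
    for (i = 0; i < n; c += (r[i++].status == ERR))
        ;
    return c;
}

/* Clearer version: identical behavior, and the intent is obvious. */
int CountErrorRecords(const rec_t records[], int recordCount)
{
    int errorCount = 0;
    int i;

    for (i = 0; i < recordCount; i++) {
        if (records[i].status == ERR)
            errorCount++;
    }
    return errorCount;
}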

Testing

• Ensure that test plans have been created and that they are being followed. This includes a library of test data, driver programs to run through the test data, and documentation of the results of each formal test that has been performed.

• Ensure that the test plans that are created do in fact address all of the system specifications.

• Ensure that, after testing and reworking, the software does indeed conform to the specifications.

Maintenance

• Ensure consistency of code and documentation. This is quite difficult; we tend to not update documentation when we change the programs. However, such an oversight can create nightmares the next time a change has to be made. Can you trust the docs, or not?

• Ensure that the established change control process is being observed, including procedures for integrating changes into the production version of the software (configuration control).

• Ensure that changes made in the code follow the coding standard, are reviewed, and do not cause the overall code structure to deteriorate.

By now you’re saying, “Yeah, right. No way, man. Can’t be done.” To be honest, most professional software engineers DON’T follow nearly this stringent a quality plan. Unfortunately, the result of having no SQA plan at all often is software of low, perhaps unacceptable, quality. If you can incorporate even a few of these SQA goals into your own development activity, you should see some real improvements. I certainly have.

Making SQA Happen

There are three principal SQA activities:

1. The creation and enforcement of standards, practices, and conventions.

2. Conducting and keeping records on project reviews.

3. Independent test case design and execution (independent of the guy writing the code, that is).

I’ll assume that you’re already practitioners of standard methods for high-quality software system design. By this I mean that you’ve selected methods and tools that suit your needs for structured analysis and design, such as data flow diagrams, data dictionaries, and process specifications. And of course I assume that you all practice good structured programming techniques. We’ll spend some time on software testing methods later in this article. Now let’s look at the powerful tool of reviews, or walkthroughs.

Reviews

Many software engineering authorities believe that the single most effective SQA activity is to hold structured reviews or walkthroughs at key stages of the development process. The review process involves collecting several pairs of skilled eyes, besides those of the developer, into a room and scrutinizing the deliverables from a particular phase of system development. The purpose of the review is to find errors. It has the additional benefit of educating your associates about your project, which can be valuable if the members of a software development group ever have to do maintenance on systems they didn’t build (the usual real-world case).

I like to schedule reviews or walkthroughs at these points during development: after the requirements specification has been completed; after overview design is complete (usually a data flow model); after detail design is complete (process specifications for individual modules); after the test plan is written; and after coding has been done. I always include the primary customer representative (“project champion”) in the requirements and overview design reviews.

The specific topics covered during an SQA review depend on the nature of the deliverables produced. For example, you’ll do things a little differently if your specification document is a written narrative or if it includes a data flow model of the problem being addressed. Reviews of specifications and overview designs should examine data flow diagrams and data models for consistency, proper balancing between levels, and accuracy and completeness of the data dictionary. During the detail design review, structure charts and process specs are examined for errors and inconsistencies. Along the way, make sure that all of the requirements from the specification phase are being satisfied.

A code review checks for the presence of all the features described in the process specification for the module being studied. Again, you should look for inconsistencies with the data dictionary and other design documents. The code should be checked for quality of structure, documentation, completeness, complexity, and clarity. Look for the characteristics of well-structured modules, with high cohesion (performing a single task) and low coupling to other modules.
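
To make the cohesion and coupling criteria a little more tangible, here is a hedged sketch in C (the routine names and the global variable are invented, not taken from any real project). A reviewer would flag the first routine and be happy with the second:

/* Low cohesion, high coupling: one routine parses, validates, computes,
   and prints, and it reaches out to a global that never appears in its
   parameter list. */
double gTaxRate;                  /* hidden coupling through a global */

void ProcessOrderLine(char *line)
{
    /* ...parse the line, validate the fields, apply gTaxRate,
       print a report line, update global counters... */
}

/* High cohesion, low coupling: one task, and everything the routine
   needs arrives through its parameters. */
double ComputeOrderTotal(double subtotal, double taxRate)
{
    return subtotal * (1.0 + taxRate);
}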

Each step of the review should check all deliverables against the formal standards you agreed upon at the outset of the project. This includes system and code-level documentation. It doesn’t matter so much exactly what standards you’ve selected, so long as your development products adhere to them. Consistency is much more important than dogma here.

Test Case Design

Let’s turn our attention to software testing techniques. Testing is the second level of error control: detection. Don’t confuse testing with debugging, which is the third step in error control. The goal is to find the problems, not necessarily to resolve them at this time.

Historically, testing has occupied the greatest fraction of the time and effort associated with software development. A big reason for this is the lack of attention traditionally paid to system specification and design. Of course, you could test any program until doomsday and still not be completely sure that it will work right 100% of the time. But if we use thorough, structured test procedures, we can have more confidence that we’ve eradicated most of the bugs from the final product.

The testing process begins with development of a testing plan. In principle, this can be done by either the software developer (you), or by an independent testing person or group. The difference is that the developer has a conflict of interest: he wants to demonstrate that the program works properly, while the independent tester wants to find the flaws in the software. The best compromise is to have the developer handle unit testing and have somebody else address integration and acceptance testing. We’ll talk more about these different test phases shortly.

The bulk of the test script should be written prior to the coding phase. This may seem silly to you, but I’ve found it to work very well. One of the hardest aspects of software development is for the system designer to share the customer’s vision of what the end product will be and do. The person who writes the test script has to visualize how the software is to behave whenever a particular set of inputs is received or a particular action specified. This rigorous thought process helps greatly to uncover any fuzziness or errors in the design plan. It also helps to catch any oversights in going from system specification to system design. Once when I was writing a test plan, I couldn’t remember writing the process narratives that would implement two functions I needed to test. Sure enough, I had overlooked those requirements. The test plan helped me catch my errors and correct them very early in the system development effort.

Your test script should include a bunch of individual tests that specify particular input values and action selections, and the expected results of the program execution under those conditions. Specify values for inputs that cover these situations: the smallest and largest allowable numbers; missing entries; values out of the legal range; random legitimate values; and incorrect data types. Design test paths that will ensure that each statement in the code is executed at least once. Make sure that your error-handling routines work right, and that your validation of input data catches all the sorts of errors that might realistically be encountered.
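
As a small sketch of what those cases can look like in practice, here is a C test fragment for a hypothetical routine ValidateAge() (both the routine and its legal range are invented for illustration). Each entry pairs an input with the result the specification calls for:

#include <stdio.h>

/* Hypothetical routine under test: returns 1 if age is acceptable
   (0 through 120 inclusive), 0 otherwise. */
static int ValidateAge(int age)
{
    return (age >= 0 && age <= 120);
}

int main(void)
{
    /* Smallest and largest legal values, values just outside the range,
       and a typical legitimate value. */
    struct { int input; int expected; } tests[] = {
        {   0, 1 },    /* smallest allowable value   */
        { 120, 1 },    /* largest allowable value    */
        {  -1, 0 },    /* just below the legal range */
        { 121, 0 },    /* just above the legal range */
        {  35, 1 }     /* random legitimate value    */
    };
    int i, failures = 0;

    for (i = 0; i < (int)(sizeof(tests) / sizeof(tests[0])); i++) {
        int actual = ValidateAge(tests[i].input);
        if (actual != tests[i].expected) {
            printf("FAIL: input %d gave %d, expected %d\n",
                   tests[i].input, actual, tests[i].expected);
            failures++;
        }
    }
    printf("%d test(s) failed\n", failures);
    return failures;
}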

Similarly, devise tests for logical control flow constructs (IF/ELSE IF/ELSE, CASE/SELECT, etc.) to make sure that each conditional branching statement is executed at least once in every possible direction. You can’t possibly cover every possible combination of paths through even a very small program, but you should convince yourself that control is transferred properly in each individual branching statement. Loops should be tested with conditions that will produce 0, 1, and the maximum possible number of iterations, as well as a typical number of iterations in between the extremes.
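
The loop advice can be sketched the same way; the summing routine below is invented for the example, and the cases drive its loop through zero, one, a typical number, and the maximum number of iterations:

#include <stdio.h>

#define MAX_ITEMS 100

/* Hypothetical routine under test: sums the first count items. */
static long SumItems(const int items[], int count)
{
    long total = 0;
    int i;

    for (i = 0; i < count; i++)
        total += items[i];
    return total;
}

int main(void)
{
    int data[MAX_ITEMS];
    int i;

    for (i = 0; i < MAX_ITEMS; i++)
        data[i] = 1;              /* every item contributes exactly 1 */

    /* Zero, one, a typical number, and the maximum number of iterations. */
    printf("0 iterations:  %s\n", SumItems(data, 0) == 0 ? "ok" : "FAIL");
    printf("1 iteration:   %s\n", SumItems(data, 1) == 1 ? "ok" : "FAIL");
    printf("typical (10):  %s\n", SumItems(data, 10) == 10 ? "ok" : "FAIL");
    printf("maximum (%d): %s\n", MAX_ITEMS,
           SumItems(data, MAX_ITEMS) == MAX_ITEMS ? "ok" : "FAIL");
    return 0;
}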

The fundamental purpose of writing a test plan is to be able to reproducibly run your software through a wringer and see how it performs. The task of executing all these tests is simplified if you can build a library of test data files and sample output for comparison. Anything you can do to automate the testing process will save time and avoid human errors. This can be difficult with programs having a strong emphasis on user interface. One possibility is to use a tool to capture keystrokes and mouse clicks and build a library of macros that simulate specific sequences of human activities.
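
One hedged sketch of that kind of automation, assuming you can redirect the program’s output to a file (the file names here are placeholders for entries in your test data library): compare the captured output against a stored reference copy, line by line.

#include <stdio.h>
#include <string.h>

/* Compare a test run's output file against a stored reference file.
   Returns 0 if they match line for line, nonzero otherwise. */
int CompareToReference(const char *actualPath, const char *expectedPath)
{
    FILE *actual = fopen(actualPath, "r");
    FILE *expected = fopen(expectedPath, "r");
    char actualLine[512], expectedLine[512];
    int line = 0, mismatches = 0;

    if (actual == NULL || expected == NULL) {
        printf("could not open test output or reference file\n");
        if (actual)   fclose(actual);
        if (expected) fclose(expected);
        return 1;
    }
    for (;;) {
        char *a = fgets(actualLine, sizeof actualLine, actual);
        char *e = fgets(expectedLine, sizeof expectedLine, expected);
        line++;
        if (a == NULL && e == NULL)
            break;                /* both files ended together: they match */
        if (a == NULL || e == NULL || strcmp(actualLine, expectedLine) != 0) {
            printf("mismatch at line %d\n", line);
            mismatches++;
            break;
        }
    }
    fclose(actual);
    fclose(expected);
    return mismatches;
}

int main(void)
{
    /* "test1.out" and "test1.expected" stand in for one entry in the
       test data library described above. */
    return CompareToReference("test1.out", "test1.expected");
}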

Testing Strategies

We can think of two different aspects to testing a piece of software. First, does the software properly perform its intended functions? And second, is the structure of the code free of syntax and logic errors? This is the dichotomy of “functional” versus “structural” testing. The customer is concerned with the first case; he doesn’t care what the code looks like, so long as it does the job. He thinks of the program as a black box: he supplies the inputs, and by some magical process he gets the desired outputs back. The developer is concerned about the structure and flow of his code; to test it properly, he must study the code and devise tests in accordance with the way the program is written. These two aspects of software testing are called “black box” and “white box” testing, respectively.

White box testing typically is performed by the developer. It is a unit- or module-level testing process, in which tests are devised to evaluate the program logic and internal control flow of each individual subroutine or function in the system. Black box testing is performed by the independent testers we discussed earlier, as well as by the customers. Black box testing focuses on the proper handling of supplied inputs to generate the expected outputs.

If you review the topics that I suggested your test script should include, you’ll see a mix of structural and functional aspects described. The customer is the ultimate judge of whether the software meets his stated requirements. But it’s the developer’s responsibility to guarantee that the code is properly written so as to trap errors, handle both valid and invalid input data, and transfer control properly.

System Integration and Integration Testing

Let’s assume you’re working on a software project with a few dozen separate modules. Your unit-level testing has convinced you that they are all well-behaved and properly coded. Your next problem is to assemble all these modules together into the final system, in accordance with the program architecture you devised during system design.

There are basically two ways to approach the process of system integration and integration testing. You can pull the modules all together in one fell swoop, cross your fingers, and offer sacrifices to the compiler gods. This is called “big-bang” integration, and it’s almost guaranteed to fail. Or, you can use an incremental approach of joining modules into small clusters and testing the clusters as you go. This technique is superior, and it will cost you much less money for aspirin.

The incremental integration approach can use either a top-down or a bottom-up method, although in practice a combined “sandwich” method often works best. With a top-down strategy, you begin with the highest-level control modules and add one lower-level module at a time. After each module is added, you perform appropriate tests to see that the fusion went properly. The most likely source of errors is in the data interface between each pair of modules. The bottom-up technique begins by clustering your lowest-level modules (where the real work gets done) together, testing the clusters for proper behavior, and working your way up until the whole functional processing cluster is joined to the high-level control modules.

The problem with any integration strategy is that you need to simulate the presence of certain modules before you’ve actually incorporated them. With top-down integration, you must simulate the presence of low-level modules by using “stub” modules. These stubs are replaced one at a time by their actual counterparts. The stubs must have a certain amount of functionality built in so that they can fake out the higher-level modules into thinking that they are the real thing. Stubs can show simple trace messages to indicate when control is properly passed into the lower-level module, and they can display any passed parameters to demonstrate whether the data interface is functioning correctly. Stubs may also have to return some data, either real or simulated, to the calling module to allow execution of the cluster you’re now testing to continue.
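
A stub in that spirit might look like the sketch below; the module name, record type, and canned values are invented for the example. It announces that it was called, echoes the parameter it received, and hands back simulated data so the higher-level modules can keep running:

#include <stdio.h>

typedef struct {
    int  id;
    long balance;
} AccountRecord;          /* hypothetical record passed across the interface */

/* Stub standing in for the real ReadAccountRecord module during top-down
   integration. It traces the call, displays the parameter it received,
   and returns canned data to its caller. */
int ReadAccountRecord(int accountID, AccountRecord *record)
{
    printf("STUB: ReadAccountRecord called with accountID = %d\n", accountID);

    record->id = accountID;   /* simulated values, not real data */
    record->balance = 10000;
    return 0;                 /* pretend the read succeeded */
}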

Similarly, the clusters you’re testing in a bottom-up approach aren’t designed to function without the higher-level modules attached, so you have to use temporary “driver” modules to get the program to execute. The drivers can simply invoke the top-level module in the cluster you’re now testing, or they can pass in some parameters both to test the data interface and to allow the cluster to do its thing and return some results to the driver.
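
A matching driver for bottom-up testing can be just as plain; in this sketch the cluster’s top-level routine and its arguments are hypothetical, and the real routine would be linked in from the modules under test. The driver feeds the cluster some test parameters and reports what comes back:

#include <stdio.h>

/* Top-level routine of the low-level cluster under test; supplied by the
   modules being integrated, not by the driver. */
extern long ComputePayrollTotal(int employeeCount, long hourlyRateCents);

/* Temporary driver: invokes the cluster with test parameters and displays
   the result so the data interface can be checked by hand (or compared
   against an expected-results file). */
int main(void)
{
    long total = ComputePayrollTotal(10, 1500);

    printf("DRIVER: ComputePayrollTotal(10, 1500) returned %ld\n", total);
    return 0;
}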

Neither top-down nor bottom-up integration is perfect, so you may want to select some combination strategy for your individual projects. The method you choose may be based on where you expect to have the most problems. If control is complex and processing simple, top-down offers the advantage of testing the control modules first. If the data interface between your system and the outside environment is a big issue, bottom-up may be the best approach. In general, drivers are more complex than stubs, so you may have to do a little more work if you use a straight bottom-up method.

Testing isn’t complete just because you think you’ve integrated all your modules properly. It’s very important to perform an overall system test to look for additional errors that didn’t crop up during the unit testing or integration process. Also, acceptance testing by the users is essential to see if the final product does in fact match the stated requirements. For more discussion about these and other aspects of software testing, see the books listed in the references.

The Bottom Line

The overall message from this article is that software quality is a vitally important issue in modern software development, and that there are well-defined methods for assessing and improving the quality of the programs you write. I’ve used these methods, and I believe in them. To my way of thinking, SQA efforts are every bit as important as any other aspect of the software engineering process. If you take away only one thought from this article, please take this one: Quality cannot be tested into software; it can only be designed in.

References

1. Roger S. Pressman, Software Engineering: A Practitioner’s Approach, 2nd Edition, McGraw-Hill, 1987.

2. Boris Beizer, Software Testing Techniques, Van Nostrand Reinhold, 1983.

3. Boris Beizer, Software System Testing and Quality Assurance, Van Nostrand Reinhold, 1984.

4. Glenford J. Myers, Software Reliability: Principles and Practices, John Wiley & Sons, 1976.

 