Appendix A: Benchmarking Methodology

Introduction

The purpose of this appendix is to outline the basic parameters for how MacTech Magazine performed benchmarking tests on VMware Fusion and Parallels Desktop to evaluate the performance of virtual machines running Windows XP and Windows 7.

Consistency

Since the tests involved both multiple machines and multiple pieces of software, the focus was on creating as much consistency across the tests as possible. MacTech accomplished this in several ways.

First, each set of tests was performed by a single MacTech staff member to eliminate the natural inconsistencies that often occur across individuals.

All of the tests were performed on the same version of the Mac operating system across the different hardware. At the time of the tests, this was Mac OS X 10.6.5, with the most up-to-date Apple patches applied through "Software Update" in Mac OS X.

All of the tests were done on "virgin" systems, i.e., freshly wiped hard disks with fresh Mac OS X, virtualization, Windows, and Microsoft Office installations, and no third-party software installed beyond the standard Mac OS X install. Furthermore, care was taken to make sure the virtual hard drives were located in a similar position on the physical hard drive.

All of the tests were performed with the most up-to-date set of required patches for Microsoft Windows and Office, as prescribed by Microsoft's automatic updates, including service packs.

Avoiding Interactions

While the tests covered a variety of applications, all tests (where appropriate) were performed with only that single application open. In other words, to the extent possible, no other applications were running, excluding the background and OS tasks that are part of a standard install of either OS or of Microsoft Office.

To avoid issues with a noisy network, the test machines were installed on what was considered a "quiet" network with minimal traffic. MacTech monitored network use to make sure that each machine had network access but was not impacted by network traffic.

Measurements, Testing and Outliers

For timed tests with results under 60 seconds, tests were measured to within 1/100th of a second. For those over 60 seconds, tests were measured to within a second.
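To illustrate that reporting convention, here is a minimal Python sketch; the helper name and output format are our own illustration, not part of MacTech's actual tooling:

def format_result(seconds):
    """Report a timing using the precision rules above: hundredths
    of a second under 60 seconds, whole seconds at or above 60."""
    if seconds < 60:
        return f"{seconds:.2f} s"
    return f"{round(seconds)} s"

print(format_result(12.3456))  # "12.35 s"
print(format_result(95.7))     # "96 s"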

Most tests were performed at least three times per machine configuration, and often five or more times depending on the test. Outliers indicating a testing anomaly were retested as appropriate.

In most cases, the tester used successive runs rather than first ("cold") runs to better emulate typical daily use.
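For readers who want to reproduce a similar protocol in script form, the following Python sketch captures the approach described above: timing successive runs after discarding the first "cold" run, and flagging outliers for retesting. The run count and outlier threshold are illustrative assumptions, not MacTech's exact parameters:

import statistics
import time

def run_benchmark(test_fn, runs=5):
    """Time successive runs of a test, discarding the first ("cold")
    run so results better reflect warmed-up daily use."""
    timings = []
    for i in range(runs + 1):  # one extra run; the first is discarded
        start = time.perf_counter()
        test_fn()
        elapsed = time.perf_counter() - start
        if i > 0:
            timings.append(elapsed)
    median = statistics.median(timings)
    # Flag runs far from the median as possible anomalies to retest.
    outliers = [t for t in timings if abs(t - median) > 0.5 * median]
    return timings, median, outliers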

Tests that could be impacted by the size of the window were performed with the same window size and screen resolution under all scenarios.

Some tests were eliminated because the machines simply performed too quickly to get an accurate measurement. For example, sending or sorting emails always completed faster than the tester could measure.

Appendix B: Testing Results

To keep the results fully open, MacTech is making the test data available in the form of an Excel spreadsheet. Simply drop us an email and we'll be happy to provide it to you.