How to optimise your Windows swap file

Our first problem is just how to test and benchmark the swap file. We want to test a whole range of situations, from systems with 1GB of memory right up to 6GB.

That means a 64-bit Vista installation to keep everything consistent and, as we need a low-memory situation, we'll base it on a system with just 1GB of memory.

To benchmark this we'll need something that requires a large block of memory and actively manipulates that data. To do that we run a benchmark that creates a large image in memory and performs various manipulations on it, from simply scrolling it to rotating it.

We time how long each stage takes to complete, giving results for the image write, the read and the rotation, measured in seconds or MB/s depending on the operation.
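To give a flavour of what this involves, here's a minimal sketch in Python along the same lines. It isn't our actual test code, and the image dimensions are purely illustrative figures picked so the data comfortably exceeds the 1GB of physical memory and forces Windows out to the swap file.

```python
import time
import numpy as np

# Hypothetical image size: roughly 3GB of 8-bit greyscale pixels, well
# beyond the 1GB of physical RAM, so Windows has to page heavily.
HEIGHT, WIDTH = 98304, 32768
SIZE_MB = HEIGHT * WIDTH / (1024 * 1024)

def timed(label, work):
    """Run one stage of the benchmark and report seconds and MB/s."""
    start = time.time()
    work()
    elapsed = time.time() - start
    print(f"{label}: {elapsed:.1f}s ({SIZE_MB / elapsed:.1f} MB/s)")

image = np.empty((HEIGHT, WIDTH), dtype=np.uint8)

# "Write" stage: touch every page of the image.
timed("image write", lambda: image.fill(0xAA))

# "Read" stage: touch every page again.
timed("image read", lambda: image.max())

# "Scroll" stage: shift the whole image by one row, which reads and
# rewrites every page and keeps the swap file busy.
timed("scroll", lambda: np.roll(image, 1, axis=0))

# "Rotate" stage: a 90-degree rotation, copied so the work really happens.
timed("rotate", lambda: np.rot90(image).copy())
```

The absolute numbers don't matter much; what we're interested in is how they change as the page file configuration changes underneath them.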

You might have spotted something here: isn't this just a drive-speed test? Well, yes and no. We are testing the speed of the swap file; it just so happens that this is dependent on the speed of the medium the swap file is stored on. Even so, how the swap file is stored plays just as much of a part as what it's stored on. To the test chamber!

To begin we need a control, which is going to be a standard Windows Vista 64-bit installation with a managed page file, all running from the same partition on a Hitachi 250GB Deskstar drive. We'll just install the OS, update the drivers and let Windows manage the page file in whatever manner it sees fit.
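If you want to see how your own page file is currently set up, the configuration lives in a single registry value. Here's a rough sketch of reading it out with Python's standard winreg module; in our experience a system-managed file shows up with '0 0' for the sizes, or as '?:\pagefile.sys' when automatic management for all drives is ticked.

```python
import winreg

# The page file configuration lives under this key on Vista.
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

# PagingFiles is a multi-string, one entry per page file, in the form
# "path initial_MB maximum_MB"; zero sizes mean Windows manages it.
for entry in paging_files:
    print(entry)
```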

With just 1GB of memory the system creates a 1.3GB page file and at rest is using around 660MB of that, with little actually running. Checking up on that file, we find it's already split into two fragments.
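To take those readings yourself, Windows exposes the page file's size and usage through WMI; the sketch below simply asks the stock wmic tool for the Win32_PageFileUsage figures (reported in MB). As for the fragment count, a file-level tool such as Sysinternals' Contig can analyse an individual file and report how many pieces it's in.

```python
import subprocess

# Snapshot the page file's current size and usage via WMI (values in MB).
output = subprocess.check_output(
    ["wmic", "pagefile", "get",
     "Name,AllocatedBaseSize,CurrentUsage,PeakUsage"],
    text=True,
)
print(output)
```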

Under benchmark conditions the page file increases to 3.2GB and has now fragmented into five sections. Remember, this is a clean installation on a 250GB drive, so there are hundreds of gigabytes of empty space to use. You can imagine how, on a real-life drive, this fragmentation could become far worse over time as the available free space becomes limited.

This is the first of the conventional 'spinning disk' tests, which on the whole represent what we'd expect the vast majority of people to be running. The next four tests are all based on different variations on single-drive and twin-drive configurations.

The obvious first test is to run a user-defined, fixed-size page file. This is the option that tends to be most favoured, and the thinking behind it makes a lot of sense: creating a fixed page file when Windows is first installed eliminates the chance of fragmentation, so it provides an optimal single continuous file. If you make this large enough then, apart from losing a few gigabytes of drive space, there's no real downside.
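The change itself is the same one the Virtual Memory dialog makes; as a rough sketch, it boils down to writing a fixed initial and maximum size into the PagingFiles registry value. Administrator rights and a reboot are required, and the 4,096MB figure here is just an example, so size it to suit your own workload.

```python
import winreg

# Example only: a fixed 4GB page file on C:. Run as administrator and
# reboot for the change to take effect; the Virtual Memory dialog is the
# safer route if you'd rather not edit the registry directly.
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
ENTRY = r"C:\pagefile.sys 4096 4096"   # path, initial MB, maximum MB

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, [ENTRY])
```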

Our next scenario runs along the same lines, but with the page file stored on its own partition on the same drive. This has been suggested as an optimal solution, but it has also been pointed out that it unnecessarily encourages excessive drive head thrashing, as it ends up with the page file physically removed from the working data.

This certainly rings true for us. In certain situations, such as large sequential writes and reads to the page file, we'd imagine it would be fine, but in reality, particularly when multitasking, we'd expect this to be less the case. Unfortunately, this is one area our benchmark won't test very strongly, but we can see how it performs under our heavy single memory load.

Scenario four is running the page file off a secondary SATA hard drive. Before you get up in arms, it's a similar 250GB 7,200rpm Hitachi Deskstar made within six months of the other drive, so raw performance should be very similar.

Otherwise we're running a fixed user-defined page file as per the other scenarios. We'd expect performance here to be among the best of the 'spinning disk' tests, as the dedicated SATA channel separates page file access from system access to the drive and so helps eliminate drive thrashing.

This leads on to the final scenario, which was added more as an intellectual exercise. We were interested to see whether running two page files over two drives enables Windows to perform any sort of intelligent caching or even RAID-style spanning. If it balances storage against spare access capacity, that could offer some benefits, and the Microsoft Knowledge Base article seems to allude to that sort of practice.
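Configuration-wise, the two-drive setup is nothing exotic: it's simply two entries in the same PagingFiles value, one per drive, as in the sketch below (again, the sizes are illustrative, and a reboot is needed). Whether Windows actually balances its paging across the two files is exactly what this final test sets out to discover.

```python
import winreg

# Two fixed page files, one on each physical drive (example sizes only).
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
ENTRIES = [r"C:\pagefile.sys 2048 2048",
           r"D:\pagefile.sys 2048 2048"]

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, ENTRIES)
```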