PCMark 8 is a benchmarking program for Windows PCs that includes a range of tests designed around common user scenarios. Each test gives a score, which you can use to compare different PCs, and detailed results to get a deeper understanding of system performance.

PCMark 8 Professional Edition is the only version licensed for business and commercial use. It offers additional Extended Storage tests, command line automation, the ability to export results as XML and PDF files, and priority customer support. PCMark 8 Professional Edition costs £1,005 (or $1,495). Site license options are also available.

PCMark 8 Advanced Edition for home users includes all five tests, battery life testing, custom testing options, in-depth hardware monitoring graphs, and the ability to save your results offline. It costs £29.99 (or $49.95).

PCMark 8 Basic Edition is free, but only includes the Home, Work and Creative tests. All three editions are available from the Futuremark website.

We spoke to Futuremark about the program, how PCMark 8 differs from other benchmark suites, and how benchmarks might evolve in the future.

TechRadar Pro: What are PCMark 8's components?

Futuremark: PCMark 8 includes five benchmark tests. The Home, Work and Creative benchmarks use workloads that reflect typical PC use in the home, the office, and for a selection of more demanding creative, entertainment and media tasks.

The Applications benchmark measures system performance using popular programs from the Adobe Creative Suite and Microsoft Office. The Storage benchmark is a dedicated test for measuring and comparing the performance of SSDs and HDDs. The tests are explained in detail in the PCMark 8 Technical Guide.

The PCMark 8 Home, Work, Creative, and Applications benchmarks can also be used to test the battery life of laptops, notebooks and tablets.

TRP: How does it differ from other benchmark programs on the market?

FM: PCMark 8 benchmarks show the real-world differences between systems by measuring performance for common home and office tasks. Futuremark believes this approach is more useful to end users than synthetic component tests whose results may only be of practical use to engineers and other industry insiders.

TRP: What does Futuremark consider to be best practice when it comes to the benchmarking process?

FM: To get accurate and consistent benchmark results you should test clean systems without third-party software installed. If this is not possible, you should close as many background tasks as possible, especially automatic updates or tasks that feature pop-up alerts such as email and messaging programs. We recommend the following routine:

1. Install all critical updates to ensure your operating system is up to date.

2. Install the latest WHQL approved drivers for your hardware.

3. Restart the computer or device.

4. Wait two minutes for startup to complete.

5. Exit all other programs, especially those that run in the background or in the taskbar.

6. Wait for 15 minutes.

7. Run the benchmark.

8. Repeat from step 3 at least three times to verify your results.
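Step 8 matters because any single run can be skewed by stray background activity; only repeated runs show whether a score is stable. A minimal sketch of that verification in Python (the scores and the 3% tolerance below are illustrative assumptions, not Futuremark figures):

```python
from statistics import median

def verify_scores(scores, tolerance=0.03):
    """Check that repeated benchmark runs agree within a relative tolerance.

    scores: benchmark scores from at least three identical runs.
    tolerance: maximum allowed spread relative to the median score
    (3% is an illustrative threshold, not a Futuremark recommendation).
    Returns the median score and whether the runs are consistent.
    """
    if len(scores) < 3:
        raise ValueError("Run the benchmark at least three times")
    mid = median(scores)
    # Spread of best-to-worst run, expressed as a fraction of the median.
    spread = (max(scores) - min(scores)) / mid
    return mid, spread <= tolerance

# Example: three hypothetical PCMark 8 Home scores from the same machine.
score, consistent = verify_scores([3510, 3495, 3522])
```

If the runs disagree by more than the tolerance, something on the system is interfering and the checklist above should be repeated from step 3.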

TRP: What are the challenges facing the benchmarking industry?

FM: Without a doubt, the biggest challenge for benchmarking is the change driven by mobile devices. It can be hard to create benchmark tests that scale from smartphones and tablets to desktop PCs and dedicated workstations.

A test that highlights differences between smartphones may not be relevant when comparing desktops. A test for desktops may be too heavy for a tablet. The challenge is to create useful benchmarks that help people compare performance, not only across all the different form factors, but across operating systems too.

The other significant challenge is that measuring performance alone is no longer enough. Battery life, power efficiency and thermal management are important considerations when choosing a new mobile device. Benchmarks must now do more than test the speed of the processor. They must measure the complete experience.

TRP: How do you see benchmarks evolving over the next few years?

FM: Over the next few years the quality and usefulness of mobile benchmarks will increase significantly. The standard will be raised by developers like Futuremark who have the expertise, wide industry connections, and open processes required to create high quality benchmark tests that are accurate, relevant and impartial.

Unfortunately, many of the mobile benchmark apps used today are created by single developers, or small teams, who lack the experience and industry connections needed to design fair and neutral tests. And even well-intentioned benchmarks can fail to present meaningful measures of performance, instead providing synthetic results that are difficult to relate to the differences seen when using real apps.